Executive Order 12866, issued on September 30, 1993, is administered by OIRA and is intended to enhance regulatory planning and coordination with respect to both new and existing regulations. Section 5 of the executive order required agencies to submit to OIRA by December 31, 1993, a program for periodically reviewing their existing significant regulations to determine whether any should be modified or eliminated. According to the executive order, the purpose of the review was to make the agencies’ regulatory programs more effective, less burdensome, or better aligned with the President’s priorities and the principles specified in the order.

There have been several previous requirements that federal agencies review their existing regulations. For example, in 1978, President Carter issued Executive Order 12044, which required agencies to review their existing rules “periodically.” The Regulatory Flexibility Act of 1980 required agencies to publish in the Federal Register a plan for the periodic review of rules that “have or will have a significant economic impact upon a substantial number of small entities.” In 1992, President Bush sent a memorandum to all federal departments and agencies calling for a 90-day moratorium on new proposed or final rules during which agencies were “to evaluate existing regulations and programs and to identify and accelerate action on initiatives that will eliminate any unnecessary regulatory burden or otherwise promote economic growth.”

“It is important to emphasize what the lookback effort is and is not. It is not directed at a simple elimination or expunging of specific regulations from the Code of Federal Regulations. Nor does it envision tinkering with regulatory provisions to consolidate or update provisions. Most of this type of change has already been accomplished, and the additional dividends are unlikely to be significant.
Rather, the lookback provided for in the Executive Order speaks to a fundamental reengineering of entire regulatory systems. . . .” On March 4, 1995, President Clinton sent a memorandum to the heads of departments and agencies describing plans for changing the federal regulatory system because “not all agencies have taken the steps necessary to implement regulatory reform.” Among other things, the President directed each agency to conduct a page-by-page review of all its regulations in force and eliminate or revise those that were outdated or in need of reform. In June 1995, 28 agencies provided reports to the President describing the status of their regulatory reform efforts, often noting the number of pages of federal regulations that would be eliminated or revised. On June 12, 1995, the President announced that the page-by-page review effort had resulted in commitments to eliminate 16,000 pages from the 140,000-page CFR and modify another 31,000 pages either through administrative or legislative means. In a December 1996 report to the President, the OMB Director and the OIRA Administrator said that agencies had made “significant progress toward fulfilling these commitments” but recognized that more work remained to be done. They said that despite the addition of new regulations while regulations were being eliminated, the CFR was about 5,000 pages smaller at the end of the first three quarters of 1996 than it had been a year earlier. The report went on to say that agencies had revised or proposed to revise nearly 20,000 pages of the CFR. A detailed explanation of our scope and methodology is in appendix I. To address our first objective of determining whether agencies’ page elimination totals accounted for pages added, we obtained CFR page elimination and revision totals as of April 30, 1997, from OIRA and interviewed agency officials at HUD, DOT, OSHA, and EPA. 
The officials said that the elimination totals did not include pages added while the eliminations occurred. They also said that their agencies had not been required to count the number of pages added during this period and that it would be extremely difficult for them to identify and provide an accurate count of those additions as part of this review because some of the regulatory actions had taken place early in the initiative. As discussed with your office, in order to gauge the effect of CFR page additions without imposing a major burden on the agencies, we asked the agencies to count the number of pages added for those actions that they believed had increased their sections of the CFR by five or more pages and that occurred during the same periods that they said they had eliminated CFR pages. Three of the four agencies also compared editions of their sections of the CFR near the beginning and end of their page elimination initiatives to determine net page changes and page additions. Using both these estimated page additions and reported eliminations, we calculated the net increase or decrease in each agency’s CFR page totals. (See appendix I for more detail on how the agencies estimated the number of CFR pages added.) To address our second objective of determining whether agencies’ CFR revision actions would reduce regulatory burden, we reviewed descriptions of all 422 such actions in the 4 agencies that appeared in at least 1 edition of the Unified Agenda of Federal Regulatory and Deregulatory Actions between October 1995 and April 1997. We initially reviewed descriptive abstracts for these actions that were included in the Unified Agenda. However, many of the abstracts did not clearly indicate what actions were being proposed. In those cases, we attempted to obtain additional information about the actions from any related proposed or final rules printed in the Federal Register. If more information was still needed, we contacted agency officials. 
We used all of the information available to assess what effect the initiatives were likely to have on regulated entities (e.g., individuals, private companies, state or local governments, or federal agencies other than the issuing agency). We coded each action into one of the following five categories:

(1) substantive burden reduction (e.g., eliminating paperwork and other requirements, giving regulated entities more flexibility in how they can comply with or implement the rule, or lowering compliance costs);
(2) minor burden reduction (e.g., clarifying the language in the CFR to make it easier to read or understand or combining existing sections of the CFR to make the requirements easier to find);
(3) burden increase (e.g., adding reporting requirements, requiring additional training or testing procedures, or expanding the scope of a regulation to new entities);
(4) no burden change (e.g., eliminating obsolete or duplicative regulations, establishing a committee to study an issue, or changing requirements that will primarily affect the agency promulgating the regulation); or
(5) cannot tell (e.g., actions that had multiple parts that could potentially offset each other or were unclear as to their effect on the regulated entities).

Each of the 422 actions was reviewed by several different members of our staff, including those with extensive subject matter expertise, to help ensure validity and consistency of judgment in assessing the impact of the actions on regulated entities. Agency officials were given an opportunity to review and comment on our assessment of all the actions during the assignment, and their comments were taken into consideration in making our final determinations about the actions’ effect on regulatory burden. To address our third objective of determining whether the administration had any mechanisms in place to measure burden reductions as a result of the CFR page elimination and revision initiatives, we interviewed officials at OIRA.
We did not verify the agencies’ CFR page elimination totals or their page addition estimates. The agencies’ estimates of the pages added to the CFR may not include all added pages because, in response to agency concerns about the effort it would take to count all pages, we agreed that the agencies could exclude any action that added less than five pages. Also, although we validated our judgments about the possible effect of the proposed changes by using multiple judges and consulting with knowledgeable members of our staff and agency officials, some of our assessments were based on relatively little information. We did not differentiate between actions in terms of scope of the effort involved (e.g., whether the action would affect many or only a few regulated entities). Finally, we did not render a judgment regarding the wisdom of any of the CFR revision actions, only whether they would affect the burden felt by regulated entities.

We conducted our work at OMB, HUD, DOT, OSHA, and EPA headquarters in Washington, D.C., between February 1997 and September 1997 in accordance with generally accepted government auditing standards. We made a draft of this report available to the Director of OMB, the Secretaries of HUD, Labor, and DOT, and the Administrator of EPA for their review and comment. Their comments are discussed at the end of this letter.

Any analysis of the effect of reductions in the number of pages of regulatory text must initially recognize that one sentence of a regulation can impose more burden than 100 pages of regulations that are administrative in nature. Therefore, the number of pages eliminated from the CFR is, at best, an indirect measure of burden reduction. Nevertheless, the number of CFR pages eliminated is one of the measures that the administration is using to gauge its own efforts. As of April 30, 1997, 15 agencies reported to OMB that they had eliminated 79 percent, or more than 13,000, of the 16,627 pages they had targeted for elimination.
The 4 agencies that we examined reported that they had eliminated a total of 5,532 (85 percent) of the 6,529 pages they had targeted. However, officials at each of those four agencies told us that these page elimination totals did not include the pages that they had added to their parts of the CFR at the same time that pages were being removed. As table 1 shows, after taking into account the 4 agencies’ estimates of the major CFR page additions that were made during the same period that pages were eliminated, the agencies’ CFR sections decreased in size by about 926 pages—about 3 percent of their total CFR pages at the start of their initiatives and about 17 percent of the 5,532-page elimination total that had been reported to OIRA by these agencies. The effect of accounting for pages added to the CFR varied across the four agencies. EPA and DOT estimated they added more pages to the CFR than they removed during their page elimination initiatives. As a result, the size of their CFR sections increased by an estimated 966 and 283 pages, respectively. HUD and OSHA, on the other hand, estimated they deleted more pages than they added during their initiatives, so the size of their CFR sections decreased. Figure 1 depicts the result of the CFR page elimination effort in each agency both before (gross) and after (net) accounting for estimated CFR page additions. Agency officials said there are a number of reasons why pages are added to or kept in the CFR, many of which are beyond the agencies’ control or are beneficial to regulated entities. The officials frequently said that statutory requirements imposed by Congress often drive CFR page additions. For example, an EPA official said that the growth in the number of their CFR pages was primarily driven by statutory requirements to develop new Clean Air Act regulations. 
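The net-change arithmetic described above can be reproduced directly from the figures reported in the text. This is an illustrative sketch only; it uses the combined totals for the four agencies (the per-agency addition estimates for HUD and OSHA are not broken out here):

```python
# Combined figures for HUD, DOT, OSHA, and EPA as reported to OIRA.
reported_eliminations = 5532   # CFR pages reported eliminated
net_decrease = 926             # net shrinkage after counting page additions

# Implied total of estimated page additions during the same period.
implied_additions = reported_eliminations - net_decrease
print(implied_additions)  # 4606

# The net decrease as a share of the reported elimination total
# (the "about 17 percent" figure in the text).
print(round(net_decrease / reported_eliminations * 100))  # 17
```

In other words, for roughly every six pages the four agencies reported eliminating, about five pages were added back, leaving a net reduction of about one-sixth of the reported total.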
A HUD official estimated that the agency added about 18 pages to the CFR in 1996 with the regulations implementing the Community Development Block Grants for Indian Tribes and Alaska Native Villages. According to HUD, the “principal impetus for this rulemaking process was the need to implement various statutory mandates included in Section 105 of the Department of Housing and Urban Development Reform Act (P.L. 101-235) as amended by the National Affordable Housing Act of 1990.” The official also said HUD added more than eight pages to the CFR in 1995 as a result of a rule implementing the Base Closure Community Redevelopment and Homeless Assistance Act of 1994 (P.L. 103-421). DOT officials said that all of the CFR page increases in the Federal Highway Administration and the Federal Transit Administration and the bulk of the increases in other parts of DOT were statutorily mandated. For example, they said that the National Highway Traffic Safety Administration (NHTSA) added 68 pages in response to congressional mandates contained in the Intermodal Surface Transportation Efficiency Act and the American Automobile Labeling Act. They said that DOT’s Office of the Secretary added 18 pages of rules to set out procedures for statutorily mandated alcohol testing of “safety-sensitive” employees. Agency officials also said that pages are sometimes added to the CFR in order to clarify regulatory requirements. For example, DOT officials said that they have added charts and examples to clearly illustrate how regulated entities can comply with their rules. Also, they said that in future regulations, they plan to incorporate question-and-answer formats and checklists to assist regulated entities. Therefore, they said the additional pages actually decrease the burden imposed on those entities. EPA officials pointed out that pages are often added to the CFR that permit, not restrict, actions by other entities. 
For example, they said that pages are added to allow farmers to use new pesticides and expand the use of existing pesticides on food crops. Without those regulations, which establish the allowable levels of pesticide residues in food crops, use of the pesticides would be prohibited. Finally, agency officials said that pages are sometimes not eliminated from the CFR as a result of requests from regulated entities. For example, DOT officials said they had proposed streamlining the procedures regarding marine industry manufacturers’ use of independent laboratories instead of the Coast Guard to inspect lights and fog signal emitters. However, according to the officials, the “project was withdrawn due to substantial issues raised by public comments. . . .” Overall, DOT officials said that CFR page counts are not always an accurate proxy for regulatory burden. For example, they noted that the size of CFR typeface or the format used periodically changes, each of which can have a big impact on the number of CFR pages. They also said that after a rule is published there is usually a period before it goes into effect in which both the old and the new rules are published. Finally, they said that editorial notes are added by the Office of the Federal Register when publishing the CFR, which increases the number of pages.

As figure 2 shows, about 40 percent of the 422 CFR revision actions in the 4 agencies appeared to substantively reduce the burden felt by regulated entities through such actions as eliminating paperwork requirements and providing compliance flexibility. Another 15 percent were minor burden reductions in that they made regulatory requirements easier to find or to understand but did not change the rules’ underlying requirements or scope of applicability. Therefore, taking these two categories together, about 55 percent of the CFR revision actions appeared to reduce the level of regulatory burden to at least some extent.
However, about 8 percent of the actions seemed to increase regulatory burden, and another 27 percent did not appear to affect regulated entities’ burden. We were unable to determine what, if any, impact about 9 percent of the actions would have on regulatory burden. (The numbers do not add to 100 percent due to rounding.) As table 2 shows, there were some differences across the agencies in the degree to which their CFR revision actions appeared to affect regulatory burden. For example, our analysis indicated that about 11 percent of the OSHA actions could be substantive reductions in regulatory burden but that more than 50 percent of the EPA actions appeared to be so. Conversely, nearly 37 percent of the DOT actions did not appear to change regulated entities’ burden compared with about 11 percent at OSHA. The 170 CFR revision actions that appeared to substantively reduce regulatory burden took a number of different forms, including reducing paperwork or other requirements, giving the regulated entities flexibility in how to comply with or implement the regulations, lowering compliance costs, and/or allowing the regulated entities to file or transmit reports electronically. About half (90) of these actions appeared to reduce burden by eliminating paperwork and/or other requirements. For example, one EPA action proposed changing the frequency with which states must submit information related to state water quality standards under section 303(d) of the Clean Water Act from every 2 years to every 5 years. Lessening the frequency with which this information must be submitted should reduce the paperwork burden imposed on the states. One HUD action proposed to reorganize six separate grant programs into a single formula-based program, eliminating the need for both annual notices of funding availability and annual submission of applications. 
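The rounding note above can be verified from the category counts reported in this section (170 substantive reductions, 65 minor reductions, 34 increases, 114 no change, and 39 indeterminate). This is an illustrative check only:

```python
# Counts of the 422 CFR revision actions by assessed burden effect.
counts = {
    "substantive burden reduction": 170,
    "minor burden reduction": 65,
    "burden increase": 34,
    "no burden change": 114,
    "cannot tell": 39,
}

total = sum(counts.values())
print(total)  # 422

# Percentages rounded to whole numbers, as reported in the text.
pcts = {name: round(n / total * 100) for name, n in counts.items()}
print(pcts)  # {'substantive burden reduction': 40, 'minor burden reduction': 15, ...}

# Because each percentage is rounded down slightly, they sum to 99, not 100.
print(sum(pcts.values()))  # 99
```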
Also, by consolidating these programs into one program, HUD expected that the reporting and recordkeeping requirements would be dramatically reduced as grantees would only be required to maintain records on one program. Another HUD action would allow the use of classes of innovative products without having each manufacturer apply for a material release for a specific product. HUD said these changes would save suppliers and manufacturers thousands of dollars in application fees and materials preparation. About half (86) of the 170 CFR actions appeared to reduce regulated entities’ burden by giving them more flexibility in how they comply with or implement the regulations. For example, the Federal Aviation Administration (FAA) proposed revising “the Federal Aviation Regulations to provide for the granting of relief from the literal compliance with certain rules,” provided the applicant justified this relief and FAA concluded that the provisions not complied with had no adverse impact on safety or were compensated for by other factors. FAA also revised its regulations governing portable protective breathing equipment that is required for crew members’ use in combatting in-flight fires, eliminating the requirement that airlines have portable equipment in each compartment and giving the airlines flexibility in the number and placement of this equipment in the aircraft. In another example, EPA said it revised its regulations for municipal solid waste landfills to allow local governments greater flexibility to demonstrate compliance with financial assurance requirements. Other examples of agencies’ actions that appeared to result in substantive burden reduction for the regulated entities included the following: OSHA proposed revising the shipyard employment safety standards regarding safety systems and work practices for entering and exiting the workplace, eliminating many provisions that limit employer innovation. 
According to the notice of proposed rulemaking, OSHA expected that regulated entities’ costs would decrease if employers could use alternative safety systems and work practices that were not allowed by the existing requirements. HUD revised its rules concerning the Board of Contract Appeals to make the Board’s actions less costly and time-consuming to appellants, including allowing appellants to use expedited small claims procedures, raising the threshold for using accelerated procedures in claims from $10,000 to $50,000, and advising claimants of the availability of alternative dispute resolution techniques. This action made revisions required by the Federal Acquisition Streamlining Act of 1994, which amended the Contract Disputes Act of 1978. DOT proposed allowing airlines to electronically file tariff rules governing the availability of passenger fares and their conditions, which they said would save the airline industry over a million dollars in tariff submissions, printing, and distribution costs. EPA said it would propose modifying its pesticide experimental use permit regulations to permit expanded testing without a permit, reducing burden on pesticide producers. According to agency officials, some of these actions to reduce regulatory burden were statutorily mandated. For example, DOT said two of its actions giving states additional flexibility implement “a statutory requirement that directs the Secretary of Transportation to issue regulations. . . .” HUD said that it revised certain regulations in part “to incorporate the statutory amendments in the Housing and Community Development Act of 1992.” Our analysis indicated that about 15 percent of the 4 agencies’ CFR revision actions (65 of the 422 actions) would result in minor reductions in regulated entities’ burden.
These minor burden reductions included actions that made rules easier to understand (e.g., writing rules with less technical jargon) or easier to find (e.g., consolidating related sections of the CFR into one section) but did not change the regulations’ underlying requirements. CFR revision actions that we considered minor burden reduction actions included the following: HUD consolidated its fair housing and equal opportunity requirements for its programs. In addition to eliminating redundancy from title 24 of the CFR, HUD said that this action makes its nondiscrimination regulations more concise and simpler to understand. OSHA proposed consolidating its general industry standards (29 C.F.R. 1910) with its shipyard employment standards (29 C.F.R. 1915) into one comprehensive CFR part that would apply to all activities and areas in shipyards. The implementation of this action should make it easier for regulated entities to find and comply with all relevant OSHA standards for shipyards. In another action, OSHA proposed to “eliminate the complexity, duplicative nature, and obsolescence” of certain standards and “write them in plain language.” OSHA said that this change would improve comprehension and compliance with those standards. EPA proposed reorganizing and reformatting its national primary drinking water regulations to make them easier for public water system officials to understand and comply with and easier for state, local, and tribal governments to implement. DOT’s Office of the Secretary proposed reorganizing the regulations governing the conduct of all aviation economic proceedings, streamlining the regulations to remove redundancies, grouping procedures relating only to oral evidentiary hearings together and separating them from procedures pertaining to only nonhearing cases, and updating terminology in the regulations. 
Our review also identified 34 CFR revision actions (about 8 percent of the 422 actions) that appeared to increase regulatory burden by expanding the scope of existing regulations, establishing new programs and/or new requirements, creating more paperwork, or increasing costs for regulated entities. Actions that our analysis indicated would increase regulatory burden included the following: NHTSA proposed updating its lists of passenger motor vehicle insurers that are required to annually file reports on their motor vehicle theft loss experiences. As a result of this rule, NHTSA indicated that the number of insurers who must file these annual reports would increase, resulting in a cost increase to insurers of “less than $100,000.” In another action, DOT’s Research and Special Programs Administration proposed extending the application of its interstate hazardous materials regulations to intrastate transportation of those materials in commerce. OSHA proposed revising its general industry safety standard for training powered industrial truck operators and adding equivalent training requirements for the maritime industries. The new standards require periodic evaluation of each operator’s performance and periodic refresher or remedial training. OSHA estimated that the annualized cost would be $19.4 million. EPA proposed establishing “new source performance standards and emission guidelines for new and existing solid waste incineration units.” The new standards were to “specify numerical emission limitations” for 11 substances and were to include “requirements for emissions and parameter monitoring and provisions for operator training and certification.” HUD proposed to extend the applicability of its standards for approval of sites based on avoidance of minority/racial concentration for HUD-assisted rental housing to the Community Development Block Grant Program and to broaden the standards to include reviews of poverty concentration.
For many of the actions that appeared to increase regulatory burden, we found that the burden increase was the result of agencies’ implementation of legislative requirements. For example, EPA officials noted that although the previously cited new source requirements may increase regulatory burden, the new rules were required by section 129 of the Clean Air Act, as amended in 1990. One HUD action proposed establishing new regulations implementing the Secretary of HUD’s authority to regulate Government Sponsored Enterprises (GSE) (e.g., the Federal National Mortgage Association and the Federal Home Loan Mortgage Corporation) under the Federal Housing Enterprises Financial Safety and Soundness Act of 1992. According to the final rule in the December 1, 1995, Federal Register, this act “substantially overhauled the regulatory authorities and structure for GSE regulation and required the issuance of this rule.” Our analysis indicated that about 27 percent of the 4 agencies’ CFR revision actions (114 of 422) would have little or no effect on the amount of burden felt by regulated entities. More than half of these actions involved the elimination of CFR “deadwood,” such as regulations that the agencies said were obsolete or were duplicative of other text. Other such actions were minor technical corrections, such as changes to agency organization charts, telephone listings, or addresses. The following examples illustrate agencies’ actions that appeared to have little or no effect on regulated entities’ burden: DOT proposed amending the Transportation Acquisition Regulations to change organizational names (e.g., “OST—Office of the Secretary” was replaced by “TASC-Transportation Administrative Service Center”) and renumber or rename certain sections of the CFR.
HUD proposed removing the detail in its program regulations regarding the application and grant award processes, noting that a full description of the application and grant award process would instead be published in the Federal Register in a notice of funding availability. In several instances, the agencies’ actions appeared more likely to affect the promulgating agencies than the amount of burden felt by the regulated entities. For example, HUD proposed amending its rule on rules to make possible the “more timely implementation of new and changed policies of the Department in circumstances where notice and comment rulemaking is not required by law.” According to HUD, one of the purposes of this action was to provide greater flexibility to the Department in implementing statutory and other changes to its program authorities. In another such action, HUD issued revised ethics standards for its employees in accordance with the revised standards issued by the Office of Government Ethics. Several of the actions did not appear to affect the level of burden felt by regulated entities because the agency was only proposing to study an issue, and no specific proposal had been put forward at the time the action was described. For example, one HUD action was a joint proposal with the Federal Reserve Board “to initiate fact-finding to assist the agencies in revising disclosures to consumers under the Real Estate Settlement Procedures Act and the Truth in Lending Act.” According to HUD, the agencies were soliciting comments on what regulatory and legislative changes might be made to achieve consumer protection goals and minimally affect compliance burdens. OSHA said in one of its CFR revision abstracts that it intended to issue a proposal to prevent accidents during equipment repair and maintenance for the construction industry. However, an OSHA official told us that no specific proposal would be issued until 1999. 
EPA said in one action that it was initiating a technical review of the possible risks associated with management of silver-bearing wastes. However, no specific proposal was presented. A few of the actions that the agencies characterized as CFR revisions will have no effect on the burden felt by regulated entities because the agencies withdrew the proposal after receiving public comments. For example, in one action, FAA said it withdrew a proposal to clarify or change the number of flight attendants required when passengers are on an airplane “in view of the opposition and alternative proposals presented by a number of commenters.” As noted previously, we attempted to obtain additional information from the Federal Register and/or the agencies about each of the 422 CFR revision abstracts that seemed unclear. Although we were able to resolve many of the cases with this additional information, we were still unable to determine the effect that 39 of the 422 actions (about 9 percent) would have on regulated entities. In 23 of the 39 cases, the abstract and/or any supplementary information indicated that the CFR revision action had some elements that would increase burden and other elements that would reduce burden, making it difficult to determine the net effect. Those potentially offsetting cases included the following: One OSHA abstract stated that the agency was writing the final rule on standards for walking and working surfaces and personal fall protection systems “in plain language” and making it “flexible in the means of compliance permitted.” These elements appeared likely to reduce regulatory burden. However, in another part of the same abstract, OSHA indicated that criteria for personal fall protection systems would be added to the regulations because the existing standards did not contain those criteria—an action that could increase burden. 
In one DOT abstract, DOT proposed revising and updating the aviation insurance requirements “to recapture administrative expenses incurred,” which could represent a burden increase on the regulated entities. However, the abstract also said that the action “will clarify the language and make it conform with the current legislative language and intent,” which could reduce regulatory burden. For 17 of the 39 actions, we were unable to obtain enough information to make a determination. Examples of those actions include the following: One abstract stated that EPA would “make over 50 modifications, additions, and deletions to the existing PCB management program under the Toxic Substances Control Act. . . .” However, no details on those changes were available from either the Federal Register or EPA. One OSHA abstract indicated that a negotiated rulemaking process led to a draft revision of its regulation that contained “innovative provisions” that would help “minimize the major causes of steel erection injuries and fatalities.” However, OSHA could provide no additional information about the draft revisions. One DOT abstract proposed amending the “procedural regulations for the certification of changes to type certificated products.” The abstract stated that the “amendments are needed to accommodate the trend toward fewer products that are of completely new design and more products with repeated changes of previously approved designs.” Although this action appeared to propose reducing the regulatory requirements for manufacturing products of previously approved designs, it was unclear from this abstract exactly what the new procedures would be or their impact on the regulated entities, and DOT did not provide additional clarification.

Section 5 of Executive Order 12866 required agencies to submit to OIRA a program to review their existing regulations.
The first listed purpose for this review in the executive order is “to reduce the regulatory burden on the American people.” However, OIRA officials told us that the administration does not have any mechanism in place to measure changes in regulatory burden as a result of agencies’ CFR page elimination and CFR revision initiatives. They said that the agencies’ accomplishments in these areas “result from a wide variety of actions” and that there is “no single common measure that can be used to summarize the beneficial impact of this initiative given the breadth of activities it has encompassed.” OIRA officials went on to note that some of the actions in the initiative were significant rulemakings for which agencies conducted benefit-cost analyses; some were actions to make current regulations more user-friendly; and others were described as modest “housekeeping” actions designed to consolidate or eliminate certain provisions. Overall, they said that these efforts “have contributed to a more efficient and effective regulatory system.” They also noted that the elimination and revision actions are part of a larger set of initiatives designed to reform the nation’s regulatory system. Measuring regulatory burden and changes in that burden is extremely difficult. Some commenters (including the President) have used relatively simple indicators, such as the number of pages in the CFR or the total weight of the rules. Other observers have characterized federal regulatory burden in terms of federal spending on regulatory programs or the number of federal employees assigned to regulatory activities. Others have used the number of hours required to fill out federal paperwork. Still others have tried to measure the cost borne by entities responsible for complying with federal regulations. All of these measures have certain advantages and disadvantages, and all require careful interpretation. 
In a previous report, we concluded that it was extremely difficult to determine direct, incremental regulatory costs, even for an individual business. Indirect effects of regulations, such as their effects on productivity or competitiveness, and effects on all regulated entities are even more difficult to measure. Trying to gauge other types of regulatory burden (e.g., complexity, reasonableness) and then merge them with the other burden measures further complicates the task. Therefore, in some ways it is not surprising that the administration does not have a mechanism in place to measure burden reductions as a result of its CFR page elimination and revision initiative. However, in the absence of an agreed-upon and demonstrably valid measure of regulatory burden, disagreements are likely to continue regarding the effectiveness of the page elimination and revision effort as well as other initiatives designed to lessen the impact of federal regulations. We sent a draft of this report to the Director of OMB; the Secretaries of HUD, Labor, and DOT; and the Administrator of EPA. Officials from OMB said they had no comments on the report. Officials from the other four agencies said that they generally agreed with our characterization of their page elimination efforts. Officials from DOT, HUD, and EPA also said that they generally agreed with the information presented about their CFR page revision efforts. However, for a few of the actions, they provided additional information regarding the effect of the actions on regulatory burden. Using this information, we reevaluated our conclusions regarding these actions and in some cases changed our burden determinations. On September 12, 1997, we received written comments on the draft report from the Department of Labor’s Acting Assistant Secretary for Occupational Safety and Health. (See app. II for a copy of those comments.) 
He said that he had serious concerns about the methodology we used to determine whether OSHA’s page revisions had resulted in reductions in regulatory burden. First, he said that simply counting the number of actions in each burden category does not accurately reflect OSHA’s efforts because the agency combined many separate deregulatory actions into several large packages. Because each package affected many different regulations, he said it was not appropriate to treat them as a single action. The Acting Assistant Secretary also said that the methodology used in the report does not take into account the complex interrelationships between factors within a single action that will both increase and decrease regulatory burden. Finally, he said that by describing their efforts to remove CFR pages and make rules easier to understand as “minor burden reductions,” the report does not give OSHA adequate credit and understates both the degree of improvement and their importance in the overall regulatory program. The Acting Assistant Secretary’s observations regarding aggregated deregulatory actions are grounded in a different view from ours about how to conduct this study. We gave each of the agencies’ proposals equal weight because we believed it was the most objective method to quantify our results. Any other method would have required us to make subjective judgments concerning both the identification of discrete proposals and the weight each proposal should be given. Criteria for such judgments are not readily available. Also, it is important to recognize that OSHA determined how its CFR revision actions would be presented in the Unified Agenda. OSHA sometimes chose to consolidate multiple proposals into several large packages. In other cases OSHA appeared to present a single initiative in several different packages. We used whatever groupings OSHA and the other agencies used to present their revision efforts as our unit of analysis. 
As the Acting Assistant Secretary noted, some of the agencies’ actions with multiple proposals appeared to both increase and decrease the burden felt by regulated entities. In a few cases, the bulk of the proposals appeared to be either a burden increase or a burden reduction, so we could make a burden change determination for the actions as a whole. However, in 23 of the actions we could not reach an overall conclusion about the net effect of multiple and potentially offsetting proposals on regulatory burden, so we coded each of the actions as “cannot tell.” Therefore, we believe that the report does recognize the complex interrelationships between factors within a single action. Finally, the Acting Assistant Secretary is incorrect in saying that we described OSHA’s efforts to eliminate pages from the CFR as “minor burden reductions.” We used that description for agencies’ CFR revision efforts that clarified the language in the CFR to make it easier to read or understand, or that combined sections in the CFR to make the requirements easier to find but did not change the underlying requirements placed on regulated entities. Although such clarifications and consolidations are clearly desirable, we coded them as “minor burden reductions” because we wanted to differentiate them from other agency actions that appeared to change underlying regulatory requirements and result in substantive reductions in burden. We are sending copies of this report to the Ranking Minority Member of the Senate Governmental Affairs Committee; the Director of OMB; the Secretaries of HUD, Labor, and DOT; and the Administrator of EPA. We will also make copies available to others on request. Major contributors to this report are listed in appendix III. Please contact me on (202) 512-8676 if you or your staff have any questions concerning this report. 
The objectives of this review were to determine whether (1) agencies’ reported Code of Federal Regulations (CFR) page elimination totals take into account the pages added to the CFR during the same period, (2) agencies’ CFR revision efforts will reduce regulatory burden, and (3) the administration has any mechanism in place for measuring burden reductions as a result of its CFR page elimination and revision initiatives. As the requester specified, we limited the scope of our work on the first two objectives to four major regulatory agencies: the Departments of Housing and Urban Development (HUD) and Transportation (DOT), the Department of Labor’s Occupational Safety and Health Administration (OSHA), and the Environmental Protection Agency (EPA). To address the first objective, we interviewed agency officials responsible for the administration’s CFR page elimination initiative at HUD, DOT, OSHA, and EPA. All of these officials said that their agencies did not track CFR page additions during the initiative. They also said that it would be extremely difficult and time-consuming to count the number of pages that had been added in the years since their initiatives had begun. Working with the agencies and with the requester, we developed a methodology that each agency could use to estimate the number of pages that had been added to the CFR while pages were being eliminated. We obtained the agencies’ page elimination totals as of April 30, 1997, from the Office of Management and Budget’s (OMB) Office of Information and Regulatory Affairs (OIRA). Then we asked the four agencies to identify their major regulatory actions (those that had added five pages or more to the CFR) and to estimate the number of pages that each of those actions had added to their parts of the CFR between the start of their page elimination initiatives and April 30, 1997. 
In three of the four agencies, the number of pages added was also calculated by comparing the agencies’ CFR page totals near the beginning and end of their page elimination initiatives, calculating the net difference in pages, and using the number of pages deleted to solve for pages added. For example, if an agency had 5,000 pages in the CFR as of July 1, 1995, and 5,100 pages as of July 1, 1996, the net change during that 1-year period was an increase of 100 CFR pages. If the agency said that it had eliminated 200 pages from the CFR during that 1-year period, the number of pages added during the period was 300 pages. Similarly, using both the agencies’ estimates of their page additions and their elimination figures as reported to OIRA for the entire period of the initiative, we calculated the net increase or decrease in each agency’s CFR page totals. To address the second objective, we reviewed descriptions of the four agencies’ actions as part of the administration’s CFR revision initiative and determined whether the actions would reduce the burden imposed on regulated entities. Specifically, we reviewed the actions that were described in the October 1995, April 1996, October 1996, and April 1997 editions of the Unified Agenda of Federal Regulatory and Deregulatory Actions as part of the administration’s “reinventing government” initiative and that involved “revision of text in the CFR to reduce burden or duplication or to streamline requirements.” We obtained a computerized data file of each of these actions in the Unified Agenda from the Regulatory Information Service Center and combined the information into one database, eliminating duplicate actions and retaining the most recent abstract available for each action. We identified 422 such entries in the 4 agencies included in this review—107 for HUD, 183 for DOT, 19 for OSHA, and 113 for EPA. 
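The back-solving arithmetic described above can be sketched in a few lines of Python; the function and figures below mirror the report's worked example and are purely illustrative.

```python
# Back-solve for pages added: the net change in CFR pages over a period equals
# pages added minus pages eliminated, so
#   pages added = (end count - start count) + pages eliminated.

def pages_added(start_pages: int, end_pages: int, pages_eliminated: int) -> int:
    """Estimate pages added to the CFR during a page elimination initiative."""
    net_change = end_pages - start_pages
    return net_change + pages_eliminated

# The report's worked example: 5,000 pages grow to 5,100 while 200 are eliminated.
print(pages_added(5000, 5100, 200))  # 300
```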
Thirty-one of these entries had no abstract describing the initiative in the Unified Agenda, so we obtained abstracts or proposed or final rule preambles directly from the agencies for each of these actions. We defined “regulated entities” as the organizations that must comply with the regulations’ provisions, including individuals, businesses, state or local governments, or federal agencies (other than the agency that enforced or promulgated the regulation). We defined “regulatory burden” as the impact of a rule on regulated entities, including the direct and indirect cost of compliance; paperwork requirements; negative effects on competitiveness or productivity; penalties for noncompliance; and confusion as a result of unreasonable, inconsistent, hard-to-find, or hard-to-understand regulations. We initially reviewed the abstracts or rule preambles for each action to determine what effect the action would have on the burden felt by regulated entities. However, many of the abstracts did not contain enough information to allow us to assess the effect of the action on regulated entities’ burden. For each such action, we obtained additional information from the agencies and/or related proposed or final rules published in the Federal Register. We matched the Unified Agenda entries with the proposed or final rules by regulation identification number to ensure that only relevant information was included. 
After reading all of the available information, we coded each of the actions into one of the following five categories: (1) substantive burden reduction—actions that appeared to decrease the burden on regulated entities, such as eliminating paperwork requirements, allowing flexibility in how entities can comply with or implement the rule, lowering compliance costs, or exempting certain organizations from the regulations; (2) minor burden reduction—actions that seemed to make regulatory requirements easier to read or understand or to make them easier to find (e.g., combining similar or related sections of the CFR into one section); (3) burden increase—actions that appeared to increase the burden on regulated entities, such as adding reporting requirements, requiring additional training, requiring certain testing procedures, or expanding the scope of a regulation to new entities; (4) no burden change—actions that did not seem to change the burden on the regulated entity or that primarily affected the promulgating agency, such as eliminating obsolete or duplicative regulations, establishing a committee to study an issue (with no specific proposal identified), updating agency organizational charts and/or telephone numbers, and establishing ethics regulations for employees of the promulgating agency; and (5) cannot tell—actions that had multiple parts which potentially could offset each other or were unclear as to their effect on the regulated entities. To help ensure validity and consistency in our assessments of the potential impact of the 422 actions on regulatory entities, we reviewed each of the actions at least 3 times. First, the abstracts were simultaneously and independently reviewed and coded by two of our staff members who were familiar with crosscutting regulatory issues. 
The staff members then discussed their independent codes for each of the actions, obtained additional information about the actions if necessary, and ultimately agreed on a single code for each action. These codes and their associated abstracts were then reviewed by members of our staff with expertise in the relevant subject areas: transportation, housing, environmental programs, and occupational safety. Their input was considered in reaching a preliminary conclusion about each action. We gave agency officials an opportunity to review and comment on our assessment of the CFR revision actions. In many cases, the agencies provided additional information in support of a different assessment than the one we had made. When we took this additional information into account, we changed our assessments of several actions. However, the majority of our assessments were not affected by the agencies’ review. To determine whether the administration had any mechanisms in place to measure burden reductions as a result of its regulatory reform initiative, we interviewed OIRA officials. We did not verify the agencies’ CFR page elimination totals or their page addition estimates. However, last year we evaluated EPA’s and DOT’s page elimination claims and concluded that they were generally valid. The agencies’ estimates of the pages added to the CFR do not include all added pages because, in response to agency concerns about the effort it would take to count all pages, we agreed that the agencies could exclude any action that added less than five pages. Also, although we validated our judgments about the possible effect of the proposed changes by using multiple judges and consulting with knowledgeable members of our staff and agency officials, some of our assessments were based on relatively little information. Finally, we did not render a judgment regarding the wisdom of any of the CFR revision actions, only whether they would affect the burden felt by regulated entities. 
We conducted our work at OMB, HUD, DOT, OSHA, and EPA headquarters in Washington, D.C., between February 1997 and September 1997 in accordance with generally accepted government auditing standards. We made available a draft of this report for comment to the Director of OMB; the Secretaries of HUD, Labor, and DOT; and the Administrator of EPA. Designees of these agency heads provided comments on the report as a whole and, in some cases, provided additional information. Their comments were incorporated into the report accordingly.

Curtis Copeland, Assistant Director, Federal Management and Workforce Issues
Ellen Wineholt, Evaluator-in-Charge
Thomas Beall, Technical Analyst
Kevin Dooley, Technical Analyst
Kiki Theodoropoulos, Communications Analyst
Pursuant to a congressional request, GAO updated and expanded its previous review of the Code of Federal Regulations (CFR) page elimination and revision initiative, focusing on whether: (1) agencies' reported page elimination totals took into account any pages added to the CFR during the same period; (2) agencies' CFR revision efforts would reduce regulatory burden; and (3) the administration has any mechanism in place for measuring burden reductions as a result of its CFR page elimination and revision initiatives. GAO limited the scope of its work on the first two objectives to four agencies: the Departments of Housing and Urban Development (HUD) and Transportation (DOT), the Department of Labor's Occupational Safety and Health Administration (OSHA), and the Environmental Protection Agency (EPA). GAO noted that: (1) officials in each of the four agencies GAO reviewed said that the page elimination totals that their agencies reported to the Office of Information and Regulatory Affairs (OIRA) did not take into account the pages that their agencies had added to the CFR while the eliminations were taking place; (2) EPA and DOT estimated that they added more pages to the CFR than they removed during their page elimination initiatives; (3) HUD and OSHA, on the other hand, estimated that they deleted more pages than they added; (4) overall, when estimated page additions were counted, the 4 agencies' CFR sections decreased in size by about 926 pages--about 3 percent of the CFR pages at the start of the initiative, or about 17 percent of the amount reported to OIRA; (5) the agencies pointed out that pages are often added to the CFR because of statutory requirements or to clarify requirements placed on regulated entities and that pages are sometimes not eliminated at the request of those entities; (6) GAO's review indicated that about 40 percent of the 422 CFR revision actions in the 4 agencies would substantively reduce the burden felt by regulated entities as a result of 
such actions as eliminating paperwork requirements and providing compliance flexibility; (7) another 15 percent of the actions appeared to be minor burden reductions in that they seemed to make the regulations easier to find or to understand but would not change the underlying regulatory requirements or scope of applicability; (8) GAO concluded that about 27 percent of the CFR revision actions would have no effect on the burden felt by regulated entities and that about 8 percent could increase regulatory burden; (9) GAO could not determine what effect about 9 percent of the CFR revision actions would have on the regulated entities, either because the actions had multiple parts that potentially could offset each other or because the information available was unclear; (10) OIRA officials said that the administration has no mechanisms in place for measuring burden reductions as a result of the CFR page elimination and revision effort; and (11) however, they believe that the initiative is having a beneficial effect and also pointed out that the CFR page elimination and revision efforts are only part of a larger set of actions the administration is taking to reform the nation's regulatory system.
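The category shares quoted in the digest are simple proportions of the 422 revision actions. Only the "cannot tell" count (39 actions) is stated explicitly in the report, so the sketch below checks just that figure.

```python
# Convert a burden-category count to a rounded percentage of the 422 actions.
TOTAL_ACTIONS = 422

def share(count: int, total: int = TOTAL_ACTIONS) -> int:
    """Return the count as a whole-number percentage of the total."""
    return round(100 * count / total)

print(share(39))  # 9  (the "about 9 percent" whose effect could not be determined)
```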
Social Security provides retirement, disability, and survivor benefits to insured workers and their dependents. Insured workers are eligible for reduced benefits at age 62, and full retirement benefits between age 65 and 67, depending on the worker’s year of birth. Social Security retirement benefits are based on the worker’s age and career earnings, are fully indexed for price inflation after retirement, and replace a relatively higher proportion of wages for career low-wage earners. Social Security’s primary source of revenue is the Old Age, Survivors, and Disability Insurance (OASDI) portion of the payroll tax paid by employers and employees. This Social Security tax is 6.2 percent of earnings up to an established maximum, paid by both employers and employees. One of Social Security’s most fundamental principles is that benefits reflect the earnings on which workers have paid Social Security taxes. Thus, Social Security provides benefits that workers have earned, in part, due to their contributions and those of their employers. At the same time, Social Security helps ensure that its beneficiaries have adequate incomes and do not have to depend on welfare. Toward this end, Social Security’s benefit provisions redistribute income in a variety of ways—from those with higher lifetime earnings to those with lower ones, from those without dependents to those with dependents, from single earners and two-earner couples to one-earner couples, and from those who live shorter lives to those who live longer. These effects result from the program’s focus on helping ensure adequate incomes. Such effects depend, to a great extent, on the universal and compulsory nature of the program. According to the Social Security trustees’ 2007 intermediate (or best estimate) assumptions, Social Security’s cash flow is expected to turn negative in 2017. In addition, all of the accumulated Treasury obligations held by the trust funds are expected to be exhausted by 2041. 
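The OASDI payroll tax described above can be sketched as follows. The 6.2 percent employee rate comes from the text; the taxable maximum changes yearly, so it appears here as a hypothetical parameter.

```python
# OASDI (Social Security) payroll tax: 6.2 percent of earnings up to a cap,
# paid by the employee and matched by the employer.
OASDI_RATE = 0.062

def oasdi_tax(earnings: float, wage_base: float) -> float:
    """Tax paid by the employee; the employer pays an equal amount."""
    return OASDI_RATE * min(earnings, wage_base)

# Hypothetical $100,000 wage base:
print(round(oasdi_tax(50_000, 100_000), 2))   # 3100.0  (earnings below the cap)
print(round(oasdi_tax(150_000, 100_000), 2))  # 6200.0  (only earnings up to the cap are taxed)
```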
Social Security’s long-term financing shortfall stems primarily from the fact that people are living longer and having fewer children. As a result, the number of workers paying into the system for each beneficiary has been falling and is projected to decline from 3.3 today to 2.2 by 2030. Reductions in promised benefits and/or increases in program revenues will be needed to restore the long-term solvency and sustainability of the program. About one-fourth of public employees do not pay Social Security taxes on the earnings from their government jobs. Historically, Social Security did not require coverage of government employment because some government employers had their own retirement systems. In addition, there was concern over the question of the federal government’s right to impose a tax on state governments. However, the remaining three-fourths of public employees are now covered by Social Security, as well as virtually all private sector workers. The 1935 Social Security Act mandated coverage for most workers in commerce and industry; at that time, such workers comprised about 60 percent of the workforce. Subsequently, the Congress extended Social Security coverage to most of the excluded groups, including many state and local employees, military personnel, members of Congress, and federal civilian employees hired after January 1, 1984. In 1950, Congress enacted legislation allowing voluntary coverage to state and local government employees not covered by public pension plans, and in 1955, extended voluntary coverage to those already covered by plans as well. Initially, public employers could opt in and out of the Social Security program under these provisions. Since 1983, however, public employers have not been permitted to withdraw from the program once they have opted in and their employees are covered. Also, in 1990, Congress made coverage mandatory for most state and local employees not covered by public pension plans. 
Nevertheless, the most recent data from SSA indicates that in 2005, about 6.8 million state and local government employees were still not covered by Social Security. Coverage varies widely across states. In some states, such as New York and Vermont, virtually all government employees are covered; in other states, such as Massachusetts and Ohio, less than 5 percent of government employees are covered. Seven states—California, Colorado, Illinois, Louisiana, Massachusetts, Ohio, and Texas—account for nearly 70 percent of the noncovered state and local government payroll. In addition, SSA estimates that about half a million federal government employees are not covered. These are civilian employees hired before January 1, 1984, who continue to be covered under the Civil Service Retirement System. Most full-time public employees participate in defined benefit pension plans. Minimum retirement ages for full benefits vary, but many state and local employees can retire with full benefits at age 55 with 30 years of service. Retirement benefits also vary, but they are generally based on a specified benefit rate for each year of service and the member’s final average salary over a specified time period, usually 3 years. For example, plans with a 2 percent rate replace 60 percent of a member’s final average salary after 30 years of service. State and local government workers also generally have a survivor annuity option and disability benefits, and many receive cost-of-living increases after retirement. In addition, in recent years, the number of defined contribution plans—such as 401(k) plans and the Thrift Savings Plan for federal employees—has been growing. There has been little movement toward adopting defined contribution plans as the primary pension plans for state and local workers, but such plans have become fairly universally available as supplemental voluntary tax-deferred savings plans. 
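The defined benefit formula described above, a fixed rate per year of service applied to final average salary, can be sketched as follows; the salary figure is hypothetical.

```python
# Typical state and local defined benefit formula:
#   annual benefit = benefit rate x years of service x final average salary.

def annual_pension(benefit_rate: float, years_of_service: int,
                   final_average_salary: float) -> float:
    return benefit_rate * years_of_service * final_average_salary

# The report's example: a 2 percent rate over 30 years replaces 60 percent
# of final average salary (here a hypothetical $50,000).
print(round(annual_pension(0.02, 30, 50_000), 2))  # 30000.0
```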
Even though noncovered employees may have many years of earnings on which they do not pay Social Security taxes, they can still be eligible for Social Security benefits based on their spouses’ or their own earnings in covered employment. According to SSA, nearly all noncovered state and local employees become entitled to Social Security as spouses, dependents, or workers. However, their noncovered status for the bulk of their earnings complicates the program’s ability to target benefits in the ways it is intended to do. To address the fairness issues that arise with noncovered public employees, the Congress has enacted two provisions: (1) the Government Pension Offset (GPO) regarding spouse and survivor benefits, and (2) the Windfall Elimination Provision (WEP) regarding retired worker benefits. Both provisions apply only to those beneficiaries who receive pensions from noncovered employment. However, the provisions have been difficult to administer because they depend on having complete and accurate information on noncovered earnings and pensions—information that has proven difficult to get. Also, the provisions are a source of confusion and frustration for public employees and retirees. Under the GPO provision, enacted in 1977, SSA must reduce Social Security benefits for those receiving noncovered government pensions when their entitlement to Social Security is based on another person’s (usually a spouse’s) Social Security coverage. Their Social Security benefits are to be reduced by two-thirds of the amount of their government pension. Spouse and survivor benefits were intended to provide some Social Security protection to spouses with limited working careers. The GPO provision reduces spouse and survivor benefits to persons who do not meet this limited working career criterion because they worked long enough in noncovered employment to earn their own pension. 
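A minimal sketch of the GPO reduction described above, using hypothetical monthly amounts:

```python
# GPO: the Social Security spouse or survivor benefit is reduced by two-thirds
# of the beneficiary's noncovered government pension (never below zero).

def gpo_adjusted_benefit(spousal_benefit: float, government_pension: float) -> float:
    offset = government_pension * 2.0 / 3.0
    return max(0.0, spousal_benefit - offset)

# Hypothetical: a $900 spousal benefit and a $600 noncovered pension.
print(gpo_adjusted_benefit(900.0, 600.0))  # 500.0 (reduced by $400)
```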
Under the WEP, enacted in 1983, SSA must use a modified formula to reduce the Social Security benefits people receive when they have had a lengthy career in noncovered employment. The Congress was concerned that the design of the Social Security benefit formula provided unintended windfall benefits to workers who had spent most of their careers in noncovered employment, as the formula replaces a relatively higher proportion of wages for low earners than for high earners, and those with lengthy careers in noncovered employment appear on SSA’s records as low earners. To administer the GPO and WEP, SSA needs to know whether beneficiaries receive pensions from noncovered employment. However, SSA cannot apply these provisions effectively and fairly because it lacks this information. In a report we issued in 1998, we recommended that SSA perform additional computer matches with the Office of Personnel Management to get noncovered pension data for federal retirees. In response to our recommendation, SSA performed the first such match in 1999 and planned to continue to conduct the matches on a recurring basis. We estimated that correcting the errors identified through such matches will generate hundreds of millions of dollars in savings. However, SSA still lacks the information it needs for state and local governments, and therefore, it cannot apply the GPO and the WEP for state and local government employees to the same extent it can for federal employees. The resulting disparity in the application of these two provisions is yet another source of unfairness in the calculation of Social Security benefits for public employees. 
In our testimony before the Subcommittee on Social Security, House Committee on Ways and Means, in May 2003 and again in June 2005, we recommended that the Congress consider giving the Internal Revenue Service (IRS) the authority to collect the information that SSA needs on government pension income, a task that could perhaps be accomplished through a simple modification to a single form. Earlier versions of the Social Security Protection Act of 2004 contained such a provision, but this provision was not included when the final version of the bill was approved and signed into law. As long as the GPO and WEP remain in effect, we continue to believe that the IRS should be given the authority to collect the information that SSA needs on government pension income to administer these provisions accurately and fairly. The GPO and the WEP have been a continuing source of confusion and frustration for the more than 7.3 million government workers affected. Critics of the measures contend that the provisions are basically inaccurate and often unfair. For example, critics of the GPO contend that the two-thirds reduction is imprecise and could be based on a more rigorous formula. According to a recent analysis conducted by the Congressional Research Service, the GPO formula slightly overestimates the reduction that some individuals (particularly higher earners) would otherwise receive if they worked in Social Security-covered employment, and greatly underestimates the reduction that others (particularly lower earners) would receive. In the case of the WEP, opponents argue that the formula adjustment is an arbitrary and inaccurate way to estimate the value of the windfall and causes a relatively larger benefit reduction for lower-paid workers. In recent years, various proposals to change Social Security have been offered that would affect public employees. Some proposals specifically address the GPO and the WEP and would either revise or eliminate them. 
Other proposals would make Social Security coverage mandatory for all state and local government employees. A variety of proposals have been offered to either revise or eliminate the GPO or the WEP. While we have not studied these proposals in detail, I would like to offer a few observations to keep in mind as you consider them. First, repealing these provisions would be costly in an environment where the Social Security trust funds already face long-term solvency issues. According to current SSA estimates, eliminating the GPO entirely would cost $41.7 billion over 10 years and increase the long-range deficit by about 3 percent. Similarly, SSA estimates that eliminating the WEP would cost $40.1 billion, also increasing Social Security’s long-range deficit by 3 percent. Second, in thinking about the fairness of the provisions and whether or not to repeal them, it is important to consider both the affected public employees and all other workers and beneficiaries who pay Social Security taxes. For example, SSA has described the GPO as a way to treat spouses with noncovered pensions in a manner similar to how it treats dually entitled spouses, who qualify for Social Security benefits on both their own and their spouses’ work records. In such cases, spouses may not receive both the benefits earned as a worker and the full spousal benefit; rather, they receive the higher amount of the two. If the GPO were eliminated or reduced for spouses who had paid little or no Social Security taxes on their lifetime earnings, it might be reasonable to ask whether the same should be done for dually entitled spouses who have paid Social Security on all their earnings. Otherwise, such couples would be worse off than couples who were no longer subject to the GPO. And far more spouses are subject to the dual entitlement offset than to the GPO; as a result, the costs of eliminating the dual entitlement offset would be commensurately greater. 
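The dual-entitlement treatment that SSA compares the GPO to can be sketched in one line: a dually entitled spouse receives the higher of her own worker benefit or the spousal benefit, not the sum of the two. The amounts below are hypothetical.

```python
# Sketch of the dual-entitlement rule (simplified, hypothetical amounts):
# a dually entitled spouse receives the higher of the two benefits,
# never both added together.

def dually_entitled_payment(own_benefit, spousal_benefit):
    return max(own_benefit, spousal_benefit)

payment = dually_entitled_payment(700, 900)  # the higher amount, not 1600
```

In effect, the spousal benefit is offset dollar for dollar by the worker's own covered benefit, which is the parallel SSA draws when describing the GPO as analogous treatment for spouses whose own pensions came from noncovered work.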
Making coverage mandatory for all state and local government employees has been proposed to help address the program’s financing problems. According to Social Security actuaries’ 2005 estimate, requiring all newly hired state and local government employees to begin paying into the system would reduce the 75-year actuarial deficit by about 11 percent. Expanding coverage to currently noncovered workers increases revenues relatively quickly and improves solvency for some time, since most of the newly covered workers would not receive benefits for many years. In the long run, benefit payments would increase as the newly covered workers started to collect benefits; however, overall, this change would represent a small net gain for solvency. In addition to considering solvency effects, the inclusion of mandatory coverage in a comprehensive reform package would need to be grounded in other considerations. In recommending that mandatory coverage be included in reform proposals, the 1994-1996 Social Security Advisory Council stated that mandatory coverage is basically “an issue of fairness.” Its report noted that “an effective Social Security program helps to reduce public costs for relief and assistance, which, in turn, means lower general taxes. There is an element of unfairness in a situation where practically all contribute to Social Security, while a few benefit both directly and indirectly but are excused from contributing to the program.” Mandatory coverage could also improve benefits for the affected workers; at the same time, however, it could increase pension costs for state and local governments. The effects on public employees and employers would depend on how states and localities changed their noncovered pension plans in response to mandatory coverage. 
For example, by gaining coverage, workers would benefit from Social Security’s automatic inflation protection, full benefit portability, and dependent benefits, which are not available in many public pension plans. Also, the GPO and the WEP would no longer apply and so could be phased out over time. With mandatory coverage, the costs for state and local governments would likely increase, adding to the fiscal challenges that already lie ahead for many. If states and localities provided pension benefits similar to those provided to employees already covered by Social Security, studies indicate that their retirement costs could increase by as much as 11 percent of payroll. Alternatively, states and localities that wanted to maintain level spending for retirement under mandatory coverage would likely need to reduce some pension benefits. Thus, while workers’ benefits may be enhanced in some ways by gaining Social Security, their total contribution rate may increase, and the benefits they receive under their previously noncovered pension plans may be reduced. Additionally, states and localities could require several years to design, legislate, and implement changes to current pension plans, and mandating Social Security coverage for state and local employees could elicit constitutional challenges. Also, mandatory coverage would not immediately address the issues and concerns regarding the GPO and the WEP, as these provisions would continue to apply to existing employees and beneficiaries for many years to come before eventually becoming obsolete. Finally, state and local governments would have to administer two different systems, one for existing noncovered employees and another for newly covered employees, until the provisions no longer applied to anyone or were repealed. In conclusion, there are no easy answers to the difficulties of equalizing Social Security’s treatment of covered and noncovered workers. 
Any reductions in the GPO or the WEP would ultimately come at the expense of other Social Security beneficiaries and taxpayers. Mandating universal coverage would promise eventual elimination of the GPO and the WEP, but at potentially significant cost to affected state and local governments, and even so, the GPO and the WEP would continue to apply for many years to come unless they were repealed. As long as the GPO and the WEP remain in effect, it will be important to administer the provisions as effectively and equitably as possible. SSA has found it difficult to administer these provisions because they depend on complete and accurate reporting of government pension income, which is not currently available. The resulting disparity in the application of these two provisions is a continuing source of unfairness for Social Security beneficiaries, both covered and noncovered. GAO has previously recommended that the Congress consider giving IRS the authority to collect the information that SSA needs on government pension income to administer the GPO and WEP provisions accurately and fairly. GAO continues to believe that this important issue warrants further consideration by the Congress. Mr. Chairman, this concludes my statement. I would be happy to respond to any questions you or other members of the subcommittee may have. For further information regarding this testimony, please contact Barbara D. Bovbjerg, Director, Education, Workforce, and Income Security Issues, at (202) 512-7215 or bovbjergb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Michael Collins and Margie Shields. State and Local Government Retiree Benefits: Current Status of Benefit Structures, Protections, and Fiscal Outlook for Funding Future Costs. GAO-07-1156. Washington, D.C.: September 24, 2007. 
State and Local Governments: Persistent Fiscal Challenges Will Likely Emerge within the Next Decade. GAO-07-1080SP. Washington, D.C.: July 18, 2007.
Social Security: Coverage of Public Employees and Implications for Reform. GAO-05-786T. Washington, D.C.: June 9, 2005.
Social Security Reform: Answers to Key Questions. GAO-05-193SP. Washington, D.C.: May 2005.
Social Security: Issues Relating to Noncoverage of Public Employees. GAO-03-710T. Washington, D.C.: May 1, 2003.
Social Security: Congress Should Consider Revising the Government Pension Offset “Loophole.” GAO-03-498T. Washington, D.C.: February 27, 2003.
Social Security Administration: Revision to the Government Pension Offset Exemption Should Be Considered. GAO-02-950. Washington, D.C.: August 15, 2002.
Social Security Reform: Experience of the Alternate Plans in Texas. GAO/HEHS-99-31. Washington, D.C.: February 26, 1999.
Social Security: Implications of Extending Mandatory Coverage to State and Local Employees. GAO/HEHS-98-196. Washington, D.C.: August 18, 1998.
Social Security: Better Payment Controls for Benefit Reduction Provisions Could Save Millions. GAO/HEHS-98-76. Washington, D.C.: April 30, 1998.
Federal Workforce: Effects of Public Pension Offset on Social Security Benefits of Federal Retirees. GAO/GGD-88-73. Washington, D.C.: April 27, 1988.
This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Social Security covers about 96 percent of all U.S. workers; the vast majority of the remaining 4 percent are public employees. Although these noncovered workers do not pay Social Security taxes on their government earnings, they may still be eligible for Social Security benefits through their spouses' or their own earnings from other covered employment. Social Security has provisions--the Government Pension Offset (GPO) and the Windfall Elimination Provision (WEP)--that attempt to take noncovered employment into account when calculating the Social Security benefits for public employees. However, these provisions have been difficult to administer and critics contend that the provisions themselves are often unfair. The Committee asked GAO to discuss the issues regarding the coverage of public employees under Social Security, the provisions to take noncovered employment into account, and the proposals to modify those provisions. There are no easy answers to the difficulties of equalizing Social Security's treatment of covered workers and noncovered public employees. About one-fourth of public employees--primarily state and local government workers--are not covered by Social Security and do not pay Social Security taxes on their government earnings. Nevertheless, these workers may still be eligible for Social Security benefits through their spouses' or their own earnings from other covered employment. To address concerns with how noncovered workers are treated compared with covered workers, Social Security has provisions in place to take noncovered employment into account and reduce Social Security benefits for public employees. To be administered fairly and accurately, both these provisions require complete and accurate reporting of government pension income, which is not currently available. The resulting disparity in the application of the provisions is a continuing source of confusion and frustration for affected workers. 
Thus, various changes that would affect the GPO and WEP provisions have been proposed, such as the following:

Eliminate the GPO and WEP provisions. This would simplify administration and avoid concerns about unfair treatment among public employees. However, any reductions in the GPO or the WEP would widen Social Security's financial gap and would raise concerns about unfair treatment of public employees compared with other workers.

Extend mandatory coverage. If all newly hired state and local government employees who are not currently covered were to become covered, the need for the GPO and WEP could be phased out over time. In 2005, Social Security actuaries estimated that mandating coverage for these employees would reduce the 75-year actuarial deficit by about 11 percent. While mandatory coverage could enhance retirement benefits for the affected workers, it could also result in significant costs to the affected state and local governments.

As long as the GPO and the WEP remain in effect, it will be important to administer the provisions effectively and equitably based on accurate and complete information on both covered and noncovered employment.
As I observed when I first testified on the DOD proposal in April, many of the basic principles underlying DOD’s civilian human capital proposals have merit and deserve the serious consideration they are receiving. Secretary Rumsfeld and the rest of DOD’s leadership are clearly committed to transforming how DOD does business. Based on our experience, while DOD’s leadership has the intent and the ability to transform DOD, the needed institutional infrastructure is not in place within a vast majority of DOD organizations. Our work examining DOD’s strategic human capital planning efforts, as well as the use of human capital flexibilities and related efforts across the federal government, underscores the critical steps that DOD needs to take to properly develop and effectively implement any new personnel authorities. In the absence of the right institutional infrastructure, granting additional human capital authorities will provide little advantage and could actually do damage if the authorities are not implemented properly. The following provides some observations on key provisions of the proposed National Security Personnel System Act in relation to the House version of the National Defense Authorization Act for Fiscal Year 2004. First, I offer some comments on the overall design for a new personnel system at DOD. Second, I provide comments on selected aspects of the proposed system. The House version of DOD’s authorization bill would allow the Secretary of Defense to develop regulations with the Director of OPM to establish a human resources management system for DOD. The Secretary of Defense could waive the requirement for the joint issuance of regulations if, in the Secretary’s judgment and subject to the decision of the President, it is “essential to the national security”—which was not defined in the proposed bill. 
As an improvement, the proposed National Security Personnel System Act also requires that the new personnel system be jointly developed by the Secretary of Defense and the Director of OPM, but does not allow the joint issuance requirement to be waived. This approach is consistent with the one the Congress took in creating the Department of Homeland Security. The proposed National Security Personnel System Act requires the Secretary of Defense to phase in the implementation of NSPS beginning in fiscal year 2004. Specifically, the new personnel authorities could cover a maximum of 120,000 of DOD’s civilian employees in fiscal year 2004 and up to 240,000 employees in fiscal year 2005. Coverage could extend to more than 240,000 employees in a fiscal year after fiscal year 2005 only if the Secretary of Defense determines that the Department has in place a performance management system and pay formula that meet criteria specified in the bill, consistent with the bill’s requirement that the Secretary and the Director of OPM jointly develop regulations for DOD’s new human resources management system. We strongly support a phased approach to implementing major management reforms, whether with the human capital reforms at DOD or with change management initiatives at other agencies or across the government. We suggest that OPM, in fulfilling its role under this section of the bill, certify that DOD has a modern, effective, credible, and, as appropriate, validated performance management system with adequate safeguards, including reasonable transparency and appropriate accountability mechanisms, in place to support performance-based pay and related personnel decisions. The proposed National Security Personnel System Act states that the Secretary of Defense may establish an employee appeals process that is fair and ensures due process protections for employees. 
The Secretary of Defense is required to consult with the Merit Systems Protection Board (MSPB) before issuing any regulations in this area. The DOD appeals process must be based on legal standards consistent with merit system principles and may override legal standards and precedents previously applied by MSPB and the courts in cases related to employee conduct and performance that fails to meet expectations. The bill would allow appeal of any decision adversely affecting an employee and raising a substantial question of law or fact under this process to the Merit Systems Protection Board under specific standards of review, and the Board’s decision could be subject to judicial review, as is the case with other MSPB decisions. This proposal affords the employee review by an independent body and the opportunity for judicial review along the lines that we have been suggesting. The proposed National Security Personnel System Act does not include an evaluation or reporting requirement from DOD on the implementation of its human capital reforms, although DOD has stated that it will continue its evaluation of the science and technology reinvention laboratory demonstration projects when they are integrated under a single human capital framework. We believe an evaluation and reporting requirement would facilitate congressional oversight of NSPS, allow for any midcourse corrections in its implementation, and serve as a tool for documenting best practices and sharing lessons learned with employees, stakeholders, other federal agencies, and the public. Specifically, the Congress should consider requiring that DOD fully track and periodically report on the implementation and results of its new human capital program. Such reporting could be on a specified timetable with sunset provisions. These required evaluations could be broadly modeled on the evaluation requirements of OPM’s personnel demonstration program. 
Under the demonstration project authority, agencies must evaluate and periodically report on results, implementation of the demonstration project, cost and benefits, impacts on veterans and other Equal Employment Opportunity groups, adherence to merit principles, and the extent to which the lessons from the project can be applied elsewhere, including governmentwide. The reports could be done in consultation with or subject to review of OPM. There is widespread understanding that the basic approach to federal pay is outdated and that we need to move to a more market- and performance- based approach. Doing so will be essential if we expect to maximize the performance and assure the accountability of the federal government for the benefit of the American people. DOD has said that broad banded performance management and pay for performance systems will be the cornerstone of its new system. Reasonable people can and will debate and disagree about the merits of individual reform proposals. However, all should be able to agree that a modern, reliable, effective, and validated performance management system with adequate safeguards, including reasonable transparency and appropriate accountability mechanisms, must serve as the fundamental underpinning of any successful results-oriented pay reform. We are pleased that both the House version of DOD’s fiscal year 2004 authorization bill and the proposed National Security Personnel System Act contain statutory safeguards and standards along the lines that we have been suggesting to help ensure that DOD’s pay for performance efforts are fair to employees and improve both individual and organizational performance. 
The statutory standards described in the National Security Personnel System Act proposal are intended to help ensure a fair, credible, and equitable system that results in meaningful distinctions in individual employee performance; employee involvement in the design and implementation of the system; and effective transparency and accountability measures, including appropriate independent reasonableness reviews, internal grievance procedures, internal assessments, and employee surveys. In our reviews of agencies’ performance management systems—as in our own experience with designing and implementing performance-based pay reform for ourselves at GAO—we have found that these safeguards are key to maximizing the chances of success and minimizing the risk of failure and abuse. The proposed National Security Personnel System Act also takes the essential first step in requiring DOD to link the performance management system to the agency’s strategic plan. Building on this, we suggest that DOD should also be required to link its performance management system to program and performance goals and desired outcomes. Linking the performance management system to related goals and desired outcomes helps the organization ensure that its efforts are properly aligned and reinforces the line of sight between individual performance and organizational success so that an individual can see how her/his daily responsibilities contribute to results and outcomes. The proposed National Security Personnel System Act includes a detailed list of elements that regulations for DOD’s broad band pay program must cover. These elements appear to be taken from DOD’s experience with its civilian acquisition workforce personnel demonstration project as well as the plan, as described in an April 2, 2003 Federal Register notice to integrate all of DOD’s current science and technology reinvention laboratory demonstration projects under a single human capital framework. 
Many of the required elements in the proposed National Security Personnel System Act are entirely appropriate, such as a communication and feedback requirement, a review process, and a process for addressing performance that fails to meet expectations. However, other required elements, such as “performance scores”, appear to imply a particular approach to performance management that, going forward, may or may not be appropriate for DOD, and therefore may have the unintended consequence of reducing DOD’s flexibility to make adjustments. Congress has an important and continuing role to play in the design and implementation of the federal government’s personnel policies and procedures. Congress should consider how best to balance its responsibilities with agencies’ needs for the flexibility to respond to changing circumstances. Finally, under the proposed act, for fiscal years 2004 through 2008, the overall amount allocated for compensation for civilian employees of an organizational or functional unit of DOD that is included in NSPS shall not be less than the amount of civilian pay that would have been allocated to such compensation under the General Schedule. After fiscal year 2008, DOD’s regulations are to provide a formula for calculating an overall amount, which is to ensure that employees in NSPS are not disadvantaged in terms of the overall amount of pay available as a result of their conversion into NSPS while providing DOD with flexibility to accommodate changes in the function of the organization, the mix of employees performing those functions, and other changes that might affect pay levels. Congress has had a longstanding and legitimate interest in federal employee pay and compensation policies and, as a result, there are provisions consistent with that interest in the National Security Personnel System Act. 
However, as currently constructed, the proposed bill may have the unintended consequence of creating disincentives, until fiscal year 2009, for DOD to ensure that it has the most effective and efficient organizational structure in place. This is because, based on our understanding of the bill’s language, if DOD were to reorganize, outsource, or undertake other major change initiatives through 2008 in an organizational or functional unit that is part of NSPS, DOD may still be required to allocate an overall amount for compensation to the reorganized unit based on the number and mix of employees in place prior to conversion into NSPS. In other words, if priorities shift and DOD needs to downsize a unit in NSPS significantly, it may still be required that the downsized unit’s overall compensation level remain the same as it would have been in the absence of the downsizing. While pay protections during a transition period are generally appropriate to build employee support for the changes, we believe that, should the Congress decide to require overall organizational compensation protection, it should build in additional flexibilities for DOD to make adjustments in response to changes in the size of organizations, mix of employees, and other relevant factors. The current allowable total annual compensation limit for senior executives would be increased up to the Vice President's total annual compensation (base pay, locality pay, and awards and bonuses) in the proposed National Security Personnel System Act and the House National Defense Authorization Act for Fiscal Year 2004. In addition, the highest rate of (base) pay for senior executives would be increased in the House version of the authorization bill. 
The Homeland Security Act provided that OPM, with the concurrence of the Office of Management and Budget, certify that agencies have performance appraisal systems that, as designed and applied, make meaningful distinctions based on relative performance before an agency could increase its total annual compensation limit for senior executives. While the House version of DOD’s fiscal year 2004 authorization bill would still require an OPM certification process to increase the highest rate of pay for senior executives, neither the proposed National Security Personnel System Act nor the House bill would require such a certification for increasing the total annual compensation limit for senior executives. To be generally consistent with the Homeland Security Act, we believe that the Congress should require that OPM certify that the DOD senior executive service (SES) performance management system makes meaningful distinctions in performance and employs the other practices used by leading organizations to develop effective performance management systems, including establishing a clear, direct connection between (1) SES performance ratings and rewards and (2) the degree to which the organization achieved its goals. DOD would be required to receive the OPM certification before it could increase the total annual compensation limit and/or the highest rate of pay for its senior executives. The National Security Personnel System Act contains a number of provisions designed to give DOD flexibility to help obtain key critical talent. It allows DOD greater flexibility to (1) hire experts and pay them special rates for temporary periods up to six years, and (2) define benefits for certain specialized overseas employees. Specifically, the Secretary would have the authority to establish a program to attract highly qualified experts in needed occupations with the flexibility to establish the rate of pay, eligibility for additional payments, and terms of the appointment. 
These authorities give DOD considerable flexibility to obtain and compensate individuals and exempt them from several provisions of current law. Consistent with our earlier suggestions, the bill would limit the number of experts employed at any one time to 300. The Congress should also consider requiring that these provisions only be used to fill critically needed skills identified in a DOD strategic human capital plan, and that DOD report on the use of the authorities under these sections periodically. As I mentioned at the outset of my statement today, the consideration of human capital reforms for DOD naturally suggests opportunities for governmentwide reform as well. The following provides some suggestions in that regard. We believe that the Congress should consider providing governmentwide authority to implement broad banding, other pay for performance systems, and other personnel authorities whereby whole agencies are allowed to use additional authorities after OPM has certified that they have the institutional infrastructures in place to make effective and fair use of those authorities. To obtain additional authority, an agency should be required to have an OPM-approved human capital plan that is fully integrated with the agency’s strategic plan. These plans need to describe the agency’s critical human capital needs and how the new provisions will be used to address the critical needs. The plan should also identify the safeguards or other measures that will be applied to ensure that the authorities are carried out fairly and in a manner consistent with merit system principles and other national goals. Furthermore, the Congress should establish statutory principles for the standards that an agency must have in place before OPM can grant additional pay flexibilities. The standards for DOD’s performance management system contained in the National Security Personnel System Act are the appropriate place to start. 
An agency would have to demonstrate, and OPM would have to certify, that a modern, effective, credible, and, as appropriate, validated performance management system with adequate safeguards, including reasonable transparency and appropriate accountability mechanisms, is in place to support more performance-based pay and related personnel decisions before the agency could put the new system in operation. OPM should be required to act on any individual certifications within prescribed time frames (e.g., 30–60 days). Consistent with our suggestion to have DOD evaluate and report on its efforts, agencies should also be required to evaluate the use of any new pay or other human capital authorities periodically. Such evaluations, in consultation with or subject to review of OPM, could be broadly modeled on the evaluation requirements of OPM’s personnel demonstration program. Additional efforts should be undertaken to move the SES to an approach where pay and rewards are more closely tied to performance. This is consistent with the proposed Senior Executive Service Reform Act of 2003. Any effort to link pay to performance presupposes that effective, results-oriented strategic and annual performance planning and reporting systems are in place in an agency. That is, agencies must have a clear understanding of the program results to be achieved and the progress that is being made toward those intended results if they are to link pay to performance. The SES needs to take the lead in matters related to pay for performance. We believe it would be highly desirable for the Congress to establish a governmentwide fund where agencies, based on a sound business case, could apply to OPM for funds to be used to modernize their performance management systems and ensure that those systems have adequate safeguards to prevent abuse. Too often, agencies lack the performance management systems needed to effectively and fairly make pay and other personnel decisions. 
The basic idea of a governmentwide fund would be to provide for targeted investments needed to prepare agencies to use their performance management systems as strategic tools to achieve organizational results and drive cultural change. Building such systems and safeguards will likely require making targeted investments in agencies’ human capital programs, as our own experience has shown. (If successful, this approach to targeted investments could be expanded to foster and support agencies’ related transformation efforts, including other aspects of the High Performing Organization concept recommended by the Commercial Activities Panel.) Finally, we also believe that the Congress should enact additional targeted and governmentwide human capital reforms for which there is a reasonable degree of consensus. Many of the provisions in the proposed Federal Workforce Flexibility Act of 2003 and the governmentwide human capital provisions of the House version of DOD’s fiscal year 2004 authorization bill fall into this category. Since we designated strategic human capital management as a governmentwide high-risk area in January 2001, the Congress, the administration, and agencies have taken steps to address the federal government’s human capital shortfalls. In a number of statements before the Congress over the last 2 years, I have urged the government to seize on the current momentum for change and enact lasting improvements. Significant progress has been—and is being—made in addressing the federal government’s pressing human capital challenges. But experience has shown that in making major changes in the cultures of organizations, how it is done, when it is done, and the basis on which it is done can make all the difference in whether we are ultimately successful. DOD and other agency-specific human capital reforms should be enacted to the extent that the problems being addressed and the solutions offered are specific to particular agencies. 
A governmentwide approach should be used to address certain flexibilities that have broad-based application and serious potential implications for the civil service system, in general, and OPM, in particular. This approach will help to accelerate needed human capital reform in DOD and throughout the rest of the federal government. Chairman Collins and Members of the Committee, this concludes my prepared statement. I would be pleased to respond to any questions that you may have. For further information about this statement, please contact Derek B. Stewart, Director, Defense Capabilities and Management, on (202) 512-5140 or at stewartd@gao.gov. For further information on governmentwide human capital issues, please contact J. Christopher Mihm, Director, Strategic Issues, on (202) 512-6806 or at mihmj@gao.gov. Major contributors to this testimony included William Doherty, Bruce Goddard, Hilary Murrish, Lisa Shames, Edward H. Stephenson, Martha Tracy, and Michael Volpe. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
People are at the heart of an organization's ability to perform its mission. Yet a key challenge for the Department of Defense (DOD), as for many federal agencies, is to strategically manage its human capital. DOD's proposed National Security Personnel System would provide for wide-ranging changes in DOD's civilian personnel pay and performance management and other human capital areas. Given the massive size of DOD, the proposal has important precedent-setting implications for federal human capital management. This testimony provides GAO's observations on DOD human capital reform proposals and the need for governmentwide reform. GAO strongly supports the need for government transformation and the concept of modernizing federal human capital policies both within DOD and for the federal government at large. The federal personnel system is clearly broken in critical respects--designed for a time and workforce of an earlier era and not able to meet the needs and challenges of today's rapidly changing and knowledge-based environment. The human capital authorities being considered for DOD have far-reaching implications for the way DOD is managed as well as significant precedent-setting implications for the rest of the federal government. GAO is pleased that as the Congress has reviewed DOD's legislative proposal it has added a number of important safeguards, including many along the lines GAO has been suggesting, that will help DOD maximize its chances of success in addressing its human capital challenges and minimize the risk of failure. More generally, GAO believes that agency-specific human capital reforms should be enacted to the extent that the problems being addressed and the solutions offered are specific to a particular agency (e.g., military personnel reforms for DOD). Several of the proposed DOD reforms meet this test. 
In GAO's view, the relevant sections of the House's version of the National Defense Authorization Act for Fiscal Year 2004 and the proposal that is being considered as part of this hearing contain a number of important improvements over the initial DOD legislative proposal. Moving forward, GAO believes it would be preferable to employ a governmentwide approach to address human capital issues and the need for certain flexibilities that have broad-based application and serious potential implications for the civil service system, in general, and the Office of Personnel Management, in particular. GAO believes that several of the reforms that DOD is proposing fall into this category (e.g., broad banding, pay for performance, re-employment and pension offset waivers). In these situations, GAO believes it would be both prudent and preferable for the Congress to provide such authorities governmentwide and ensure that appropriate performance management systems and safeguards are in place before the new authorities are implemented by the respective agency. Importantly, employing this approach is not intended to delay action on DOD's or any other individual agency's efforts, but rather to accelerate needed human capital reform throughout the federal government in a manner that ensures reasonable consistency on key principles within the overall civilian workforce. This approach also would help to maintain a level playing field among federal agencies in competing for talent and would help avoid further fragmentation within the civil service.
The federal government’s information resources and technology management structure has its foundation in six laws: the Federal Records Act, the Privacy Act of 1974, the Computer Security Act of 1987, the Paperwork Reduction Act of 1995, the Clinger-Cohen Act of 1996, and the Government Paperwork Elimination Act of 1998. Taken together, these laws largely lay out the information resources and technology management responsibilities of the Office of Management and Budget (OMB), federal agencies, and other entities, such as the National Institute of Standards and Technology. In general, under the government’s current legislative framework, OMB is responsible for providing direction on governmentwide information resources and technology management and overseeing agency activities in these areas, including analyzing major agency information technology investments. Among OMB’s responsibilities are ensuring agency integration of information resources management plans, program plans, and budgets for acquisition and use of information technology and the efficiency and effectiveness of interagency information technology initiatives; developing, as part of the budget process, a mechanism for analyzing, tracking, and evaluating the risks and results of all major capital investments made by an executive agency for information systems; directing and overseeing implementation of policy, principles, standards, and guidelines for the dissemination of and access to public information; encouraging agency heads to develop and use best practices in reviewing proposed agency information collections to minimize information collection burdens and maximize information utility and benefit; and developing and overseeing implementation of privacy and security policies, principles, standards, and guidelines. Agencies, in turn, are accountable for the effective and efficient development, acquisition, and use of information technology in their organizations. 
For example, the Paperwork Reduction Act of 1995 and the Clinger-Cohen Act of 1996 require agency heads, acting through agency CIOs, to better link their information technology planning and investment decisions to program missions and goals; develop and implement a sound information technology architecture; implement and enforce information technology management policies, procedures, standards, and guidelines; establish policies and procedures for ensuring that information technology systems provide reliable, consistent, and timely financial or program performance data; and implement and enforce applicable policies, procedures, standards, and guidelines on privacy, security, disclosure, and information sharing. Another important organization in federal information resources and technology management—the CIO Council—was established by the President in July 1996. Specifically, Executive Order 13011 established the CIO Council as the principal interagency forum for improving agency practices on such matters as the design, modernization, use, sharing, and performance of agency information resources. The Council, chaired by OMB’s Deputy Director for Management with a Vice Chair selected from among its members, is tasked with (1) developing recommendations for overall federal information technology management policy, procedures, and standards, (2) sharing experiences, ideas, and promising practices, (3) identifying opportunities, making recommendations for, and sponsoring cooperation in using information resources, (4) assessing and addressing workforce issues, (5) making recommendations and providing advice to appropriate executive agencies and organizations, and (6) seeking the views of various organizations. Because it is essentially an advisory body, the CIO Council must rely on OMB’s support to see that its recommendations are implemented through federal information management policies, procedures, and standards. 
With respect to Council resources, according to its charter, OMB and the General Services Administration are to provide support and assistance, which can be augmented by other Council members as necessary. CIOs or equivalent positions exist at the state level and in other countries, although no single preferred model has emerged. The specific roles, responsibilities, and authorities assigned to the CIO or CIO-type position vary, reflecting the needs and priorities of the particular government. This is consistent with research presented in our Executive Guide: Maximizing the Success of Chief Information Officers—Learning from Leading Organizations, which points out that there is no one right way to establish a CIO position and that leading organizations are careful to ensure that information management leadership positions are appropriately defined and implemented to meet their unique business needs. Regardless of the differences in approach, the success of a CIO will typically rest on the application of certain fundamental principles. While our executive guide was specifically intended to help individual federal agencies maximize the success of their CIOs, several of the principles outlined in the guide also apply to the establishment of a governmentwide CIO. In particular, our research of leading organizations demonstrated that it is important for the organization to employ enterprisewide leaders who embrace the critical role of information technology and reach agreement on the CIO’s leadership role. Moreover, the CIO must possess sufficient stature within the organization to influence the planning process. We have not evaluated the effectiveness of state and foreign government CIOs or equivalent positions; however, these positions appear to apply some of these same principles. With respect to the states, according to the National Association of State Information Resource Executives, the vast majority have senior executives with statewide authority for IT. 
State CIOs are usually in charge of developing statewide IT plans and approving statewide technical IT standards, budgets, personnel classifications, salaries, and resource acquisitions, although the CIO’s authority depends on the specific needs and priorities of the governors. Many state CIOs report directly to the state’s governor, and the trend is moving in that direction. In some cases, the CIO is guided by an IT advisory board. As the president of the National Association of State Information Resource Executives noted in prior testimony before this Subcommittee, “IT is how business is delivered in government; therefore, the CIO must be a party to the highest level of business decisions . . . needs to inspire the leaders to dedicate political capital to the IT agenda.” National governments in other countries have also established a central information technology coordinating authority and, like the states, have used different implementation approaches in doing so. Preliminary results of a recent survey conducted by the International Council for Information Technology in Government Administration indicate that 8 of 11 countries surveyed have a governmentwide CIO, although the structure, roles, and responsibilities varied. Let me briefly describe the approaches employed by three foreign governments to illustrate this variety. Australia’s Department of Communications, Information Technology and the Arts has responsibility for, among other things, (1) providing strategic advice and support to the government for moving Australia ahead in the information economy and (2) developing policies and procedures and helping to coordinate crosscutting efforts toward e-government. 
The United Kingdom’s Office of the E-Envoy acts in a capacity analogous to a “national government” CIO in that it works to coordinate activities across government and with public, private, and international groups to (1) develop a legal, regulatory and fiscal environment that facilitates e-commerce, (2) help individuals and businesses take full advantage of the opportunities provided by information and communications technologies, (3) ensure that the government of the United Kingdom applies global best practices in its use of information and communications technologies, and (4) ensure that government and business decisions are informed by reliable and accurate e-commerce monitoring and analysis. Canada’s Office of the CIO is contained within the Treasury Board Secretariat, a crosscutting organization whose mission is to manage the government’s human, financial, information, and technology resources. The CIO is responsible for determining and implementing a strategy that will accomplish governmentwide IT goals. Moreover, the CIO is to (1) provide leadership, coordination and broad direction in the use of IT; (2) facilitate enterprisewide solutions to crosscutting IT issues; and (3) serve as technology strategist and expert adviser to Treasury Board Ministers and senior officials across government. The CIO also develops a Strategic Directions document that focuses on the management of critical IT, information management, and service delivery issues facing the government. This document is updated regularly and is used by departments and agencies as a guide. While these countries’ approaches differ in terms of specific CIO or CIO-type roles and responsibilities, in all cases the organization has responsibility for coordinating governmentwide implementation of e-government and providing leadership in the development of the government’s IT strategy and standards. As you know, the Congress is currently considering legislation to establish a federal CIO. 
Specifically, two proposals before this Subcommittee—H.R. 4670, the Chief Information Officer of the United States Act of 2000, and H.R. 5024, the Federal Information Policy Act of 2000—share a common call for central IT leadership from a federal CIO, although they differ in how the roles, responsibilities, and authorities of the position would be established. Several similarities exist in the two bills: Both elevate the visibility and focus of information resources and technology management by establishing a federal CIO who (1) is appointed by the President with the advice and consent of the Senate, (2) reports directly to the President, (3) is a Cabinet-level official, and (4) provides central leadership. The importance of such high-level visibility should not be underestimated. Our studies of leading public and private-sector organizations have found that successful CIOs commonly are full members of executive management teams. Both leave intact OMB’s role and responsibility to review and ultimately approve agencies’ information technology funding requests for inclusion in the President’s budget submitted to the Congress each year. However, both require the federal CIO to review and recommend to the President and the Director of OMB changes to the IT budget proposals submitted by agencies. As we have previously testified before your Subcommittee, an integrated approach to budgeting and feedback is absolutely critical for progress in government performance and management. Certainly, close coordination between the federal CIO and OMB would be necessary to align the CIO’s technical oversight with OMB’s budget responsibilities. Finally, both bills establish the existing federal CIO Council in statute. Just as with the Chief Financial Officers’ Council, there are important benefits associated with having a strong statutory base for the CIO Council. 
Legislative foundations transcend presidential administrations, fluctuating policy agendas, and the frequent turnover of senior appointees in the executive branch. Having congressional consensus and support for the Council helps ensure continuity of purpose over time and allows constructive dialogue between the two branches of government on rapidly changing management and information technology issues before the Council. Moreover, because the Congress is a prime user of performance and financial information, having the Council statutorily based can provide it with an effective oversight tool for gauging the progress and impact of the Council in advancing effective involvement of agency CIOs in governmentwide IT initiatives. The two bills also set forth duties that are consistent with, and expand upon, the duties of the current CIO Council. For example, the Council would be responsible for coordinating the acquisition and provision of common infrastructure services to facilitate communication and data exchange among agencies and with state, local, and tribal governments. While the bills have similarities, their contrasting approaches produce major differences. In particular, H.R. 5024 vests in the federal CIO the information resources and technology management responsibilities currently assigned to OMB as well as oversight of related activities of the General Services Administration and promulgation of information system standards developed by the National Institute of Standards and Technology. On the other hand, H.R. 4670 generally does not change the responsibilities of these agencies; instead it calls on the federal CIO to advise agencies and the Director of OMB and to consult with nonfederal entities, such as state governments and the private sector. Appendix I provides more detail on how the information resources and technology management functions granted to the federal CIO compare between the two bills and with OMB’s current responsibilities. 
Let me turn now to a few implementation issues associated with both of these bills. One such issue common to both is that effective implementation will require that appropriate presidential attention and support be given to the new federal CIO position and that adequate resources, including staffing and funding, be provided. As discussed below, each bill likewise has unique strengths and challenges. H.R. 4670: This bill creates an Office of Information Technology within the Executive Office of the President, headed by a federal CIO, with a limit of 12 staff. Among the duties assigned to the CIO are (1) providing leadership in innovative use of information technology, (2) identifying opportunities and coordinating major multi-agency information technology initiatives, and (3) consulting with leaders in information technology management in state governments, the private sector, and foreign governments. OMB’s statutory responsibilities related to information resources and technology management would remain largely unchanged under this bill. One strength of this bill is that it would allow a federal CIO to focus full-time attention on promoting key information technology policy and crosscutting issues within government and in partnership with other organizations without direct responsibility for implementation and oversight, which would remain the responsibility of OMB and the agencies. Moreover, the federal CIO could promote collaboration among agencies on crosscutting issues, adding Cabinet-level support to efforts now initiated and sponsored by the CIO Council. Further, the federal CIO could establish and/or buttress partnerships with state, local, and tribal governments, the private sector, or foreign entities. 
Such partnerships were key to the government’s Year 2000 (Y2K) success and could be essential to addressing other information technology issues, such as critical infrastructure protection, since private-sector systems control most of our nation’s critical infrastructures (e.g., energy, telecommunications, financial services, transportation, and vital human services). A major challenge associated with H.R. 4670’s approach, on the other hand, is that federal information technology leadership would be shared. While the CIO would be the President’s principal adviser on these issues, OMB would retain critical statutory responsibilities in this area. For example, both the federal CIO and OMB would have a role in overseeing the government’s IT and interagency initiatives. Certainly, it would be crucial for the OMB Director and the federal CIO to mutually support each other and work effectively together to ensure that their respective roles and responsibilities are clearly communicated. Without a mutually constructive working relationship with OMB, the federal CIO’s ability to achieve the potential improvements in IT management and cross-agency collaboration would be impaired. H.R. 5024: This bill establishes an Office of Information Policy within the Executive Office of the President, headed by a federal CIO. The bill would substantially change the government’s existing statutory information resources and technology management framework because it shifts much of OMB’s responsibilities in these areas to the federal CIO. For example, it calls for the federal CIO to develop and oversee the implementation of policies, principles, standards, and guidance with respect to (1) information technology, (2) privacy and security, and (3) information dissemination. A strength of this approach would be the single, central focus for information resources and technology management in the federal government. 
A primary concern we have with OMB’s current structure as it relates to information resources and technology management is that, in addition to their responsibilities in these areas, both the Deputy Director for Management and the Administrator of the Office of Information and Regulatory Affairs (OIRA) have other significant duties, which necessarily restrict the amount of attention that they can give to information resources and technology management issues. For example, much of OIRA is staffed to act on 3,000 to 5,000 information collection requests from agencies per year, to review about 500 proposed and final rules each year, and to calculate the costs and benefits of all federal regulations. A federal CIO, like agency CIOs, should be primarily concerned with information resources and technology management. This bill would clearly address this concern. Another important strength of H.R. 5024 is that the federal CIO would be the sole central focus for information resources and technology management and could be used to resolve potential conflicts stemming from differing perspectives or goals within the executive branch agencies. In contrast, a major challenge associated with implementing H.R. 5024 is that by removing much of the responsibility for information resources and technology management from OMB, the federal CIO could lose the leverage associated with OMB’s budget-review role. A strong linkage with the budget formulation process is often a key factor in gaining serious attention for management initiatives throughout government, and it reinforces the priorities of federal agencies’ management goals. Regardless of approach, we agree that strong and effective central information resources and technology management leadership is needed in the federal government. A central focal point such as a federal CIO can play the essential role of ensuring that attention in these areas is sustained. 
Increasingly, the challenges the government faces are multidimensional problems that cut across numerous programs, agencies, and governmental tools. Although the respective departments and agencies should have the primary responsibility and accountability to address their own issues—and both bills maintain these agency roles—central leadership has the responsibility to keep everybody focused on the big picture by identifying the agenda of governmentwide issues needing attention and ensuring that related efforts are complementary rather than duplicative. Another task facing central leadership is serving as a catalyst and strategist to prompt agencies and other critical players to come to the table and take ownership for addressing the agenda of governmentwide information resources and technology management issues. In the legislative deliberations on the Clinger-Cohen Act, we supported strengthened central management through the creation of a formal CIO position for the federal government. A CIO for the federal government could provide a strong, central point of coordination for the full range of governmentwide information resources management and technology issues, including (1) reengineering and/or consolidating interagency or governmentwide process and technology infrastructure; (2) managing shared assets; and (3) evaluating the attention, progress, and assistance provided to high-risk, complex information systems modernization efforts. In particular, a federal CIO could provide sponsorship, direction, and sustained focus on the major challenges the government is facing in areas such as critical infrastructure protection and security, e-government, and large-scale IT investments. 
For example, to be successful, e-government initiatives designed to improve citizen access to government must overcome some of the basic challenges that have plagued information systems for decades: lack of executive-level sponsorship, involvement, and controls, and inadequate attention to business and technical architectures, adherence to standards, and security. In the case of e-government, a CIO could (1) help set priorities for the federal government; (2) ensure that agencies consider interagency web site possibilities, including how best to implement portals or central web access points that provide citizens access to similar government services; and (3) help establish funding priorities, especially for crosscutting e-government initiatives. The government’s success in combating the Year 2000 problem demonstrated the benefit of strong central leadership. As our Year 2000 lessons learned report being released today makes clear, the leadership of the Chair of the President’s Council on Year 2000 Conversion was invaluable in combating the Year 2000 problem. Under the Chair’s leadership, the government’s actions went beyond the boundaries of individual programs or agencies and involved governmentwide oversight, interagency cooperation, and cooperation with partners, such as state and local governments, the private sector, and foreign governments. It is important to maintain this same momentum of executive-level attention to information management and technology decisions within the federal government. The information issues confronting the government in the new Internet-based technology environment rapidly evolve and carry significant impact for future directions. A federal CIO could maintain and build upon Y2K actions in leading the government’s future IT endeavors. 
Accordingly, our Y2K lessons learned report calls for the Congress to consider establishing a formal chief information officer position for the federal government to provide central leadership and support. Consensus has not been reached within the federal community on the need for a federal CIO. Department and agency responses to questions developed by the Chairman and Ranking Minority Member of the Senate Committee on Governmental Affairs regarding opinions about the need for a federal CIO revealed mixed reactions. In addition, at our March 2000 Y2K Lessons Learned Summit, which included a broad range of public and private-sector IT managers and policymakers, some participants did not agree or were uncertain about whether a federal CIO was needed. Further, in response to a question before this Subcommittee on the need for a federal IT leader accountable to the President, the Director of OMB stated that OMB’s Deputy Director for Management, working with the head of the Office of Information and Regulatory Affairs, can be expected to take a federal information technology leadership role. The Director further stated that he believed that “the right answer is to figure out how to continue to use the authority and the leadership responsibilities at the Office of Management and Budget to play a lead role in this area.” In conclusion, Mr. Chairman, the two bills offered by members of this Subcommittee both deal with the need for central leadership, while addressing the sharing of responsibilities with OMB in different ways. Both bills offer different approaches to problems that have been identified and should be dealt with in order to increase the government’s ability to use the information resources at its disposal effectively, securely, and with the best service to the American people. Regardless of approach, a central focal point such as a federal CIO can play the essential role of ensuring that attention to information technology issues is sustained. Mr. 
Chairman, this concludes my statement. I would be pleased to respond to any questions that you or other members of the Subcommittee may have at this time. For information about this testimony, please contact me at (202) 512-6240 or by e-mail at mcclured.aimd@gao.gov. Individuals making key contributions to this testimony include John Christian, Lester Diamond, Tamra Goldstein, Linda Lambert, Thomas Noone, David Plocher, and Tomas Ramirez.
OMB’s Current Functions
Develop, as part of the budget process, a mechanism for analyzing, tracking, and evaluating the risks and results of all major capital investments made by an executive agency for information systems.
Review and recommend to the President and the Director of OMB changes to budget and legislative proposals of agencies.
Review and recommend to the President and the Director of OMB changes to budget and legislative proposals of agencies.
Implement periodic budgetary reviews of agency information resources management activities to ascertain efficiency and effectiveness of IT in improving agency mission performance.
Advise and assist the Director of OMB in developing, as part of the budget process, a mechanism for analyzing, tracking, and evaluating the risks and results of all major capital investments made by an executive agency for information systems.
Take actions through the budgetary and appropriations management process to enforce agency accountability for information resources management and IT investments, including the reduction of funds.
Implement periodic budgetary reviews of agency information resources management activities to ascertain efficiency and effectiveness of IT in improving agency mission performance.
Serves as the Chairperson of the CIO Council, established by the bill in statute. 
Request that the Director of OMB take action, including involving the budgetary or appropriations management process, to enforce agency accountability for information resources management and IT investments, including the reduction of funds.
Serves as the Chairperson of the CIO Council, established by the bill in statute.
The Deputy Director for Management serves as the Chairperson of the CIO Council, which was created by Executive Order.
In consultation with the Administrator of the National Telecommunications and Information Administration, develop and implement procedures for the use and acceptance of electronic signatures by agencies by April 21, 2000.
Advise the Director of OMB on electronic records.
In consultation with the Director of OMB and the Administrator of the National Telecommunications and Information Administration, develop and implement procedures for the use and acceptance of electronic signatures by agencies by October 1, 2000.
Develop and implement procedures to permit private employers to store and file electronically with agencies forms containing information pertaining to the employees of such employers.
In consultation with the Director of OMB, develop and implement procedures to permit private employers to store and file electronically with agencies forms containing information pertaining to the employees of such employers.
In consultation with the Administrator of the National Telecommunications and Information Administration, study and periodically report on the use of electronic signatures.
In consultation with the Director of OMB and the Administrator of the National Telecommunications and Information Administration, study and periodically report on the use of electronic signatures.
Provide direction and oversee activities of agencies with respect to the dissemination of and public access to information.
Advise the Director of OMB on information dissemination. 
Assisted by the CIO Council and others, monitor the implementation of the requirements of the Government Paperwork Elimination Act, the Electronic Signatures in Global and National Commerce Act and related laws. Provide direction and oversee activities of agencies with respect to the dissemination of and public access to information. Foster greater sharing, dissemination, and access to public information. Foster greater sharing, dissemination, and access to public information. Develop and oversee the implementation of policies, principles, standards, and guidance with respect to information dissemination. Develop and oversee the implementation of policies, principles, standards, and guidance with respect to information dissemination. Cause to be established and oversee an electronic Government Information Locator Service (GILS). Develop, coordinate, and oversee the implementation of uniform information resources management policies, principles, standards, and guidelines. Cause to be established and oversee an electronic GILS. Advise the Director of OMB on information resources management policy. Develop, coordinate, and oversee the implementation of uniform information resources management policies, principles, standards, and guidelines. OMB’s Current Functionsimplementation of best practices in information resources management. implementation of best practices in information resources management. Oversee agency integration of program and management functions with information resources management functions. Oversee agency integration of program and management functions with information resources management functions. In consultation with the Administrator of General Services, the Director of the National Institute of Standards and Technology, the Archivist of the United States, and the Director of the Office of Personnel Management, develop and maintain a governmentwide strategic plan for information resources management. 
In consultation with the Director of OMB, the Administrator of General Services, the Director of the National Institute of Standards and Technology, the Archivist of the United States, the Director of the Office of Personnel Management, and the CIO Council, develop and maintain a governmentwide strategic plan for information resources management. Initiate and review proposals for changes in legislation, regulations, and agency procedures to improve information resources management practices. Initiate and review proposals for changes in legislation, regulations, and agency procedures to improve information resources management practices. Monitor information resources management training for agency personnel. Monitor information resources management training for agency personnel. Keep the Congress informed on the use of information resources management best practices to improve agency program performance. Keep the Congress informed on the use of information resources management best practices to improve agency program performance. Periodically review agency information resources management activities. Periodically review agency information resources management activities. Report annually to the Congress on information resources management. Serve as the principal adviser to the President on matters relating to the development, application, and management of IT by the federal government. OMB’s Current Functionspolicies, principles, standards, and guidelines for IT functions and activities. Ensure that agencies integrate information resources plans, program plans, and budgets for acquisition and use of technology. Advise the President on opportunities to use IT to improve the efficiency and effectiveness of programs and operations of the federal government. information resources by the federal government. Advise the Director of OMB on IT management. 
Develop and oversee the implementation of policies, principles, standards, and guidelines for IT functions and activities, in consultation with the Secretary of Commerce and the CIO Council. Provide direction and oversee activities of agencies with respect to the acquisition and use of IT. Report annually to the President and the Congress on IT management. Promote the use of IT by the federal government to improve the productivity, efficiency, and effectiveness of federal programs. Promulgate, in consultation with the Secretary of Commerce, standards and guidelines for federal information systems. Promote agency investments in IT that enhance service delivery to the public, improve cost-effective government operations, and serve other objectives critical to the President. Oversee the effectiveness of, and compliance with, directives issued under section 110 of the Federal Property and Administrative Services Act (which established the Information Technology Fund). Review the federal information system standards setting process, in consultation with the Secretary of Commerce, and report to the President. Direct the use of the Information Technology Fund by the Administrator of General Services. Provide advice and assistance to the Administrator of the Office of Federal Procurement Policy regarding IT acquisition. Coordinate OIRA policies regarding IT acquisition with the Office of Federal Procurement Policy. Consult with leaders in state governments, the private sector, and foreign governments. Oversee the development and implementation of computer system standards and guidance issued by the Secretary of Commerce through the National Institute of Standards and Technology. Ensure that agencies integrate information resources plans, program plans, and budgets for acquisition and use of technology. Provide direction and oversee activities of agencies with respect to the acquisition and use of IT. 
Designate agencies, as appropriate, to be executive agents for governmentwide acquisitions of IT. Promote the use of IT by the federal government to improve the productivity, efficiency, and effectiveness of federal programs. Compare agency performance in using IT. Encourage use of performance- based management in complying with IT management requirements. Establish minimum criteria within 1 year of enactment to be used for independent evaluations of IT programs and management processes. OMB’s Current Functionsrespect to the performance of investments made in IT. Direct agencies to develop capital planning processes for managing major IT investments. Services with regard to the provision of any information resources-related services for or on behalf of agencies, including the acquisition or management of telecommunications or other IT or services. Direct agencies to analyze private sector alternatives before making an investment in a new information system. Direct the use of the Information Technology Fund by the Administrator of General Services. Direct agencies to undertake an agency mission reengineering analysis before making significant investments in IT to support these missions. Oversee the effectiveness of, and compliance with, directives issued under section 110 of the Federal Property and Administrative Services Act (which established the Information Technology Fund). Oversee the development and implementation of computer system standards and guidance issued by the Secretary of Commerce through the National Institute of Standards and Technology. Designate agencies, as appropriate, to be executive agents for governmentwide acquisitions of IT. Compare agency performance in using IT. Encourage use of performance- based management in complying with IT management requirements. Evaluate agency practices with respect to the performance of investments made in IT. Direct agencies to develop capital planning processes for managing major IT investments. 
OMB’s Current Functionssystem. Conduct pilot projects with selected agencies and nonfederal entities to test alternative policies and practices. Assess experiences of agencies, state and local governments, international organizations, and the private sector in managing IT. Provide leadership in the innovative use of technology by agencies through support of experimentation, testing, and adoption of innovative concepts and technologies, particularly with regard to multi- agency initiatives. Direct agencies to undertake an agency mission reengineering analysis before making significant investments in IT to support these missions. Conduct pilot projects with selected agencies and nonfederal entities to test alternative policies and practices. Provide leadership in the innovative use of technology by agencies through support of experimentation, testing, and adoption of innovative concepts and technologies, particularly with regard to multi- agency initiatives. Ensure the efficiency and effectiveness of interagency IT initiatives. Identify opportunities and coordinate major multiagency IT initiatives. Assess experiences of agencies, state and local governments, international organizations, and the private sector in managing IT. Ensure the efficiency and effectiveness of interagency IT initiatives. Issue guidance to agencies regarding interagency and governmentwide IT investments to improve the accomplishment of common missions and for the multiagency procurement of commercial IT items. Apply capital planning, investment control, and performance management requirements to national security systems to the extent practicable. Consult with the heads of agencies that operate national security systems. Issue guidance to agencies regarding interagency and governmentwide IT investments to improve the accomplishment of common missions and for the multiagency procurement of commercial IT items. Consult with the heads of agencies that operate national security systems. 
Review agency collections of information to reduce paperwork burdens on the public. Advise the Director of OMB on paperwork reduction. Apply capital planning, investment control, and performance management requirements to national security systems to the extent practicable. Provide advice and assistance to agencies and to the Director of OMB to promote efficient collection of information and the reduction of paperwork burdens on the public. OMB’s Current FunctionsProvide direction and oversee activities of agencies with respect to privacy, confidentiality, security, disclosure, and sharing of information. Advise the Director of OMB on privacy, confidentiality, security, disclosure, and sharing of information. Provide direction and oversee activities of agencies with respect to privacy, confidentiality, security, disclosure, and sharing of information. Develop and oversee the implementation of policies, principles, standards, and guidelines on privacy, confidentiality, security, disclosure and sharing of agency information. Develop and oversee the implementation of policies, principles, standards, and guidelines on privacy, confidentiality, security, disclosure and sharing of agency information. Oversee and coordinate compliance with the Privacy Act, the Freedom of Information Act, the Computer Security Act, and related information management laws. Oversee and coordinate compliance with the Privacy Act, the Freedom of Information Act, the Computer Security Act, and related information management laws. Require federal agencies, consistent with the Computer Security Act, to identify and afford security protections commensurate with the risk and magnitude of the harm resulting from the loss, misuse, or unauthorized access to or modification of agency information. 
Require federal agencies, consistent with the Computer Security Act, to identify and afford security protections commensurate with the risk and magnitude of the harm resulting from the loss, misuse, or unauthorized access to or modification of agency information collected or maintained. Review agency computer security plans required by the Computer Security Act. Oversee agency compliance with the Privacy Act. Establish governmentwide policies for promoting risk-based management of information security as an integral component of each agency’s business operations. Direct agencies to use best security practices, develop an agencywide security plan, and apply information security requirements throughout the information system life cycle. Review agency computer security plans required by the Computer Security Act. Oversee agency compliance with the Privacy Act. OMB’s Current FunctionsProvide direction and oversee activities of agencies with respect to records management activities. Advise the Director of OMB on records management. Provide direction and oversee activities of agencies with respect to records management activities. Provide advice and assistance to the Archivist of the United States and the Administrator of General Services to promote coordination of records management with information resources management requirements. Provide advice and assistance to the Archivist of the United States and the Administrator of General Services to promote coordination of records management with information resources management requirements. Review agency compliance with requirements and regulations. Review agency compliance with requirements and regulations. Oversee the application of records management policies, principles, standards, and guidelines in the planning and design of information systems. Provide direction and oversee activities of agencies with respect to statistical activities. Advise the Director of OMB on statistical policy and coordination. 
Oversee the application of records management policies, principles, standards, and guidelines in the planning and design of information systems. Provide direction and oversee activities of agencies with respect to statistical activities. Coordinate the activities of the federal statistical system. Coordinate the activities of the federal statistical system. Ensure that agency budget proposals are consistent with systemwide priorities for maintaining and improving the quality of federal statistics. Consult with the Director of OMB to ensure that agency budget proposals are consistent with systemwide priorities for maintaining and improving the quality of federal statistics. Develop and oversee governmentwide statistical policies, principles, standards, and guidelines. Develop and oversee governmentwide statistical policies, principles, standards, and guidelines. Evaluate statistical program performance and agency compliance with governmentwide statistical policies, principles, standards, and guidelines. Evaluate statistical program performance and agency compliance with governmentwide statistical policies, principles, standards, and guidelines. Promote the sharing of information collected for statistical purposes. Promote the sharing of information collected for statistical purposes. Coordinate U.S. participation in international statistical activities. OMB’s Current Functionsinternational statistical activities. Establish an Interagency Council on Statistical Policy, headed by an appointed chief statistician. Establish an Interagency Council on Statistical Policy, headed by an appointed chief statistician. Provide opportunities for training in statistical policy. Provide opportunities for training in statistical policy. H.R. 
4670 specifically authorizes the CIO to advise the Director of OMB to “ensure effective implementation of the functions and responsibilities assigned under chapter 35 of title 44, United States Code.” These functions include electronic records (through the Government Paperwork Elimination Act of 1998), information dissemination, information resources management policy, information technology management, paperwork reduction, privacy and security, records management, and statistical policy and coordination. (512023)
Pursuant to a congressional request, GAO discussed the creation of a federal chief information officer (CIO), focusing on the: (1) structure and responsibilities of existing state and foreign governmentwide CIO models; (2) federal CIO approaches proposed by two bills; and (3) type of leadership responsibilities that a federal CIO should possess. GAO noted that: (1) GAO has not evaluated the effectiveness of state and foreign government CIOs or equivalent positions--however, these positions appear to apply some of the same principles outlined in GAO's CIO executive guide; (2) state CIOs are usually in charge of developing statewide information technology (IT) plans and approving statewide IT standards, budgets, personnel classifications, salaries, and resource acquisitions; (3) national governments in other countries have also established a central IT coordinating authority and have different implementation approaches in doing so; (4) Congress is considering legislation to establish a federal CIO; (5) two proposals--H.R. 4670, the Chief Information Officer of the United States Act of 2000, and H.R.
5024, the Federal Information Policy Act of 2000--share a common call for central IT leadership from a federal CIO, although they differ in how the roles, responsibilities, and authorities of the position would be established; (6) regardless of approach, strong and effective central information resources and technology management leadership is needed in the federal government; (7) a central focal point such as a federal CIO can play the essential role of ensuring that attention in these areas is sustained; (8) although the respective departments and agencies should have the primary responsibility and accountability to address their own issues--and both bills maintain these agency roles--central leadership has the responsibility to keep everybody focused on the big picture by identifying the agenda of governmentwide issues needing attention and ensuring that related efforts are complementary rather than duplicative; (9) another task facing central leadership is serving as a catalyst and strategist to prompt agencies and other critical players to come to the table and take ownership for addressing the agenda of governmentwide information resources and technology management issues; (10) a federal CIO could provide sponsorship, direction, and sustained focus on the major challenges the government is facing in areas such as critical infrastructure protection and security, e-government, and large-scale IT investments; and (11) consensus has not been reached within the federal community on the need for a federal CIO.
Health care quality measures are standard, evidence-based metrics designed to assess the performance of health care providers, such as hospitals, in providing care. These measures are intended to (1) inform providers about opportunities for potential improvements in their delivery of care, (2) incentivize providers to consistently provide high quality care, and (3) inform consumers about which providers are most likely to deliver high quality care. There are broad categories of clinical quality measures that address various aspects of quality of care. See table 1 for a description of these broad categories of quality measures. These broad measure categories can be further broken down into more specific groups of related measures. For example, outcome measures can include measures of patient safety, such as the incidence of healthcare-associated infections (HAI) or complications, as well as measures of hospital readmissions and mortality, and results obtained from ambulatory care, such as the proportion of patients with hypertension whose blood pressure is reduced to the normal range. The data used to calculate the results of health care quality measures can come from a number of different sources. Some measures require detailed clinical information obtained from patient medical records, such as process measures that indicate whether timely and effective care was provided in a specific situation, for example, whether stroke patients received clot-dissolving medication appropriately. Other measures are designed to use information on patient demographics and diagnoses that can be obtained from more readily accessible sources, such as claims data or other administrative data that have already been collected for other purposes such as billing. In addition, patients can be asked directly, usually through surveys, to report on their experiences receiving care.
One key method for disseminating information on health care quality measures to consumers—including which providers are delivering high or low quality care and the costs of care—is through websites that can convey this information to anyone with internet access. HHS is one of the organizations that provide such information to consumers and others. Specifically, HHS's Centers for Medicare & Medicaid Services (CMS) maintains a series of websites that provide information on health care quality, including websites for hospitals, nursing homes, and certain other providers that participate in the Medicare program. Since 2005, CMS has increased the number of health care quality measures it posts on one of its websites, known as Hospital Compare, which covers more than 4,000 hospitals that participate in the Medicare program. These hospitals supply data to CMS for quality measures of inpatient and outpatient care in return for higher payments on their Medicare claims. Each year HHS goes through a formal process, including receiving input from experts and stakeholders, to review and revise the mix of quality measures that these hospitals report. The Hospital Compare website allows anyone with internet access to select up to three hospitals to compare their performance on each of these measures side-by-side. CMS uses contractors to collect and process the data submitted by individual hospitals, and it posts the results on each of the quality measures on the Hospital Compare website. In implementing a requirement under the Choice Act, VA reports some of the 79 possible health care quality measures reported by non-VA hospitals on HHS's Hospital Compare website. As of June 2017, VA was reporting 35 measures that it had determined were applicable to its individual medical centers. Of the remaining 44 measures that non-VA hospitals report on Hospital Compare, VA plans on reporting on 12 measures in future years and does not plan to report 32 measures.
See table 2 for a summary of VA's reporting of quality measures on Hospital Compare. The 35 measures VA reported on Hospital Compare as of June 2017 include all of the available patient experience measures (e.g., patient perspectives on how well physicians communicated with them), most of the process measures (e.g., whether stroke patients received appropriate clot-dissolving drugs), and some of the outcome measures (e.g., whether pneumonia patients were readmitted to the hospital within 30 days). (See app. I for more detailed information on the specific measures VA reports on Hospital Compare.) VA officials told us that when they began to report quality measures to Hospital Compare in 2010, the measures they first decided to report included measures that VA was already reporting for other purposes. For example, these included process measures of timely and effective inpatient care that VA reported for Joint Commission hospital accreditation. According to VA officials, choosing quality measures it already reported for other purposes minimized the additional resources needed to report the measures on Hospital Compare. From 2017 through 2019, VA plans on reporting an additional 11 outcome measures on Hospital Compare. These include 6 measures of various HAIs at VA medical centers (VAMCs), 4 measures of mortality and readmission rates associated with additional medical conditions and procedures, such as stroke and hip/knee replacement surgery, and 1 measure related to patient safety. Specifically: Regarding the 6 additional HAI measures, VA officials told us that they will report these measures when they can develop a new data collection process that will allow them to meet the requirements for reporting HAI measures in a way that minimizes demands on VA resources.
According to VA officials, complying with the existing process for collecting data for and reporting on HAIs would require infection control staff at each VAMC to fill out forms with information on individual patients, whereas VA currently collects its own information on HAIs based on aggregated data assembled by VA Central Office staff. VA officials told us that their new data collection process for reporting HAI measures is intended to meet those Hospital Compare requirements through automating much of the required data entry at the VAMCs. However, VA officials noted that this new reporting process is still in an early stage of development, and they expressed uncertainty about how long it would take to implement the process. Regarding the 4 additional mortality and readmission measures, CMS faces challenges in integrating VA’s clinical information into the Hospital Compare database. According to both VA and CMS officials, VA has had to implement new readmissions and mortality rate measures incrementally due to the limited capacity of one of CMS’s Hospital Compare contractors to develop the programming needed to calculate these measures using VA patient data. Because the readmissions and mortality rate measures require complex programming to implement risk adjustments, based on a number of different diagnoses recorded in patient medical records over time, VA and CMS officials agreed to add only two new measures each year until VA can report all 4 measures. There are another 32 measures non-VA hospitals report on the Hospital Compare website that VA has no plans to report. VA officials determined that these measures are not relevant for VA’s health care system, given its distinctive funding sources, patient population, and health care delivery structure. 
For example, VA officials stated that they did not plan to report any of the Hospital Compare measures related to costs of care, such as Medicare spending per beneficiary, because those measures are based on Medicare payments and VA does not receive any payments from Medicare. In other cases, VA officials do not plan on reporting measures that relate to health care services that VAMCs rarely, if ever, provide, such as obstetrics and hip and knee replacements. In implementing a requirement under the Choice Act, VA has posted on the Hospital Compare website a link to a notification about the quality measures VA is not reporting. The most recent notice, dated February 2015, broadly explains that VA faces three main challenges that affect the availability of certain measures. According to the notice, these challenges relate to data quality, standard data collection processes, and funding to support the collecting and reporting of quality measures. VA officials told us that they plan to add to the notification information about the specific measures VA expects to report in the future and when it expects to report them. As of June 2017, VA reported on its own website 110 health care quality measures for its VAMCs. These include measures in many of the same categories that VA reports on Hospital Compare, but they also include several additional categories of quality measures that are not available on Hospital Compare. These additional categories address quality issues VA officials deem relevant for veterans, such as various measures of access to care. For example, the additional measures include measures of how long veterans must wait to obtain care at VAMCs and measures of the quality of care related to ambulatory care, such as colorectal cancer screening rates. See table 3 for the categories of measures VA reports on its website. Some of the specific measures reported on VA's website are the same as those reported on Hospital Compare and some are entirely different.
For example, most of the patient experience measures reported on VA’s website are the same as those reported on Hospital Compare, while the access to care measures are only reported on VA’s website. In addition, some measures reported on VA’s website are similar to, but not exactly the same as, Hospital Compare measures. For example, among similar measures, the target population may be defined somewhat differently or the result may be calculated differently. (See app. I for more information on the specific measures VA reports on its website.) In implementing a requirement under the Choice Act, VA has also posted a notification of unavailable measures on its website. This is the same February 2015 notice VA provided on the Hospital Compare website. VA publicly reports 110 health care quality measures on two separate webpages on its website—the “Access and Quality” webpage, launched in April 2017, and the “Quality of Care” webpage, which VA has used since 2008. The Access and Quality webpage is intended to be the primary source of information on quality of care at VA for veterans, according to VA officials. VA officials also told us that this webpage was developed to present quality of care information in ways that are easy for veterans and other stakeholders to understand. A link to the primary Access and Quality webpage can be found on the homepage of VA’s website, making it relatively easy to find. VA officials also told us that they retained the older Quality of Care webpage in an effort to be transparent as well as provide historical data on the many quality measures it has tracked. The older Quality of Care webpage is not linked to the homepage of VA’s website. Additionally, neither the primary Access and Quality webpage nor the older Quality of Care webpage provides any link or makes any mention of the other. See fig. 1 and fig. 2, which show the primary Access and Quality and older Quality of Care webpages, respectively. 
On the two webpages, VA reports its 110 health care quality measures across various subpages, with some measures reported on multiple subpages. Specifically, as of June 2017: VA reported 15 of its 110 quality measures on the primary Access and Quality webpage, which comprises three subpages. Two subpages focus on measures related to access (which we refer to as the "Wait Times" and "Experience With Access" subpages) and the third subpage compares how VAMCs perform relative to non-VA hospitals in their geographic area on a few selected Hospital Compare measures (which we refer to as the "Non-VA Hospital Comparison" subpage). According to VA officials, the 15 quality of care measures on the primary Access and Quality webpage were selected because they provide information that is useful to veterans in making health care choices. On the older Quality of Care webpage, which comprises four subpages, VA reported 100 quality measures. According to VA officials, each of the four subpages of this webpage was created for a specific purpose and as a result, some measures are reported on multiple subpages. We also found that VA reported five of the same quality measures on both the primary and older webpages, which include measures reflecting patient ratings of their experience in the hospital and healthcare-associated infection rates. VA officials told us that they do not plan on consolidating the information currently reported on the two webpages since they serve different purposes, as explained earlier. See table 4 for a summary of the health care quality measures reported on VA's Access and Quality and Quality of Care webpages. We found that VA's primary Access and Quality webpage generally reports health care quality of care information in ways that are more accessible and understandable than VA's older Quality of Care webpage.
We assessed both webpages using criteria we identified in prior work for evaluating how well websites present information on health care quality to the public. Specifically, we found that VA's primary webpage meets four of six presentation criteria compared with VA's older webpage, which met none of the criteria. (See table 5.) We found that VA's primary webpage does especially well at presenting information on quality of care on the two subpages that focus on access to care. In particular, these subpages provide information written in plain language with accompanying graphics; enable consumers to customize the information presented, so that they can, for example, select the types of medical appointments of interest to them; and allow users to rank order VAMCs by level of performance on a given quality measure. In contrast with VA's primary webpage, we found that VA's older Quality of Care webpage displays quality measures in ways that generally do not meet any of the presentation criteria. In particular, we noted that the information presented across the four subpages on the Quality of Care webpage shares the following key limitations: none are written in plain language or use graphics to convey key information; none summarize related quality information or organize the data to show patterns, such as rank ordering VAMCs on a given performance measure; none enable comparison of multiple VAMCs in one view, but instead require users to look up information on VAMCs one at a time; none enable customization of how the information is displayed so that users can focus on the quality measures most relevant to them; and none are advertised to potential users, with no indication provided on the VA website homepage that the subpages exist, what information they provide, and where to find them.
Representatives from two of the three veterans service organizations (VSO) we spoke with said that veterans do not use the information on the Quality of Care webpage because links to the webpage are not prominently displayed on the VA website. According to one VSO official we spoke with, when VA sought feedback from the VSOs about its overall plans for its primary Access and Quality webpage, the official found that the main advantage of this webpage is that it presents information in a more visually compelling way that allows veterans to directly compare quality information for multiple VAMCs. According to officials from the three VSOs with whom we spoke, this capability is important because veterans can be overwhelmed by information that is not conveyed in an easily digestible format. We also assessed VA's webpages using criteria we identified in prior work for evaluating the extent to which websites provide consumers with information relevant for making health care decisions. Using these criteria, we found that VA's primary Access and Quality webpage does not provide veterans with as much relevant information as VA's older Quality of Care webpage. Our analysis shows that VA's primary webpage meets only two of the seven relevance criteria. In contrast, we found that VA's older webpage performs relatively well on the relevance criteria, meeting six of the seven. (See table 6.) As table 6 shows, VA's primary webpage does not provide information on a broad range of health care services, nor does it highlight key differences in clinical quality of care. For example, the primary webpage reports only on the incidence of two types of healthcare-associated infections (HAI) and not on any other types of outcomes, such as readmission and mortality rates associated with different medical conditions. Additionally, VA's primary webpage provides only limited information about key differences in patient experiences and about the key strengths and limitations of the data reported.
Although VA officials told us that they expect to expand the number of reported measures in the future and intend to make the webpage more comprehensive, they did not identify the specific measures to be added. In contrast, VA's Quality of Care webpage performs better on the relevance criteria primarily because its four subpages together provide information on 100 different measures spread across a broader range of measure categories (see tables 4 and 6). As a result, it provides information on a broad range of services and on key differences in clinical quality of care. Furthermore, representatives of two veterans service organizations told us that much of the information reported on VA's older webpage would be relevant to veterans, including rates of hospital readmissions, mortality, and complications, none of which are included on VA's primary webpage. While VA has improved how it presents quality information to veterans through its primary Access and Quality webpage, there are still gaps in the relevance of the information reported on the webpage. Specifically, while it may not be necessary for VA to report all 110 measures on its primary webpage to meet the relevance criteria, the 15 measures reported on the Access and Quality webpage (of which 10 focus on access) clearly do not provide the same breadth of information relevant to veterans. VA officials told us that they began with the 15 quality measures they deemed most useful to veterans for making health care choices, and that in the short term they are focused on improving the user experience of the webpage.
However, until VA’s website provides information on a broader range of health care services, highlights key differences in clinical quality of care, and reports this information in a manner that is easily accessible and understandable, VA is missing an opportunity to provide veterans with relevant quality information that it has already collected to help veterans make informed decisions about their care. Studies and other evidence we reviewed indicate potential problems with the completeness and accuracy of the clinical information in patient records that is used to calculate VA’s publicly reported health care quality measures on Hospital Compare and its own website. Moreover, VA Central Office—which has responsibility for calculating and reporting health care quality measures for each VAMC—has not systematically assessed the completeness and accuracy of this clinical information across its VAMCs and the effects, if any, on the accuracy of its health care quality measures. Studies and other evidence we reviewed indicate potential problems with the completeness and accuracy of the clinical information recorded in patient medical records at some VAMCs (e.g., diagnoses given and treatments received). This is significant because the accuracy of VAMCs’ performance on quality measures that are calculated and reported by VA on Hospital Compare and its own website depends on the completeness and accuracy of this clinical information. For example, VA reports readmission measures on Hospital Compare and on its website that compare VAMCs in terms of the proportion of their patients with a medical condition, such as heart failure, who are readmitted to a hospital within 30 days of an initial inpatient stay. 
Because VAMCs and non-VA hospitals that treat healthier patients are likely to have lower readmission rates independent of the quality of care they provide, these readmission measures incorporate information about whether a hospital's patients have certain other diagnoses that indicate their overall level of health. However, this risk adjustment will not work as intended if VAMCs do not record complete and accurate information on patient diagnoses in their medical records. In particular, if some VAMCs record that information more completely and accurately than others, comparisons of readmission rates among VAMCs may be distorted. According to an independent assessment of VA's clinical documentation procedures conducted by McKinsey & Company in 2015, VA's clinical documentation as a whole falls below industry standards. The independent assessment's report looked specifically at VA's implementation of clinical documentation improvement (CDI) programs, which typically combine provider education with assessments of provider performance to promote more complete and accurate clinical documentation. The independent assessment noted that such CDI programs have been widely adopted by hospitals across the U.S. health care system and play a critical role in producing more complete and accurate clinical information. However, the assessment found that only 62 of the 134 VAMCs examined had a CDI program as of 2014. The number of VAMCs with CDI programs has increased since the study was completed; VA officials reported to us that as of July 2017, 99 VAMCs had a CDI program in place, and an additional 11 were planning to implement one. Moreover, the independent assessment found evidence suggesting that deficiencies in the completeness and accuracy of the clinical information recorded in VA patient medical records may have affected the assessed quality performance of at least some VAMCs.
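To make the potential distortion concrete, the comparison logic described above can be sketched with made-up numbers. This is an illustrative sketch only, not the actual risk-adjustment model behind VA's publicly reported readmission measures; the patient data, baseline risks, and field names are all hypothetical. It shows two facilities with identical patients and identical outcomes, where one facility simply under-records comorbidities:

```python
# Illustrative sketch only: hypothetical data and a toy risk model, not the
# actual model used for VA's publicly reported readmission measures.

def observed_rate(patients):
    """Share of index stays followed by a 30-day readmission."""
    return sum(p["readmitted_30d"] for p in patients) / len(patients)

def expected_rate(patients, risk_by_comorbidity_count):
    """Average predicted readmission risk from recorded comorbidity counts."""
    return sum(risk_by_comorbidity_count[min(p["comorbidities"], 2)]
               for p in patients) / len(patients)

# Assumed baseline risks: patients with more recorded comorbidities are more
# likely to be readmitted regardless of the quality of care they receive.
RISK = {0: 0.10, 1: 0.20, 2: 0.35}

# Two facilities with identical true case mix and identical outcomes...
facility_a = [
    {"comorbidities": 2, "readmitted_30d": True},
    {"comorbidities": 2, "readmitted_30d": False},
    {"comorbidities": 0, "readmitted_30d": False},
    {"comorbidities": 0, "readmitted_30d": False},
]
# ...except facility B under-records comorbidities (a documentation gap).
facility_b = [dict(p, comorbidities=0) for p in facility_a]

for name, patients in (("A", facility_a), ("B", facility_b)):
    o = observed_rate(patients)
    e = expected_rate(patients, RISK)
    # An observed-to-expected ratio above 1 suggests worse-than-expected care.
    print(f"Facility {name}: observed={o:.2f} expected={e:.2f} O/E={o / e:.2f}")
```

In this sketch, facility B's observed-to-expected ratio looks substantially worse than facility A's even though the care and outcomes are identical, purely because its recorded case mix appears healthier than it really is.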
Specifically, the independent assessment cited multiple instances where VAMC officials observed that their facility's assessed performance on VA's quality measures markedly improved after the facility took steps to improve the completeness and accuracy of the clinical information recorded in its patient records. We could not quantify the potential effect of incomplete and inaccurate clinical information on VAMCs' assessed quality performance without conducting an audit of the clinical information recorded at each VAMC. Other components of the independent assessment also highlighted longstanding issues with VA's electronic health record (EHR) system, which stores the clinical information used to calculate VAMC performance on the quality measures. The summary report for all 12 independent assessments noted that VA's EHR system is outdated. The EHRs used at each VAMC do not consistently use standard data elements and algorithms for recording clinical information, and this variability in underlying clinical information from one VAMC to the next has led to an inability to convey consistent and complete information across the VA health care system. In 2016, the Commission on Care determined that the deficiencies in VA's clinical documentation were so pervasive that it recommended VA procure an entirely new EHR system to replace its increasingly obsolete one. These findings are consistent with studies by the VA Office of Inspector General (OIG) that found deficiencies in VAMCs' operations that can lead to the recording of incomplete and inaccurate patient information. For example, in July 2017, a VA OIG investigation of colonoscopy practices concluded that more accurate and stringent clinical data collection would enable VA to improve its monitoring of the quality of providers' colonoscopies.
Additionally, in May 2012, the OIG found that some VAMCs do not consistently conduct reviews intended to ensure that their clinicians properly enter information into EHRs, including ensuring the appropriate use of cut-and-paste functions. In April 2010, the OIG also found that VAMC emergency departments do not consistently document all of the information that they are required to report when transferring patients to other facilities. Furthermore, as one VA official observed to us, because of the way in which they are funded, VAMCs do not have the same financial incentives that non-VA hospitals do to ensure that the clinical information stored in patient medical records is complete and accurate. Because most VA health care is funded through direct appropriations, rather than by payments of claims filed with an insurance company or with government programs such as Medicare, VA does not routinely produce the claims-based data on health care services provided to patients that non-VA hospitals and clinics typically generate in the course of doing business. Thus, VA officials told us that to calculate VAMC performance on quality measures, VA must extract data from its patient medical records specifically for this purpose. In contrast, non-VA hospitals routinely collect this information—such as data on readmissions and mortality—as part of their claims processing and reimbursement from Medicare and other health care payers. Within VA, VA Central Office is responsible for calculating and reporting the health care quality measures that VA publicly reports for each of its VAMCs and for ensuring that these measures provide accurate information on the quality of care at these facilities.
However, while studies and other evidence we reviewed indicate potential problems with the clinical information recorded at some VAMCs, VA Central Office has not determined the extent to which these problems exist across VAMCs and adversely affect the accuracy of the quality measures VA publicly reports on Hospital Compare and its own website. Specifically, VA Central Office has not conducted a systematic assessment of the completeness and accuracy of the clinical data recorded in VA patient medical records across VAMCs. (There are models for this type of assessment; see app. III for examples of methodologies that have been applied to different types of medical record systems.) When asked why they have not conducted such an assessment, VA Central Office officials told us that they have focused instead on improving the accuracy of medical coding and on specific clinical quality issues identified at individual VAMCs. However, accurate medical coding depends on the completeness and accuracy of the underlying clinical information in patient records. Moreover, VA Central Office policy assigns responsibility for monitoring the completeness and accuracy of patients' clinical information to each individual VAMC. Each VAMC has a Health Record Review Committee that is charged with monitoring the clinical documentation practices at that facility. This committee determines the frequency of medical record reviews at its facility, decides what focused reviews should occur, and identifies clinicians who record clinical information poorly, monitoring them until they improve to an acceptable rate of completeness and accuracy. According to officials in VA's Central Office who are responsible for health information management, the results of individual VAMC Health Record Review Committee reviews are not reported to VA Central Office.
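One way such a systematic assessment could work, following the gold-standard-comparison methods summarized in appendix III, is to re-abstract a sample of records and measure how often required fields are populated (completeness) and how often populated fields agree with the re-abstracted values (accuracy). The sketch below is illustrative only; the field names and sample records are hypothetical, not drawn from VA's systems:

```python
# Illustrative sketch only: hypothetical field names and sample records.
# Completeness = share of required fields that are populated; accuracy =
# share of populated fields that agree with a gold-standard chart review.

REQUIRED_FIELDS = ["primary_diagnosis", "comorbidities", "discharge_disposition"]

def assess(ehr_records, gold_records):
    """Compare sampled EHR entries against independently re-abstracted records."""
    populated = agree = total = 0
    for ehr, gold in zip(ehr_records, gold_records):
        for field in REQUIRED_FIELDS:
            total += 1
            value = ehr.get(field)
            if value not in (None, ""):
                populated += 1
                if value == gold[field]:
                    agree += 1
    completeness = populated / total
    accuracy = agree / populated if populated else 0.0
    return completeness, accuracy

ehr_sample = [
    {"primary_diagnosis": "heart failure", "comorbidities": "diabetes",
     "discharge_disposition": "home"},
    {"primary_diagnosis": "pneumonia", "comorbidities": "",  # missing entry
     "discharge_disposition": "home"},                       # disagrees with gold
]
gold_sample = [
    {"primary_diagnosis": "heart failure", "comorbidities": "diabetes",
     "discharge_disposition": "home"},
    {"primary_diagnosis": "pneumonia", "comorbidities": "COPD",
     "discharge_disposition": "skilled nursing"},
]

completeness, accuracy = assess(ehr_sample, gold_sample)
print(f"completeness={completeness:.2f} accuracy={accuracy:.2f}")
```

Aggregating results of this kind across facilities would give VA Central Office the system-wide view of documentation quality that the facility-level committee reviews, as currently structured, do not provide.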
As a result, VA Central Office lacks information that could help it systematically determine whether or to what extent VAMCs’ clinical information is incomplete and inaccurate, and if so, the extent to which these deficiencies affect VAMCs’ reported performance on the quality measures VA publicly reports. The results of such a systematic analysis could also help identify the deficiencies, if any, in the recording of patient clinical information and what steps, if any, VA Central Office may need to take to address them. VA Central Office’s lack of a systematic assessment of the completeness and accuracy of clinical information recorded in patient medical records and the extent to which this affects the accuracy of its quality measures is inconsistent with federal standards for internal controls related to information and monitoring. These standards call for agencies to use accurate information to achieve objectives, and to monitor agency activities and evaluate results. Without a systematic assessment of the completeness and accuracy of the clinical information recorded in VAMC patient medical records and their potential effects on the health care quality measures VA reports, VA Central Office does not have reasonable assurance that differences in VAMCs’ performance on VA’s quality measures reflect true differences in the quality of care and not differences in the accuracy and completeness of the underlying clinical information. This may hinder VA Central Office’s ability to appropriately assess VAMC performance and offer accurate publicly available information on its website to veterans so that they can make informed choices about their care. VA uses Hospital Compare and its website to provide veterans with information on how VAMCs perform on a range of health care quality measures. By providing information on the quality of care at VA facilities, the quality measures are intended to help veterans make informed decisions about their care. 
However, two key limitations in VA's efforts may hinder veterans' ability to use the information VA provides to make informed decisions about their health care. First, while VA's primary Access and Quality webpage provides generally accessible and understandable information on the quality of VA health care, the breadth of information it provides is too limited for veterans to make informed choices about their care. VA intends for this webpage to be veterans' primary source of information on the quality of care at VAMCs, but the 15 measures reported on the webpage represent only a small subset of the 110 measures and only a few of the measure categories VA makes available elsewhere on its website. Until VA can provide information on a broader range of health care measures and services, highlight key differences in the quality of clinical care at VAMCs, and present this information in a way that is easily accessible and understandable, VA cannot ensure that its website is functioning as intended in helping veterans make informed choices about their care. Second, VA does not have reasonable assurance that the health care quality measures it reports on Hospital Compare and its own website accurately assess the relative performance of VAMCs. The quality measures are calculated using clinical information recorded in patient medical records, but because VA Central Office has not conducted a systematic assessment, it does not know the extent to which this information has been accurately and completely recorded across VAMCs. As a result, VA lacks reasonable assurance that the quality measures reported on Hospital Compare and its own website provide accurate information to veterans so they can make informed choices about their care. We are making two recommendations to the Department of Veterans Affairs.
We recommend that the Under Secretary for Health take additional steps to ensure that VA's website reports health care quality measures that cover a broad range of health care services, highlights key differences in the clinical quality of care, and presents this information in an easily accessible and understandable way (Recommendation 1). We recommend that the Under Secretary for Health direct VA Central Office to conduct a systematic assessment of the completeness and accuracy of the patient clinical information across VAMCs that is used to calculate the health care quality measures VA reports and to address any deficiencies that affect the accuracy of these measures (Recommendation 2). We provided a draft of this report to VA and HHS for review and comment. HHS had no comments on this report. VA provided written comments, which we have reprinted in appendix IV, and technical comments, which we have incorporated as appropriate. In its comments, VA concurred with our first recommendation to take additional steps to ensure that VA's website reports health care quality measures that cover a broad range of health care services, highlights key differences in the clinical quality of care, and presents this information in an easily accessible and understandable way. VA provided additional information about its plans to expand the quality measures reported on its Access and Quality webpage, including information on which additional measures it was planning to add to the webpage and the steps it planned to take to continue to enhance the presentation of this information for veterans. VA indicated that its target completion date for these activities was December 2017.
In addition, VA concurred in principle with our second recommendation for VA Central Office to conduct a systematic assessment of the completeness and accuracy of patient clinical information across VAMCs that is used to calculate the health care quality measures VA reports and address any deficiencies that affect the accuracy of these measures. In its comments, VA acknowledged the importance of improving the reliability of data used to calculate its health care quality measures and described its plans to establish a workgroup under the Deputy Under Secretary for Health for Organizational Excellence to determine the best approach for conducting a systematic assessment of the completeness and accuracy of the patient clinical information across VAMCs. In its response, VA described the workgroup’s focus on determining whether current validation processes ensure completeness and accuracy and stated that this determination would be completed by December 2017. In conjunction with reviewing these validation processes, VA should also assess the completeness and accuracy of clinical information recorded in patient records that is used to calculate the quality measures. This will allow VA to determine the extent to which the quality measures it reports accurately reflect VAMCs’ performance in delivering care to veterans. We are sending copies of this report to the appropriate congressional committees, the Secretary of Veterans Affairs, the Secretary of Health and Human Services, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or williamsonr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. 
This appendix presents the specific health care quality measures that the Department of Veterans Affairs (VA) publicly reports, plans to report, or does not report on the Department of Health and Human Services' (HHS) Hospital Compare website. HHS's Hospital Compare website provides information on the quality of care at non-VA hospitals and VA Medical Centers (VAMCs). This information is captured across various quality measures covering topics such as patient experience. Additionally, this appendix presents the specific health care quality measures that VA reports on its own website about the quality of care at VAMCs. Some of the specific measures reported on the Hospital Compare website are the same as those reported on VA's website, and some are entirely different. In the table below, measures listed on the same row (e.g., patient experience measures) are measures that we determined were the same, based on information we obtained from VA and HHS. Measures that are grouped together but listed on different rows may address similar quality issues (e.g., hospital readmissions) but are calculated differently. Additionally, a dash in a table cell indicates that the measure listed on that row is not reported on the webpage in question. See table 7. The VA website locations refer to subpages of VA's two webpages with quality measures. They are coded as follows:

Access and Quality webpage: (1) Wait Times; (2) Experience with Access; and (3) Non-VA Hospital Comparison.

Quality of Care webpage: (1) VA Quality Scores (Strategic Analytics for Improvement and Learning (SAIL)); (2) MCP, Medical Center Performance; (3) WNTB, Why Not the Best; and (4) PT EXP, Patient Experience on the Quality of Care.

The 13 criteria we used to assess the presentation and relevance of information on quality of care reported on VA's primary Access and Quality and older Quality of Care webpages were identified as part of GAO's prior work on health care transparency.
These criteria identify the key characteristics that make websites effective in communicating information on health care quality to consumers. We used a four-level rating scale—yes, no, limited, or very limited—to determine whether each criterion had been met. A "limited" rating indicates that the webpage has discrete areas where it has implemented the characteristic to some degree, but those areas are not representative of the webpage as a whole. A rating of "very limited" indicates that the webpage has largely not implemented the characteristic (with a few exceptions). Two analysts applied the criteria separately and reconciled any differences. Two external experts we consulted also generally agreed with our assessments of the presentation and relevance of VA's publicly reported quality measures on its website. Six of the 13 characteristics of effective consumer websites focus on the extent to which a website presents its information in a way that enables the consumer to grasp and interpret it. Specifically, the research we reviewed shows that more effective websites:

1. Use plain language with clear graphics. Effective consumer websites use labels and descriptions that make sense to consumers who typically are unfamiliar with clinical terminology and who often have difficulty interpreting numerical information. Graphics, including symbols, can help to readily convey information on relative provider performance, especially when they are designed to display a summary assessment of that performance as part of the symbol itself, for example one that incorporates the words "superior" or "poor."

2. Explain the purpose and value of quality performance ratings to consumers. Effective consumer websites address prevalent misleading preconceptions by providing consumers coherent explanations of how different quality measures relate to the aspects of quality that consumers find relevant.
These explanations work best when they link individual measures to overarching categories indicating what is being achieved, such as effectiveness of care, safety, or patient-focused care.

3. Summarize related information and organize data to highlight patterns and facilitate consumer interpretation. Two techniques that consumer websites can use to help consumers make sense of large amounts of information are (a) combining information from multiple related measures into summary or composite scores, and (b) structuring presentation of the data in ways that make patterns evident. For example, listing providers in rank order on selected cost and quality dimensions greatly simplifies identification of high and low performers.

4. Enable consumers to customize the information selected for presentation to focus on what is most relevant to them. Consumers differ in the priority they assign to different aspects of quality. Websites that enable consumers to customize which quality information is presented help consumers filter out information of lesser consequence to them and home in on the information that they find most compelling. For example, one consumer may choose to focus on providers' capacity to communicate well with patients, while another may focus on providers' rates of complications and infections.

5. Enable consumers to compare the quality performance of multiple providers in one view. Websites are most effective when they present side-by-side assessments of providers' performance on a given aspect of cost or quality, so consumers can most easily compare providers.

6. Enable easy use and navigation of the tool. Unless consumers can quickly find information of interest to them, they are likely to quickly dismiss the potential utility of a consumer website and move on.
Extensive testing with consumers can help public and private entities providing websites to develop intuitive, user-friendly approaches to website navigation and to manipulating how the data are presented. Seven of the 13 characteristics of effective websites we identified address the extent to which a website provides substantive quality and cost information of relevance to consumers. Specifically, the research we reviewed shows that more effective websites:

1. Cover a broad range of services so that more consumers' particular needs are included. The more services that are covered by the website, the more likely it is that the website will have information relevant to the particular services of interest to any given consumer. It is especially important to include services that are predictable and non-urgent, because these services are most likely to afford consumers the opportunity to evaluate cost and quality information before receiving the service.

2. Cover a broad range of providers. Websites that provide information for all or most of the available providers in a given geographic area, regardless of network status or practice setting, give consumers more information about their full range of options. For example, for procedures that can be conducted in either a hospital outpatient department or an ambulatory surgical center, it helps to provide comparable information for both settings, so that consumers can choose from a larger number of providers that offer those procedures.

3. Describe key differences in clinical quality of care, particularly patient-reported outcomes. Assessments of the clinical quality of care that have been shown to have particular relevance to consumers are those that relate to long-term outcomes of the care experienced by other patients. Often this is best addressed by patient-reported outcomes, which tell consumers the eventual outcome of treatments, as reported by previous patients of a particular provider.
For example, patients receiving hip replacements can be asked, through such patient-reported outcomes, to rate their ability to climb stairs both before and after their procedures, which enables assessments of the procedures' effects on patients' mobility.

4. Describe key differences in patient experiences with providers. Another outcome that matters is patients' assessment of their interactions with providers. Effective websites include information on how past patients have evaluated providers on dimensions such as how well nurses communicate with patients or the responsiveness of clinicians to patients' needs.

5. Describe other information related to quality, where appropriate. There may be other quality indicators that could have major significance to consumers for certain types of services. For example, facility inspection results and staffing levels are of particular relevance to nursing home care.

6. Provide timely information. More recent data are intrinsically more relevant than data that are several years old. Because consumer websites necessarily rely on past data to assess likely cost and quality performance in the future, some lag in collecting, analyzing, and providing data is inevitable. Data that are no more than two years old are generally considered timely.

7. Describe key strengths and limitations of the data. Although the research we reviewed shows that few consumers are inclined to delve into the many methodological issues that concern appropriate techniques for collecting, checking, and analyzing cost and quality data, effective websites can provide both summary assessments of strengths and limitations for most consumers and links to more complete explanations for those wanting to pursue these issues in greater detail. Such information, along with identification of the organization responsible for the website, gives consumers a basis to judge the credibility of the cost and quality information provided.
In recent years, a number of researchers have examined methods for assessing the quality of clinical information recorded in electronic medical records, including data completeness and accuracy. A wide variety of approaches have been proposed and used, depending on the clinical focus of the research and the institutional context in which the clinical information is recorded. The following examples are drawn from this body of research.

1. Nicole Gray Weiskopf and Chunhua Weng, "Methods and dimensions of electronic health record data quality assessment: enabling reuse for clinical research," Journal of the American Medical Informatics Association, vol. 20, no. 1 (2013), 144-151. This article identifies a range of tests that have been applied by various researchers to determine both data completeness and data accuracy in electronic patient records. Most of these tests involved comparison with another data source, such as paper records, patient interviews, or other alternative data sources, that served as a "gold standard" for purposes of assessing the information recorded in the electronic medical record. Other tests involved comparison between different pieces of information recorded in the electronic medical record.

2. Philip J.B. Brown and Victoria Warmington, "Data quality probes—exploiting and improving the quality of electronic patient record data and patient care," International Journal of Medical Informatics, vol. 68 (2002), 91-98. This article describes the use of "data quality probes" to assess data quality by matching the results of database queries against established clinical knowledge for a given condition.

3. Edwin R. Faulconer and Simon de Lusignan, "An eight-step method for assessing diagnostic data quality in practice: chronic obstructive pulmonary disease as an exemplar," Informatics in Primary Care, vol. 12, no. 4 (2004), 243-253.
This article describes an eight-step methodology for assessing the completeness and accuracy of diagnoses for a specific condition.

4. Michael G. Kahn, Marsha A. Raebel, Jason M. Glanz, Karen Riedlinger, and John F. Steiner, "A Pragmatic Framework for Single-site and Multisite Data Quality Assessment in Electronic Health Record-based Clinical Research," Medical Care, vol. 50, no. 7 (2012), S21-S29. This article proposes an approach that prioritizes assessment of selected variables and data quality dimensions; iterative cycles of assessment within and between sites; targeting assessment toward data domains known to be vulnerable to quality problems; and detailed documentation of the rationale and outcomes of data quality assessments to inform data users. It presents a comprehensive set of data quality rules that look for anomalies in data values and distributions as well as inconsistencies across related variables.

5. William R. Hogan and Michael M. Wagner, "Accuracy of Data in Computer-based Patient Records," Journal of the American Medical Informatics Association, vol. 4, no. 5 (1997), 342-355. This article emphasizes the importance of assessing data completeness and accuracy in conjunction with each other. It recommends adopting a continuous improvement approach to data quality based on regular cycles of monitoring, analysis of errors, and interventions to address the factors found to cause errors.

In addition to the contact named above, Rashmi Agarwal, Assistant Director; Eric Peterson, Analyst in Charge; Dee Abasute; Krister Friday; Jacquelyn Hamilton; Wati Kadzai; and Vikki Porter made key contributions to this report.

VA Health Care: Improvements Needed in Data and Monitoring of Clinical Productivity and Efficiency. GAO-17-480 (Washington, D.C.: May 24, 2017).

Federal Health Care Center: VA and DOD Need to Develop Better Information to Monitor Operations and Improve Efficiency. GAO-17-197 (Washington, D.C.: Jan. 23, 2017).
Veterans Health Care: Improvements Needed in Operationalizing Strategic Goals and Objectives.GAO-17-50 (Washington, D.C.: Oct. 21, 2016). VA Primary Care: Improved Oversight Needed to Better Ensure Timely Access and Efficient Delivery of Care. GAO-16-83 (Washington, D.C.: Oct. 8, 2015). Health Care Transparency: Actions Needed to Improve Cost and Quality Information for Consumers. GAO-15-11 (Washington, D.C.: Oct. 20, 2014). VA Health Care Management and Oversight of Consult Process Need Improvement to Help Ensure Veterans Receive Timely Outpatient Specialty Care. GAO-14-808 (Washington, D.C.: Sep. 30, 2014).
To help veterans make informed choices about their care, the Veterans Access, Choice, and Accountability Act of 2014 (Choice Act) directs VA to publicly report applicable health care quality measures for its medical facilities on HHS's Hospital Compare website and on VA's own website. The Choice Act also contains provisions for GAO to review the health care quality measures VA publicly reports. In this report, GAO (1) describes the quality measures VA reports on Hospital Compare and its own website; (2) evaluates VA's reporting of quality measures on its website; and (3) examines the extent to which VA has assessed the accuracy of the quality measures it publicly reports. GAO reviewed the quality measures VA publicly reports, reviewed studies and interviewed VA officials about the accuracy and completeness of the clinical information used to calculate the measures, and assessed the presentation and relevance of VA's information on quality of care using criteria identified in previous GAO work to evaluate health care websites.

As of June 2017, the Department of Veterans Affairs (VA) publicly reported 35 health care quality measures on the Hospital Compare website, which is maintained by the Department of Health and Human Services. Veterans can use information on this website to compare the performance of VA medical centers (VAMC) and non-VA hospitals on a common set of quality measures. Those measures include patient reports of their experience of care, such as how well doctors and nurses communicated with them, and actual outcomes of care, such as readmissions to the hospital. On its own website, VA reported 110 quality measures, including some of the same measures reported on Hospital Compare. VA also reports quality measures not found on Hospital Compare, such as measures of how long veterans must wait to access care at VAMCs. VA reports health care quality measures on two separate webpages of its website.
In April 2017, VA launched the Access and Quality webpage, which, according to VA officials, is the primary source of information for veterans on the quality of care at VAMCs. GAO found that this information is generally presented in a way that is accessible and easy to understand. However, GAO also found that the primary webpage provides information from only a small subset—15—of the 110 measures VA reported on its website as of June 2017. Most of the other measures are available on a second, older webpage that resides elsewhere on VA's website and is generally not easily accessible or understandable. Until VA provides information on a broader range of health care measures and services and presents this information in a way that is easily accessible and understandable, VA cannot ensure that its website is functioning as intended in helping veterans make informed choices about their care. Within VA, VA Central Office is responsible for calculating the health care quality measures that VA publicly reports for each of its VAMCs and for ensuring that these measures provide accurate information on the VAMCs' quality of care. However, GAO found that VA Central Office has not systematically assessed the completeness and accuracy of the underlying clinical information that is used to calculate these measures. This clinical information is recorded in veterans' medical records and includes diagnoses given and treatments provided. Several studies have found potential problems with the accuracy and completeness of this clinical information at some VAMCs. For example, a 2015 independent assessment conducted by McKinsey & Company found that VA's clinical documentation procedures are below industry standards and that many VAMCs do not have programs in place to improve clinical documentation practices.
VA Central Office officials told GAO that they have not systematically assessed the completeness and accuracy of the clinical information across VAMCs, or the extent to which this affects the accuracy of its quality measures, because they have focused on other priorities. However, the lack of such an assessment is inconsistent with federal standards for internal controls related to information and monitoring. As a result, VA does not have assurance that the quality measures it publicly reports on Hospital Compare and its own website accurately reflect the performance of its VAMCs and provide veterans with the information they need to make informed choices about their care. GAO recommends that VA (1) report a broader range of health care quality measures in an accessible and understandable way on its website and (2) conduct a systematic assessment of the patient clinical information across VAMCs to ensure its accuracy and completeness. VA concurred with the first recommendation, concurred in principle with the second, and described steps to implement both recommendations.
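The studies cited earlier describe assessments built from rule-based checks: completeness of required fields, plausibility of recorded values, and consistency across related variables. A minimal sketch of that pattern follows. The record layout, field names, and the plausibility threshold are hypothetical, chosen only to illustrate the technique; they are not drawn from VA systems or from any of the cited studies.

```python
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

# Hypothetical, highly simplified patient record; real EHR data are far richer.
@dataclass
class Record:
    patient_id: str
    birth_date: Optional[date]
    diagnosis: Optional[str]
    diagnosis_date: Optional[date]
    systolic_bp: Optional[int]

def check_record(rec: Record) -> List[str]:
    """Apply completeness, plausibility, and consistency rules to one record."""
    findings = []
    # Completeness: required fields must be present.
    for name in ("birth_date", "diagnosis", "diagnosis_date"):
        if getattr(rec, name) is None:
            findings.append(f"{rec.patient_id}: missing {name}")
    # Plausibility: flag values outside an assumed clinically credible range.
    if rec.systolic_bp is not None and not 50 <= rec.systolic_bp <= 250:
        findings.append(f"{rec.patient_id}: implausible systolic_bp {rec.systolic_bp}")
    # Consistency: related fields must agree (a diagnosis cannot precede birth).
    if rec.birth_date and rec.diagnosis_date and rec.diagnosis_date < rec.birth_date:
        findings.append(f"{rec.patient_id}: diagnosis_date precedes birth_date")
    return findings

records = [
    Record("A1", date(1950, 3, 2), "COPD", date(2014, 6, 1), 128),
    Record("A2", None, "COPD", date(2013, 1, 9), 300),
]
report = [finding for rec in records for finding in check_record(rec)]
# report -> ["A2: missing birth_date", "A2: implausible systolic_bp 300"]
```

Rules like these catch only internal anomalies; a systematic assessment of the kind the cited studies describe would also compare records against an external gold standard, such as independent chart review.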
DHS’s mission is to lead the unified national effort to secure America by preventing and deterring terrorist attacks and protecting against and responding to threats and hazards to the nation, among other things. Created in 2002, DHS merged 22 agencies and offices that specialized in one or more aspects of homeland security. The intent behind the merger that created DHS was to improve coordination, communication, and information sharing among these multiple federal agencies. Each of these agencies is responsible for specific homeland security missions and for coordinating related efforts with its sibling components, as well as external entities. Figure 1 shows a simplified and partial DHS organizational structure. Within the department’s Management Directorate, headed by the Under Secretary for Management (USM), are the OCHCO and OCIO. The OCHCO is responsible for department-wide human capital policy and development, planning, and implementation of human capital initiatives. The OCIO is responsible for departmental IT policies, processes, and standards, and ensuring that IT acquisitions comply with DHS IT management processes, among other things. DHS acquires IT and other capabilities that are intended to improve its ability to execute its mission. DHS classifies these acquisition programs into three levels that determine the extent and scope of required project and program management, the level of reporting requirements, and the acquisition decision authority. Specifically, DHS policy defines acquisition programs as follows:

Level 1 major acquisition programs are expected to cost $1 billion or more over their life cycles.

Level 2 major acquisition programs are expected to cost at least $300 million over their life cycles.

Special interest programs, without regard to the established dollar thresholds, are designated as Level 1 or Level 2 programs.
For example, a program may be raised to a higher acquisition level if its importance to DHS’s strategic and performance plans is disproportionate to its size or it has high executive visibility.

Level 3 programs are those with life-cycle cost estimates less than $300 million and are considered non-major.

As outlined in DHS’s Acquisition Management Directive 102-01, DHS’s Chief Acquisition Officer—the USM—is responsible for the management and oversight of the department’s acquisition policies and procedures. The Deputy Secretary, USM, and Component Acquisition Executives are the acquisition decision authorities for DHS’s acquisition programs. For Level 1 programs, the acquisition decision authority may be either the Deputy Secretary or the USM; for Level 2 programs, the acquisition decision authority may be either the USM or a Component Acquisition Executive; and for Level 3 programs, a Component Acquisition Executive is the acquisition decision authority. As of March 2015, the department had 72 major acquisition programs and 42 non-major acquisition programs. In 2003, we designated the transformation of DHS as high risk because it had to transform 22 agencies—several with major management challenges—into one department. We emphasized that failure to effectively address DHS’s management and mission risks could have serious consequences for U.S. national and economic security. In 2007 and 2009, in reporting on DHS’s progress in addressing the high-risk area since its creation, we found that DHS had made more progress in implementing its range of missions than its management functions—such as in the areas of IT and human capital—and that continued work was needed to address an array of programmatic and management challenges. Since then, DHS had continued to make important progress in strengthening and integrating its management functions; however, significant work remained for DHS to improve in these areas.
For example, as of September 2015, DHS had taken steps to identify current and future human capital needs, including the size of the workforce, its deployment across the department and components, and the knowledge, skills, abilities, and diversity needed; however, DHS had yet to fully implement its workforce planning model that was intended to allow the department to plan for its current and future organizational and workforce needs.

In February 2015, we reported that while DHS established a human capital strategic plan in 2011 and made progress in implementing it, the department had considerable work ahead to improve employee morale, which has decreased each year since 2011. For example, the Office of Personnel Management’s 2014 Federal Employee Viewpoint Survey data showed that DHS’s scores continued to decrease in all four dimensions of the survey’s index for human capital accountability and assessment.

While the department had made progress in implementing its IT Strategic Human Capital Plan for fiscal years 2010 through 2012, in January 2015 DHS shifted its IT paradigm from acquiring assets to acquiring services, and acting as a service broker (e.g., an intermediary between the purchaser of a service and the seller of that service). According to DHS officials in May 2015, this paradigm change will require a major transition in the skill sets of DHS’s IT workforce, as well as the hiring, training, and managing of those new skill sets; as such, this effort will need to be closely managed in order to succeed.

Moreover, as of September 2014, DHS faced challenges in integrating employee training management across all the components, including centralizing training and consolidating training data into one system. According to DHS officials, the department planned to address these limitations through the development and deployment of HRIT’s PALMS program.
Since DHS was created, the department’s human resources environment has included fragmented systems, duplicative and paper-based processes, and little uniformity of data management practices. According to DHS, these limitations in its human resources environment are compromising the department’s ability to effectively and efficiently carry out its mission. For example:

While it is imperative that DHS respond quickly to emergencies, catastrophic events, and threats, and deploy appropriately trained, certified, and skilled personnel during these events, according to DHS, the department’s hiring process involves numerous systems and multiple hand-offs, which result in extra work and prolonged hiring. This inefficient process is one factor that could have contributed to the skill and workforce gaps that we have previously identified. For example, in April 2015, we reported that 21 of the 22 major acquisition programs we reviewed faced shortfalls in their program office workforce in fiscal year 2014.

According to DHS, the department does not have information on all of its employees, which reduces its ability to strategically manage its workforce and best deploy people in support of homeland security missions.

According to DHS, reporting and analyzing enterprise human capital data are currently time-consuming, labor-intensive, and challenging because the department’s data management largely consists of disconnected, standalone systems, with multiple data sources for the same content. As one example, we reported in 2014 that DHS could not provide complete information on how much it had spent on administratively uncontrollable overtime to its personnel from fiscal years 2008 through 2014. Specifically, certain components could not provide information such as duty location or payments for certain years.
To address these issues, in 2003, DHS initiated the HRIT investment, which is intended to consolidate, integrate, and modernize the department’s and its components’ human resources IT infrastructure. These components include U.S. Customs and Border Protection (CBP), the Federal Emergency Management Agency (FEMA), the Federal Law Enforcement Training Center (FLETC), U.S. Immigration and Customs Enforcement (ICE), the Transportation Security Administration (TSA), U.S. Citizenship and Immigration Services (USCIS), the U.S. Coast Guard (USCG), and the U.S. Secret Service. HRIT is managed by DHS’s Human Capital Business Systems unit, which is within OCHCO and has overall responsibility for HRIT. Additionally, OCIO plays a key supporting role in the implementation of HRIT by reviewing headquarters’ and components’ human resources investments, identifying redundancies and efficiencies, and delivering and maintaining enterprise IT systems. From 2003 to 2010, DHS made limited progress on the HRIT investment, as reported by DHS’s Inspector General. This was due to, among other things, limited coordination with and commitment from DHS’s components. To address this problem, in 2010 the DHS Deputy Secretary issued a memorandum emphasizing that DHS’s wide variety of human resources processes and IT systems inhibited the ability to unify DHS and negatively impacted operating costs. The memorandum stated that, without an enterprise operating model, support for DHS’s core mission was at risk and valuable workforce management information remained difficult to acquire across the department. Accordingly, the Deputy Secretary stated that DHS could no longer sustain a component-centric approach when acquiring or enhancing human resources systems, and prohibited component spending on enhancements to existing human resources systems or acquisitions of new solutions, unless those expenditures were approved by OCHCO or OCIO. 
The memorandum also directed these offices to develop a department-wide human resources architecture. In 2011, in response to the Deputy Secretary’s direction, DHS completed an effort called the Human Capital Segment Architecture, which, according to DHS, defined the department’s current (or as-is) state of human capital management processes, technology, data, and relevant personnel. Further, from this current state, the department developed a comprehensive future state (or target state) and a document referred to as the Human Capital Segment Architecture blueprint that redefined the HRIT investment’s scope and implementation time frames. As part of this effort, DHS conducted a system inventory and determined that it had 422 human resources systems and applications, many of which were single-use solutions developed to respond to a small need or links to enable disparate systems to work together. DHS reported that these numerous, antiquated, and fragmented systems inhibited its ability to perform basic workforce management functions necessary to support mission critical programs. To address this issue, the blueprint articulated that HRIT would comprise 15 strategic improvement opportunity areas (e.g., enabling seamless, efficient, and transparent end-to-end hiring) and outlined 77 associated projects (e.g., deploying a department-wide hiring system, establishing an integrated data repository and reporting mechanism, and developing a centralized learning center for all personnel action processing information) to implement these 15 opportunities. Each opportunity area includes from 1 to 10 associated projects. Table 1 summarizes the scope of the 15 strategic improvement opportunities—listed in the order of DHS’s assigned priority—and identifies their original planned completion dates, as of August 2011 when the blueprint was issued.
HRIT’s only ongoing program is called PALMS and is intended to fully address the Performance Management strategic improvement opportunity area and its three associated projects. PALMS is attempting to implement a commercial off-the-shelf software product that is to be provided as a service in order to enable, among other things, comprehensive enterprise-wide tracking, reporting, and analysis of employee learning and performance for DHS headquarters and its eight components. Specifically, PALMS is expected to deliver the following capabilities:

Learning management. The learning management capabilities are intended to manage the life cycle of learning activities for all DHS employees and contractors. PALMS is intended to, among other things, act as a gateway for accessing training at DHS and record training information when a user has completed a course. Additionally, it is expected to replace nine disparate learning management systems with one unified system.

Performance management. The performance management capabilities are intended to move DHS’s existing primarily paper-based performance management processes into an electronic environment and capture performance-related information throughout the performance cycle (e.g., recording performance expectations discussed at the beginning of the rating period and performance ratings at the end of it).

Each component is responsible for its own PALMS implementation project, and is expected to issue a task order using a blanket purchase agreement that was established in May 2013 with an estimated value of $95 million. Before implementing PALMS, each component is completing a fit-gap assessment to, among other things, identify any requirements and critical processes that cannot be met by the preconfigured, commercial off-the-shelf system. If such component-specific requirements are identified, the component must then decide whether to have the vendor customize the system.
The headquarters PALMS program management office (PMO) is responsible for overseeing the implementation projects across the department. Additionally, OCIO is the Component Acquisition Executive responsible for overseeing PALMS. In addition to implementing projects intended to address the strategic improvement opportunities in the blueprint, the HRIT investment also carried out the following two projects that were not included in the blueprint:

Balanced Workforce Assessment Tool: This project provided an enterprise-wide tool to automate the formerly paper-based balanced workforce strategy process to determine the appropriate mix of federal employees and contractor employees required to fulfill a specific work function in the government. DHS deployed this tool beginning in September 2013.

Workers Compensation – Medical Case Management Services: This project provided an enterprise-wide contract to enable nurses to execute case management processes and facilitate the case management activities to be performed by DHS human resources staff. As part of this, the project provided access to a web application where DHS workers’ compensation coordinators could work on cases with nurses. As of March 2015, the tool had been implemented at six components.

Entities such as the Project Management Institute, the Software Engineering Institute at Carnegie Mellon University, and GAO have developed and identified best practices to help guide organizations to effectively plan and manage their acquisitions of major IT systems. Our prior reviews have shown that proper implementation of such practices can significantly increase the likelihood of delivering promised system capabilities on time and within budget. These practices include, but are not limited to:

Project planning: Establishes project objectives and outlines the course of action required to attain those objectives.
It also provides a means to track, review, and report progress and performance of the project by defining project activities and developing cost and schedule estimates, among other things.

Project monitoring and control: Provides an understanding of the project’s progress, so that appropriate corrective actions can be taken if performance deviates from plans. Effective practices in this area include, among other things, determining progress against the program plan and conducting program management reviews.

Risk management: Establishes a process for anticipating problems and taking appropriate steps to mitigate risks and minimize their impact on program commitments. It involves identifying and documenting risks, categorizing them based on their estimated impact, prioritizing them, developing risk mitigation strategies, and tracking progress in executing the strategies.

DHS has made very little progress in delivering planned HRIT capabilities, such as end-to-end hiring and payroll action processing. While the vast majority of HRIT capabilities (called strategic improvement opportunities) were to be delivered by June 2015, only 1 has been fully implemented, and the completion dates for the other 14 are currently unknown. These delays are largely due to unplanned resource changes and the lack of involvement from the executive oversight committee. In addition, the department did not effectively manage the investment. For example, DHS did not update or maintain the HRIT schedule, have a life-cycle cost estimate, or track all associated costs. Moreover, the strategic planning document—referred to as the Human Capital Segment Architecture Blueprint—has not been updated in approximately 4.5 years and, as a result, the department does not know whether it is reflective of current priorities and goals.
As a result of DHS’s ineffective management and limited progress in implementing this investment, the department is unaware of when critical weaknesses in the department’s human capital environment will be addressed, which is, among other things, impacting DHS’s ability to reduce duplication and carry out its mission. DHS has made very limited progress in addressing the 15 strategic improvement opportunities and the 77 associated projects included in HRIT. According to the Human Capital Segment Architecture Blueprint, DHS planned to implement 14 of the 15 strategic improvement opportunities and 68 of the 77 associated projects by June 2015; and the remaining improvement opportunity and 9 associated projects by December 2016. However, as of November 2015, DHS had fully implemented only 1 of the strategic improvement opportunities, which included 2 associated projects. This improvement opportunity established an enterprise-wide governance process for evaluating HRIT projects and proposals prior to funding them. This process is referred to as the investment intake process and is intended to help encourage the use of enterprise-level investments, rather than component-specific investments, by preventing components from investing in duplicative systems when an existing DHS capability can meet a particular business need. Table 2 summarizes the implementation status and planned completion dates of the strategic improvement opportunities—listed in the order of DHS’s assigned priority—as of November 2015. DHS has partially implemented five of the other strategic improvement opportunities, but it is unknown when they will be fully addressed. For example, DHS’s PALMS program is intended to fully address the blueprint’s strategic improvement opportunity for Performance Management; however, while progress to implement PALMS has been made, many actions remain before it can be fully implemented and it is unknown when those actions will be taken (discussed in more detail later). 
Further, HRIT officials stated that DHS has not yet started work on the remaining nine improvement opportunities, and the officials did not know when they would be addressed. Additionally, DHS developed an HRIT strategic plan for fiscal years 2012 through 2016 that outlined the investment’s key goals and objectives, including reducing duplication and improving efficiencies in the department’s human resources processes and systems. The strategic plan identified, among other things, two performance metrics and associated targets for delivering human resources IT services across DHS. These performance metrics were focused on reductions in the number of component-specific human resources IT services provided and increases in the number of department-wide HRIT services provided by the end of fiscal year 2016. However, DHS has also made limited progress in achieving these two performance targets. Figure 2 provides a summary of HRIT’s progress towards achieving its service delivery performance targets. DHS’s goal is to reduce its component-specific HRIT services by 46 percentage points, from 81 percent to 35 percent; however, as of November 2015 it had reduced these services by only 8 percentage points, according to OCHCO officials. Similarly, DHS is aiming to increase its DHS-wide HRIT services by 38 percentage points, from 2 percent to 40 percent, but as of November 2015, OCHCO officials stated that the department had increased these services by only 8 percentage points. Key causes for DHS’s lack of progress in implementing HRIT and its associated strategic improvement opportunities include unplanned resource changes and the lack of involvement of the HRIT executive steering committee. These causes are discussed in detail below.

Unplanned resource changes. DHS elected to dedicate the vast majority of HRIT’s resources to implementing PALMS and addressing its problems, rather than initiating additional HRIT strategic improvement opportunities.
Specifically, PALMS—which began in July 2012—experienced programmatic and technical challenges that led to years-long schedule delays. For example, while the PALMS system for headquarters was originally planned to be delivered by a vendor in December 2013, as of November 2015 the expected delivery date had slipped to the end of February 2016, a delay of more than 2 years. HRIT officials explained that the decision to focus primarily on PALMS was due, in part, to the investment’s declining funding stream. However, in doing so, attention was concentrated on the immediate issues affecting PALMS and diverted from the longer-term HRIT mission.

Lack of involvement of the HRIT executive steering committee. The HRIT executive steering committee—which is chaired by the department’s Under Secretary for Management and co-chaired by the Chief Information Officer and Chief Human Capital Officer—is intended to be the core oversight and advisory body for all DHS-wide matters related to human capital IT investments, expenditures, projects, and initiatives. In addition, according to the committee’s charter, the committee is to approve and provide guidance on the department’s mission, vision, and strategies for the HRIT program. However, the executive steering committee met only once—in July 2014—from September 2013 through June 2015, and was minimally involved with HRIT for that nearly 2-year period. It is important to note that DHS replaced its Chief Information Officer (the executive steering committee’s co-chair) in December 2013, during this gap in oversight. Also during this period, HRIT’s only ongoing program—PALMS—was experiencing significant problems, including schedule slippages and frequent turnover in its program manager position (i.e., PALMS had five different program managers during the time that the HRIT executive steering committee was minimally involved). As a result of the executive steering committee not meeting, key governance activities were not completed on HRIT.
For example, the committee did not approve HRIT’s notional operational plan for fiscal years 2014 through 2019. OCHCO and OCIO officials attributed the lack of HRIT executive steering committee meetings and committee involvement in HRIT to the investment’s focus being only on the PALMS program to address its issues, as discussed earlier. However, by not regularly meeting and providing oversight during a time when a new co-chair for the executive steering committee assumed responsibility and PALMS was experiencing such problems, the committee limited the guidance it could provide to the troubled program. More recently, the HRIT executive steering committee met in June and October 2015, and OCIO and OCHCO officials stated that the committee planned to meet quarterly going forward. However, while the committee’s charter specified that it meet on at least a monthly basis for the first year, the charter does not specify the frequency of meetings after that year. Furthermore, the committee’s charter has not been updated to reflect the planned quarterly meeting schedule. As a result of the limited progress in implementing HRIT, DHS is unaware of when critical weaknesses in the department’s human capital environment will be addressed, which is, among other things, impacting DHS’s ability to carry out its mission. For example, the end-to-end hiring strategic improvement opportunity (which has an unknown implementation date) was intended to streamline numerous systems and multiple hand-offs in order to more efficiently and effectively hire appropriately skilled personnel, thus enabling a quicker response to emergencies, catastrophic events, and threats. As another example, the data management and sharing strategic improvement opportunity (which also has an unknown implementation date) was intended to enable the department to have visibility of all its employees, to improve its ability to strategically manage its workforce, and best deploy people in support of DHS missions.
Therefore, until HRIT’s executive steering committee effectively carries out its oversight responsibility, DHS will be limited in its ability to improve HRIT investment results and accountability. According to the GAO Schedule Assessment Guide, a key activity in effectively managing a program and ensuring progress is establishing and maintaining a schedule estimate. Specifically, a well maintained schedule enables programs to gauge progress, identify and resolve potential problems, and forecast dates for program activities and completion of the program. In August 2011, DHS established initiation and completion dates for each of the 15 strategic improvement opportunities within the Human Capital Segment Architecture Blueprint. Additionally, HRIT developed a slightly more detailed schedule for fiscal years 2014 through 2021 that updated planned completion dates for aspects of some strategic improvement opportunities, but not all. However, DHS did not update and maintain either schedule after they were developed. Specifically, neither schedule was updated to reflect that DHS did not implement 13 of the 15 improvement opportunities by their planned completion dates—several of which should have been implemented over 3 years ago. HRIT officials attributed the lack of schedule updates to the investment’s focus shifting to the PALMS program when it started experiencing significant schedule delays. Without developing and maintaining a current schedule showing when DHS plans to implement the strategic improvement opportunities, DHS and Congress will be limited in their ability to oversee and ensure DHS’s progress in implementing HRIT. OMB requires that agencies prepare total estimated life-cycle costs for information technology investments. Program management best practices also stress that key activities in planning and managing a program include establishing a life-cycle cost estimate and tracking costs expended. 
A life-cycle cost estimate supports budgetary decisions and key decision points, and should include all costs for planning, procurement, and operations and maintenance of a program. OCHCO officials stated that a draft life-cycle cost estimate for HRIT was developed, but that it was not completed or finalized because detailed project plans for the associated projects had not been developed or approved. According to the HRIT blueprint, OCHCO roughly estimated that implementing all of the projects could cost up to $120 million. However, the blueprint specifies that this figure did not represent the life-cycle cost estimate; rather, it was intended to be a preliminary estimate to initiate projects. Without a life-cycle cost estimate, DHS has limited information about how much it will cost to implement HRIT, which hinders the department’s ability to, among other things, make budgetary decisions and informed milestone review decisions. According to CMMI-ACQ and the PMBOK® Guide, programs should track program costs in order to effectively manage the program and make resource adjustments accordingly. In particular, tracking and monitoring costs enables a program to recognize variances from the plan in order to take corrective action and minimize risk. However, DHS has not tracked the total actual costs incurred in implementing HRIT across the enterprise to date. Specifically, while the investment received line item appropriations for fiscal years 2005 through 2015, which totaled at least $180 million, DHS was unable to provide all cost information on HRIT activities since it began in 2003, including all government-related activities and component costs that were financed through the working capital fund, which, according to DHS officials from multiple offices, were provided separately from the at least $180 million appropriated specifically to HRIT. 
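Cost tracking of the kind CMMI-ACQ and the PMBOK® Guide call for would roll HRIT costs up across funding sources and components into one enterprise total. A minimal sketch in Python follows; every record below is a hypothetical placeholder, not actual DHS cost data.

```python
# Sketch of centralized cost tracking: rolling program costs up across
# funding sources and components. All figures are hypothetical placeholders.
from collections import defaultdict

# (component, funding source, fiscal year, cost in $M) -- hypothetical records
records = [
    ("FEMA",  "working_capital_fund", 2014, 1.2),
    ("USCG",  "working_capital_fund", 2014, 0.8),
    ("USCIS", "appropriation",        2015, 2.5),
    ("HQ",    "appropriation",        2015, 4.0),
]

by_component = defaultdict(float)
by_source = defaultdict(float)
for component, source, year, cost in records:
    by_component[component] += cost
    by_source[source] += cost

total = sum(by_component.values())
print(f"total tracked: ${total:.1f}M")
for source, cost in sorted(by_source.items()):
    print(f"  {source}: ${cost:.1f}M")
```

With a structure like this, a "since inception" figure is a single aggregation rather than a multi-year archival effort, which is the point of tracking costs centrally as they are incurred.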
OCHCO officials attributed the lack of cost tracking to, among other things, the investment’s early reliance on contractors to track costs, and said that the costs were neither well maintained nor centrally tracked and included incomplete component-provided cost information. The components were also unable to provide us with complete information. For example, FEMA officials stated that it would require a significant administrative effort to identify how much the agency has spent on HRIT since the investment’s inception in 2003 because of the way its financial system obligates and expends funds for Working Capital Fund activities. USCG officials also said that compiling the component’s expenditure information for fiscal years 2003-2009 would require a substantial administrative effort, including reviewing a significant number of paper files. USCIS was unable to identify its HRIT-related expenditures for fiscal years 2003-2010. Without tracking all costs associated with HRIT, including components’ costs, stakeholders are limited in making informed resource decisions, and DHS cannot provide complete and accurate information to assist congressional oversight. According to the HRIT executive steering committee’s charter, the Under Secretary for Management (as the chair of the committee) is to ensure that the department’s human resources IT business needs are met, as outlined in the blueprint. Additionally, according to the GPRA (Government Performance and Results Act) Modernization Act of 2010, agency strategic plans should be updated at least every 4 years. While this is a legal requirement for agency strategic plans (the Human Capital Segment Architecture blueprint does not fall under the category of an “agency strategic plan”), it is considered a best practice for other strategic planning documents, such as the blueprint. However, the department issued the blueprint in August 2011 (approximately 4.5 years ago) and has not updated it since. 
As a result, the department does not know whether the remaining 14 strategic improvement opportunities and associated projects that it has not fully implemented are still valid and reflective of DHS’s current priorities, and are appropriately prioritized based on current mission and business needs. Additionally, DHS does not know whether new or emerging opportunities or business needs need to be addressed. Officials stated that the department is still committed to implementing the blueprint, but agreed that it should be re-evaluated. To this end, following a meeting we had with DHS’s Under Secretary for Management in October 2015, in which we expressed concern about HRIT’s lack of progress, OCHCO and OCIO officials stated that the Deputy Under Secretary for Management asked HRIT in late October 2015 to re-evaluate the blueprint’s strategic improvement opportunities and to determine the way forward for those improvement opportunities and the HRIT investment. However, officials did not know when this re-evaluation would occur or when a determination of how to move forward with HRIT would be made. Further, according to OCIO officials, DHS has not updated its complete systems inventory since it was originally developed as part of the blueprint effort, in response to a 2010 Office of Inspector General report that stated that DHS had not identified all human resource systems at the components. This report also emphasized that without an accurate inventory of human resource systems, DHS cannot determine whether components are using redundant systems. Moreover, OCIO officials were unable to identify whether and how the department’s inventory of human resources systems had changed. 
Until DHS establishes time frames for re-evaluating the blueprint to reflect DHS’s current HRIT priorities and updates its human resources system inventory, the department will be limited in addressing the inefficient human resources environment that has plagued the department since it was first created. DHS took several steps to justify its investment in the PALMS program for the program’s two main purposes (the learning management capabilities and the performance management capabilities) through multiple mechanisms. Specifically, although existing DHS guidance did not require an analysis of alternatives for PALMS because it is a Level 3 acquisition program, the department initiated such an analysis in 2010 to identify recommended approaches for pursuing a commercial off-the-shelf learning management system to replace the components’ nine existing learning systems. According to the analysis of alternatives, the nine systems at the department were disconnected from each other and did not exchange information. The components had independently purchased these learning management systems and, in some cases, had done so before DHS was established in 2002. However, DHS determined that a unified strategy for learning management systems at the department was needed, rather than disparate, component-centric efforts. In particular, DHS determined that such a strategy was necessary to provide, among other things, improved reporting, greater automation, less duplication and redundancy of training courses, better governance, and streamlined IT infrastructure. 
The analysis of alternatives, which was performed by the Homeland Security Studies and Analysis Institute, included, among other things, an assessment of six alternative approaches, including status quo, implementation of two systems from separate vendors (allowing components to choose which system to use), and implementation of a single system (either centrally managed by DHS or individually managed by each component). As part of the analysis, the Institute assessed the alternative approaches based on five evaluative categories, including cost, benefits, and risks. Based on the analysis of alternatives process, the Institute recommended that DHS adopt a single enterprise-wide, centrally managed learning management system as the most cost-effective approach to providing such a capability to the department. Regarding the second purpose of PALMS—enabling performance management capabilities—the August 2011 Human Capital Segment Architecture Blueprint called on DHS to conduct an analysis of alternatives to identify the preferred approach for such a solution. Officials stated that DHS leadership ultimately determined that such an analysis for a performance management solution was unnecessary because the requirement for DHS to automate performance management functions across the department was the same as it was during DHS’s prior attempt to pursue an automated performance management system for instituting pay-for-performance—an effort that was ultimately abandoned. Therefore, instead of conducting an analysis of alternatives on performance management system approaches for DHS enterprise-wide adoption, in January 2012, departmental leadership made an executive decision on the approach based on the findings of a December 2011 request for information from industry. 
In particular, the accumulated industry information highlighted that vendors for an enterprise-wide learning management solution could in most cases also provide a system that integrated performance management capabilities. This industry information validated DHS officials’ understanding that a combined solution for learning and performance management at the department was consistent with prevailing industry offerings. According to OCHCO officials, the department’s request for information from industry to help justify its preferred approach allowed for competition within industry for supplying a solution to the department. As part of the department’s considerations, officials had determined that this competition could better help to reduce overall implementation costs for a consolidated learning and performance management system, versus adopting, without competition, one of the components’ existing learning or performance management systems for DHS enterprise-wide deployment. Additionally, OCHCO officials stated that they contacted other federal departments to determine whether existing shared services could be used by DHS to establish an integrated system for learning and performance management, but DHS determined that other departments’ contracts with service providers could not be modified to allow DHS to use the same services. Based on the collective results of the learning management system analysis of alternatives and the request for information from industry on performance management systems, the HRIT executive steering committee exercised its executive decision-making authority and decided that an integrated, enterprise-wide learning and performance management system should be pursued for adoption at the department. DHS’s integrated solution is now being implemented by the PALMS program. 
By providing the executive steering committee with enough information for determining this preferred approach for the department, DHS justified its investment in the PALMS program. As previously mentioned, PALMS is intended to provide an enterprise-wide system that offers performance management capabilities, as well as learning management capabilities, to headquarters and each of its components. As such, DHS headquarters PMO and the components estimate that, if fully implemented across DHS, PALMS’s learning management capabilities would be used by approximately 309,360 users, and its performance management capabilities would be used by at least 217,758 users. Table 3 identifies the total estimated number of planned users for both PALMS’s learning management capabilities and performance management capabilities if PALMS is fully implemented department-wide. However, there is uncertainty about whether the PALMS system will be used enterprise-wide to accomplish these goals. Specifically, as of November 2015, of the eight components and headquarters, five are planning to implement both PALMS’s learning and performance management capabilities (three of which have already implemented the learning management capabilities—discussed later), two are planning to implement only the learning management capabilities, and two components are not currently planning to implement either of these PALMS capabilities, as illustrated in figure 3. Officials from FEMA, TSA, ICE, and the USCG cited various reasons why they were not currently planning to fully implement PALMS:
• FEMA and ICE officials stated that they were not currently planning to implement the performance management capabilities because the program had experienced critical deficiencies in meeting the performance management-related requirements. FEMA officials stated that they do not plan to make a decision on whether they will implement these performance management capabilities until the vendor can demonstrate that the system meets FEMA’s needs; as such, FEMA officials were unable to specify a date for when they plan to make that decision. ICE officials also stated that they do not plan to implement the performance management capabilities of PALMS until the vendor can demonstrate that all requirements have been met. PALMS headquarters PMO officials expected all requirements to be met by the vendor by the end of February 2016.
• TSA officials stated that they were waiting on the results of their fit-gap assessment of PALMS before determining whether, from a cost and technical perspective, TSA could commit to implementing the learning and/or performance management capabilities of PALMS. TSA officials expected the fit-gap assessment to be completed by the end of March 2016.
• USCG officials stated that, based on the PALMS schedule delays experienced to date, they have little confidence that the PALMS vendor could meet the component’s unique business requirements prior to the 2018 expiration of the vendor’s blanket purchase agreement. Additionally, these officials stated that the system would not meet all of USCG’s learning management requirements for about 31,000 auxiliary volunteer members and certain other employee groups. Further, although the fit-gap assessment for implementing PALMS at USCG had not been fully completed, the component’s officials stated that the system would likely not fully meet the performance management requirements for all of USCG’s military components. Due to this uncertainty, the officials were unable to specify when they plan to decide whether they will implement one or both aspects of PALMS. 
As a result, it is unlikely that the department will reach its expected user estimates as presented in table 3 or that PALMS will meet its goal of being an enterprise-wide system. Specifically, as of November 2015, the components estimate 179,360 users will use the learning management capabilities of PALMS (not the 309,360 expected, if fully implemented). Figure 4 shows the percentage of expected users from components currently planning to implement PALMS’s learning management capabilities in comparison to the total expected users if PALMS was fully implemented, as of November 2015. Additionally, as of November 2015, the components estimate 123,200 users will use the performance management capabilities of PALMS (not the 217,758 expected, if fully implemented). Figure 5 shows the percentage of expected users from components planning to implement PALMS’s performance management capabilities in comparison to the total expected user estimate if fully implemented as intended. Of the seven components and headquarters that are currently planning to implement the learning and/or performance management aspects of PALMS, three have completed their implementation efforts of the learning management capabilities and deployed these capabilities to users (deployed to CBP in July 2015, headquarters in October 2015, and FLETC in December 2015); two have initiated their implementation efforts on one or both aspects, but not completed them; and two have not yet initiated any implementation efforts, as of November 2015. As a result, PALMS’s current trajectory is putting the department at risk of not meeting its goals to perform efficient, accurate, and comprehensive tracking and reporting of training and performance management data across the enterprise and to consolidate its nine learning management systems down to one. 
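The coverage shortfall implied by these estimates can be computed directly; the figures below are the component estimates cited above, as of November 2015, and nothing else is assumed.

```python
# User-coverage shortfall for PALMS, using the November 2015 estimates
# cited in the text (planned = components currently planning to implement;
# full = expected users if fully implemented department-wide).
estimates = {
    "learning management":    {"planned": 179_360, "full": 309_360},
    "performance management": {"planned": 123_200, "full": 217_758},
}

for capability, n in estimates.items():
    share = n["planned"] / n["full"]
    shortfall = n["full"] - n["planned"]
    print(f"{capability}: {share:.0%} of expected users "
          f"({shortfall:,} fewer than if fully implemented)")
```

The computation shows roughly 58 percent coverage for learning management and 57 percent for performance management, which is the gap figures 4 and 5 illustrate.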
Accordingly, until FEMA decides whether it will implement the performance management capabilities of PALMS and USCG decides whether it will implement the learning and/or performance management capabilities of PALMS, the department is at risk of implementing a solution that does not fully address its problems. Moreover, until DHS determines an alternative approach if one or both aspects of PALMS are deemed not feasible for ICE, TSA, FEMA, or the USCG, the department is at risk of not meeting its goal to enable enterprise-wide tracking and reporting of employee learning and performance management. HRIT’s PALMS program varied in its implementation of IT acquisition best practices for project planning, project monitoring, and risk management. Specifically, the program management office had implemented selected IT acquisition best practices in each of these areas; however, the program had not developed complete life-cycle cost and schedule estimates. Additionally, the PALMS PMO did not monitor total costs spent on the program or consistently document the results from progress and milestone reviews. Further, the program management office had not fully implemented selected risk management practices. Without fully implementing effective acquisition management practices, DHS is limited in monitoring and overseeing the implementation of PALMS, ensuring that the department obtains a system that addresses its performance management and learning management weaknesses, reduces duplication, and delivers within cost and schedule commitments. According to GAO’s Cost Estimating and Assessment Guide, having a complete life-cycle cost estimate is a critical element in the budgeting process that helps decision makers to evaluate resource requirements at milestones and other important decision points. 
Additionally, a comprehensive cost estimate should include both government and contractor costs of the program over its full life cycle, from inception of the program through design, development, deployment, and operation and maintenance to retirement of the program. However, according to PALMS PMO officials, they did not develop a life-cycle cost estimate for PALMS. In 2012, DHS developed an independent government cost estimate to determine the contractor-related costs to implement the PALMS system across the department (estimated to be approximately $95 million); however, this estimate was not comprehensive because it did not include government-related costs. As a result, DHS was not able to determine the impact on cost when the PALMS program experienced problems (discussed in more detail later), since the baseline cost estimate was incomplete. PALMS PMO officials stated that PALMS did not develop a life-cycle cost estimate because the program is a Level 3 acquisition program and DHS does not require such an estimate for a Level 3 program. However, while DHS acquisition policy does not require a life-cycle cost estimate for a program of this size, we maintain that such an estimate should be prepared because of the program’s risk and troubled history. Without developing a comprehensive life-cycle cost estimate, DHS is limited in making future budget decisions related to PALMS. As described in GAO’s Schedule Assessment Guide, a program’s integrated master schedule is a comprehensive plan of all government and contractor work that must be performed to successfully complete the program. Additionally, such a schedule helps manage program schedule dependencies. Best practices for developing and maintaining this schedule include, among other things, capturing all activities needed to do the work and reviewing the schedule after each update to ensure the schedule is complete and accurate. 
While DHS had developed an integrated master schedule with the PALMS vendor, it did not appropriately maintain this schedule. Specifically, the program’s schedule was incomplete and inaccurate. While DHS’s original August 2012 schedule planned to fully deploy both the learning and performance management capabilities in one release at each component by March 2015, the program’s September 2015 schedule did not reflect the significant change in PALMS’s deployment strategy and time frames. Specifically, the program now plans to deploy the learning management capabilities first and the performance management capabilities separately and incrementally to headquarters and the components. However, the September 2015 schedule reflected the deployment-related milestones (per component) for only the learning management capabilities and did not include the deployment-related milestones for the performance management capabilities. In September 2015, PALMS officials stated that the deployments related to performance management were not reflected in the program’s schedule because the components had not yet determined when they would deploy these capabilities. Since then, two components have determined their planned dates for deploying these capabilities, but the dates for the remaining seven (including headquarters) are still unknown. As a result, the program does not know when PALMS will be fully implemented at all components with all capabilities. Table 4 provides a comparison of the program’s initial delivery schedule, as of August 2012, to the program’s latest schedule, as of November 2015. Moreover, the schedule did not include all government-specific activities, including tasks related to employee union activities (such as notifying employee unions and bargaining with them, where necessary) related to the proposed implementation of the performance management capabilities. 
For example, time frames for when DHS planned to notify employee unions at DHS headquarters, FLETC, and USCIS were not identified in the schedule. In September 2015, PALMS program officials stated that certain government-specific tasks were not included in the schedule because the integrated master schedule was too big and difficult to manage, so the program decided to track certain government activities, such as union negotiation activities, separately. However, without an integrated master schedule that includes all government and contractor work that must be performed, the program is at risk of failing to ensure schedule dependencies are appropriately managed and that all essential activities are completed. Additionally, the August 2015 schedule had incorrect completion dates listed for key activities. For example, DHS reported in the schedule that the actual finish date for deploying the learning management capabilities of the PALMS system at CBP was February 17, 2015; however, according to CBP officials, they did not deploy these capabilities until July 2015. In September 2015, program officials acknowledged our concerns and attributed the inaccurate dates to a lack of oversight; subsequently, the program took actions to update the dates. Without developing and maintaining a single comprehensive schedule that fully integrates all government and contractor activities, and includes all planned deployment milestones related to performance management, DHS is limited in monitoring and overseeing the implementation of PALMS, and managing the dependencies between program tasks and milestones to ensure that it delivers capabilities when expected. According to CMMI-ACQ and the PMBOK® Guide, a key activity for tracking a program’s performance is monitoring the project’s costs by comparing actual costs to the cost estimate. 
The PALMS PMO—which is responsible for overseeing the PALMS implementation projects across DHS, including all of its components—monitored task order expenditures on a monthly basis. As of December 2015, DHS officials reported that they had awarded approximately $18 million in task orders to the vendor. However, the program management office officials stated that they were not monitoring the government-related costs associated with each of the PALMS implementations. The officials stated that they were not tracking government-related implementation costs at headquarters because many of the headquarters program officials concurrently work on other acquisition projects and these officials are not required to track the amount of time spent working specifically on PALMS. The officials also said that they were not monitoring the government-related costs for each of the component PALMS implementation projects because it would be difficult to obtain and verify the cost data provided by the components. We acknowledge the department’s difficulties associated with obtaining and verifying component cost data; however, monitoring the program’s costs is essential to keeping costs on track and alerting management of potential cost overruns. Additionally, because DHS did not develop a comprehensive life-cycle cost estimate for PALMS that included government-related costs, the program management office was unable to determine cost increases to the program because it could not compare actual cost values against a baseline cost estimate. For example, program officials were unable to identify how much the program’s cost estimate had increased when the implementation at headquarters experienced schedule delays to address deficiencies identified during testing. 
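The comparison that CMMI-ACQ and the PMBOK® Guide call for, actual costs against a baseline estimate, is simple to automate once both figures are tracked. A minimal sketch follows; the 10 percent variance threshold and all cost lines are illustrative assumptions, not PALMS data.

```python
# Minimal cost-variance check: compare cumulative actuals against a baseline
# estimate and flag variances that warrant management attention. All dollar
# figures and the 10% threshold are hypothetical, for illustration only.
def cost_variance(baseline: float, actual: float, threshold: float = 0.10):
    """Return (variance, ratio, breach) for one cost line, in $M."""
    variance = actual - baseline
    ratio = variance / baseline
    return variance, ratio, abs(ratio) > threshold

lines = {  # baseline vs. actuals to date, $M (hypothetical)
    "vendor task orders":    (18.0, 19.5),
    "government labor":      (6.0, 8.1),
    "component integration": (4.0, 3.8),
}

for name, (baseline, actual) in lines.items():
    var, ratio, breach = cost_variance(baseline, actual)
    flag = "  <-- exceeds 10% threshold" if breach else ""
    print(f"{name}: {var:+.1f}M ({ratio:+.0%}){flag}")
```

Without a baseline that includes government-related costs, the second and third lines of this comparison simply cannot be computed, which is the gap the report describes.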
Without tracking and monitoring all costs associated with PALMS, the department will be unable to compare actual costs against planned estimates and thus will be limited in its ability to fully monitor the program, which is essential for alerting the program to possible cost overruns and prompting corrective actions. According to CMMI-ACQ and the PMBOK® Guide, key activities in tracking a program’s performance include conducting and documenting the results from progress and milestone reviews to determine whether there are significant issues or performance shortfalls that need to be addressed. Although the PALMS PMO conducted reviews to monitor the program’s performance, it did not consistently document the results of its progress and milestone reviews. For example:
• The PALMS PMO did not document the results of the status updates that the PMO provided to DHS executives during its bi-weekly integrated project team meetings, so it is unclear whether the program was appropriately monitoring the progress of all government-specific activities.
• According to PALMS PMO officials, PALMS achieved Initial Operating Capability—which was specified in the contract to be the point when the contractor would deliver an initial set of requirements to the government—in January 2015; however, the review for this major milestone was not documented. In September 2015, program officials stated that the results were not documented because this milestone did not align with the typical Initial Operating Capability milestone that is defined in DHS acquisition guidance. Specifically, DHS’s guidance defines it as when capabilities are first deployed to end users (PALMS capabilities were not deployed to any users until July 2015). Nevertheless, PALMS’s achieving Initial Operating Capability in January 2015 was still considered a major milestone that prompted a review. However, without documenting the results of the milestone review, it is unclear whether any action items were identified during this review and, if so, whether they have all been appropriately managed to closure.
• Although CBP officials stated that the results of their progress reviews with the vendor were typically documented, CBP was unable to provide the results of the milestone review conducted prior to deploying the PALMS learning management capabilities in July 2015. As such, it is unclear whether any action items were identified during this review and, if so, whether CBP had appropriately managed them to closure.
In the absence of documenting PALMS’s progress and milestone reviews, including all issues and corrective actions discussed, the program cannot demonstrate that these issues and corrective actions are appropriately managed. According to CMMI-ACQ and the PMBOK® Guide, key risk management practices include identifying risks, developing mitigation plans, and regularly tracking the status of risks and mitigation efforts. In particular, identifying risks and periodically reviewing them is the basis for sound and successful risk management. Additionally, risk mitigation plans should be developed and implemented when appropriate to proactively reduce the potential impact if a risk were to occur. While PALMS officials had identified program risks, developed associated mitigation plans, and documented them in the HRIT investment-level risk log (which is intended to be the centralized log containing all PALMS risks and mitigation plans, including both government- and vendor-identified risks), the program did not consistently maintain this log. Specifically, the PALMS risks in this log were out of date, the log did not accurately capture the status of all of the risks identified by the program, and it was unclear which risks and associated mitigation plans were being assessed on a monthly basis. 
For example:
• In the May 2015 risk log, 16 of the 17 active PALMS risks stated that the last time any action was taken to mitigate or close any of these risks was in 2014. However, the mitigation strategy details for 5 of these active risks included information related to decisions made in 2015. As such, it was unclear which risks and mitigation plans were regularly assessed and updated in the risk log, and when actions were last taken on each of the risks.
• One of the high-impact and high-probability risks from the May 2015 risk log stated that DHS needed to determine an interim solution for consolidating human resources-related data from DHS’s components by December 2014; however, the status of this risk had not been updated since August 2014, and it was unclear whether this was still a risk or had been realized as an issue.
Additionally, while the HRIT investment-level risk management plan identified that the PALMS program was to, among other things, generate weekly status reports to document the status of decisions made during risk review meetings and identify planned completion dates for each step of the risk mitigation plans, the program was not always complying with these processes. For example, the program was not developing the required weekly risk status reports or identifying planned completion dates for its risk mitigation plan steps. Program officials acknowledged that the PALMS risks in the HRIT risk log were out of date and inaccurate, and that the program was not complying with all of the documented processes in the HRIT risk management plan. Program officials attributed this to, among other things, the PMO’s focus being on meeting upcoming deadlines; as such, implementing certain processes identified in the HRIT risk management plan was not a priority. However, by not carrying out these key risk management functions, program officials introduced additional risk to the program. 
In October 2015, in response to our identifying these issues, PALMS officials stated that they were in the process of validating and updating the risks and mitigation plans in the HRIT risk log, as well as updating their risk management processes to align with the documented processes in the HRIT risk management plan. The program completed this validation update process in October 2015; however, the updated log continued to have these issues. For example, the PALMS PMO had not yet identified the planned completion dates for each mitigation step (where appropriate). Further, this updated log—which is intended to be the program’s centralized log of all government- and vendor-identified PALMS risks—did not contain all of the vendor-identified risks. For example, two component-specific risks that were identified in the vendor-maintained risk log were not included in the program’s centralized risk log. As such, it is unclear whether the program is appropriately managing these risks. Until a comprehensive risk log is established that accurately captures the status of all risks (including both government- and vendor-identified risks) and mitigation plans, and includes planned completion dates for each mitigation step (where appropriate), the program is limited in effectively managing all of its risks. According to CMMI-ACQ and PMBOK® Guide risk management best practices, effective risk management includes evaluating and categorizing risks using defined risk categories and parameters, such as probability and impact, and determining each risk’s relative priority. Risk prioritization helps to determine where resources for risk mitigation can be applied to provide the greatest positive impact on the program. The parameters for evaluating, categorizing, and prioritizing risks should include defined thresholds (e.g., for cost, schedule, performance) that, when exceeded, trigger management attention and mitigation activities. 
These risk parameters should be documented so that they are available for reference throughout the life of the project and are used to provide common and consistent criteria for prompting management attention. While the PALMS program had categorized its risks and assigned parameters to them, including probability and impact, the program did not prioritize its risks or document criteria for elevating them to management. Specifically, the PALMS PMO did not use the assigned parameters to determine each risk’s relative priority and overall risk level (i.e., high, medium, or low). PALMS officials acknowledged in June 2015 that the risks were not prioritized in the logs, but said that, based on the experience of the PALMS PMO staff, officials were able to determine each risk’s priority by reviewing the assigned probability and impact parameters. However, this is an inadequate method for managing risks: it introduces unnecessary subjectivity by relying heavily on officials to make prioritization decisions, rather than using the assigned parameters to determine and document each risk’s relative priority. Additionally, the program had not documented criteria for elevating component risks to the program management office. As mentioned earlier, each component is responsible for overseeing its own PALMS implementation project, while the program management office at headquarters is responsible for overseeing the implementation projects across the department. According to program officials, as part of this effort, each component is to follow the risk management processes documented in PALMS’s vendor-developed risk management plan (which is a separate plan from the HRIT-level risk management plan used by the program management office, as discussed earlier). 
While the PALMS vendor-developed risk management plan directed each component to track risks in a component-specific risk register, the plan did not establish criteria for when component-level risks need to be elevated to the PALMS PMO at headquarters. In September 2015, the PALMS program manager stated that all component-level risks that are rated red (i.e., high-probability and high-impact risks) are reported to headquarters. However, this guidance was not documented and, as such, the PALMS PMO did not have reasonable assurance that the components were knowledgeable about which risks to elevate, or that the components were appropriately elevating such risks. Program officials were unable to explain why this criterion was not documented, but in response to our concern, they directed the vendor to update the PALMS risk management plan to document it; the vendor completed this update in October 2015. In particular, the plan now specifies that all component-level risks that could impact when the PALMS system is to be deployed at each of the components should be elevated to the PALMS PMO and given a priority of high. Documenting the criteria for when risks need to be elevated to the PALMS PMO should help ensure that all appropriate risks are being elevated for review. However, until the program prioritizes its risks by determining each risk’s relative priority and overall risk level, DHS is hampered in its ability to ensure that the program’s attention and resources for risk mitigation are used in the most effective manner. Although the HRIT investment was initiated about 12 years ago with the intent to consolidate, integrate, and modernize the department’s human resources IT infrastructure, DHS has made very limited progress in achieving these goals. The minimal involvement of HRIT’s executive steering committee during a time when significant problems were occurring was a key factor in this lack of progress. 
This is particularly problematic given that the department’s ability to efficiently and effectively carry out its mission is significantly hampered by its fragmented human resources system environment and duplicative and paper-based processes. Moreover, DHS’s ineffective management of HRIT, such as the lack of an updated schedule and a life-cycle cost estimate, also contributed to the neglect this investment has experienced. Until DHS, among other things, maintains a schedule, develops a life-cycle cost estimate, tracks costs, and re-evaluates and updates the Human Capital Segment Architecture blueprint, the department will continue to be plagued by duplicative systems and an inefficient and ineffective human resources environment that impedes its ability to perform its mission. Additionally, until the PALMS program effectively addresses identified weaknesses in its project planning, project monitoring, and risk management practices and implements PALMS department-wide, DHS’s performance management processes will continue to be cumbersome, time-consuming, and primarily paper-based. Further, DHS will be limited in efficiently tracking and reporting accurate, comprehensive performance and learning management data across the organization, and could risk further implementation delays. To ensure that the HRIT investment receives necessary oversight and attention, we are recommending that the Secretary of Homeland Security direct the Under Secretary of Management to take the following two actions:

• Update the HRIT executive steering committee charter to establish the frequency with which HRIT executive steering committee meetings are to be held.

• Ensure that the HRIT executive steering committee is consistently involved in overseeing and advising HRIT, including approving key program management documents, such as HRIT’s operational plan, schedule, and planned cost estimate. 
To address HRIT’s poor progress and ineffective management, we are recommending that the Secretary of Homeland Security direct the Under Secretary of Management to direct the Chief Human Capital Officer to direct the HRIT investment to take the following six actions:

• Update and maintain a schedule estimate for when DHS plans to implement each of the strategic improvement opportunities.

• Develop a complete life-cycle cost estimate for the implementation of HRIT.

• Document and track all costs, including components’ costs, associated with HRIT.

• Establish time frames for re-evaluating the strategic improvement opportunities and associated projects in the Human Capital Segment Architecture Blueprint and determining how to move forward with HRIT.

• Evaluate the strategic improvement opportunities and projects within the Human Capital Segment Architecture Blueprint to determine whether they and the goals of the blueprint are still valid and reflect DHS’s HRIT priorities going forward, and update the blueprint accordingly.

• Update and maintain the department’s human resources system inventory.

To improve the PALMS program’s implementation of IT acquisition best practices, we are recommending that the Secretary of Homeland Security direct the Under Secretary of Management to direct the Chief Information Officer to direct the PALMS program office to take the following six actions:

• Establish a time frame for deciding whether PALMS will be fully deployed at FEMA and USCG, and determine an alternative approach if the learning and/or performance management capabilities of PALMS are deemed not feasible for ICE, FEMA, TSA, or USCG.

• Develop a comprehensive life-cycle cost estimate, including all government and contractor costs, for the PALMS program.

• Develop and maintain a single comprehensive schedule that includes all government and contractor activities, and includes all planned deployment milestones related to performance management.

• Track and monitor all costs associated with the PALMS program.

• Document PALMS’s progress and milestone reviews, including all issues and corrective actions discussed.

• Establish a comprehensive risk log that maintains an aggregation of all up-to-date risks (including both government- and vendor-identified) and associated mitigation plans. Additionally, within the comprehensive risk log, identify and document planned completion dates for each risk mitigation step (where appropriate), and prioritize the risks by determining each risk’s relative priority and overall risk level.

We received written comments on a draft of this report from the Director of DHS’s Departmental GAO-OIG Liaison Office. The comments are reprinted in appendix II. In its comments, the department concurred with our 14 recommendations and provided estimated completion dates for implementing each of them. For example, by April 30, 2016, the Under Secretary of Management plans to ensure that the HRIT executive steering committee is consistently involved in overseeing and advising HRIT, and the committee is expected to be reviewed quarterly by the Acquisition Review Board. These planned actions, if implemented effectively, should help DHS address the intent of our recommendations. We also received technical comments from DHS headquarters and component officials, which we have incorporated, as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Homeland Security and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. Should you or your staffs have any questions on information discussed in this report, please contact Carol Cha at (202) 512-4456, ChaC@gao.gov, or Rebecca Gambler at (202) 512-6912, GamblerR@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Our objectives were to (1) evaluate the progress the Department of Homeland Security (DHS) has made in implementing the Human Resources Information Technology (HRIT) investment and how effectively DHS has managed the investment since completing the Human Capital Segment Architecture in August 2011, (2) describe whether DHS has justified its investment in the Performance and Learning Management System (PALMS) program, (3) determine whether PALMS is being implemented enterprise-wide, and (4) evaluate the extent to which PALMS is implementing selected information technology (IT) acquisition best practices. To address the first part of our first objective—to evaluate the progress DHS had made in implementing the HRIT investment—we compared HRIT’s goals, scope, and implementation time frames (as defined in the Human Capital Segment Architecture Blueprint, which was completed in August 2011) to the investment’s actual accomplishments. Specifically, we compared the completed and in-progress HRIT projects against the strategic improvement opportunities and projects that were outlined in the blueprint to determine which of the improvement opportunities and projects had been fully implemented or were in progress. We also compared DHS’s planned schedule for implementing the improvement opportunities and projects against DHS’s current planned schedule for implementing them as of November 2015. Additionally, we interviewed DHS officials from the HRIT investment, Office of the Chief Information Officer (OCIO), Office of the Chief Human Capital Officer (OCHCO), and DHS’s components to discuss the steps taken to implement HRIT, address the strategic improvement opportunities and projects in the blueprint, and meet the goals of the investment. 
In addressing the second part of our first objective—to evaluate how effectively DHS managed the investment—we analyzed documentation, such as the investment’s planned and updated completion dates, program management briefings, the blueprint, cost estimates, and budget documentation, and compared it against relevant cost and schedule best practices identified by GAO, CMMI-ACQ, and the PMBOK® Guide. These best practices included developing and maintaining a schedule estimate; developing a life-cycle cost estimate; and tracking program expenditures. To determine the amount spent to date on HRIT, we asked officials from DHS headquarters and each of the eight components to provide expenditure information on HRIT since the investment began in 2003; officials were unable to provide complete information. As such, we were unable to identify the total amount spent on the investment; we discuss this limitation earlier in the report. We also analyzed DHS’s human capital investment guidance, including the 2010 Deputy Secretary memorandum that prohibited component spending on enhancements to existing human resources systems or acquisitions of new human resources solutions, unless those expenditures have been approved by OCHCO or OCIO, and compared it to the components’ current investments in human resources systems, such as those listed in DHS’s fiscal year 2016 human capital portfolio. Additionally, we interviewed officials from the OCIO, OCHCO, and DHS’s eight components to obtain additional information on how HRIT reduced or will reduce duplicative human resources systems. To describe whether DHS justified its investment in the PALMS program, we analyzed documentation, such as the program’s business case and the documented analysis of alternatives that was conducted to identify recommended approaches for pursuing a commercial off-the-shelf learning management system. 
We used this information to determine the various alternative solutions that DHS assessed for delivering enterprise-wide performance and learning management capabilities. Additionally, we reviewed program management briefings provided to the HRIT Executive Steering Committee that outlined, for example, the proposed solution and the rationale for it. We also interviewed appropriate DHS and PALMS officials for further information regarding the process DHS used to conduct the analysis of alternatives and other steps the department took to determine its preferred solution, including determining whether DHS could use existing shared services that were being used by other federal agencies. To determine whether PALMS is being implemented department-wide, we analyzed the program’s acquisition plan and original schedule for implementing the system department-wide, and compared them against actual program status documentation and the program’s current implementation schedule. We also obtained and analyzed information from DHS officials, the PALMS headquarters program management office, and DHS’s components on each component’s implementation of PALMS, including identifying which PALMS capabilities each component planned to implement, the number of planned PALMS users, and the reported reasons why certain components were not currently planning to implement PALMS. To evaluate the extent to which PALMS implemented selected IT acquisition best practices, we analyzed the program’s IT acquisition documentation and compared it to relevant project planning, project monitoring, and risk management best practices—including CMMI-ACQ and PMBOK® Guide practices, and best practices identified by GAO. 
Specifically, we analyzed program documentation, including the acquisition plan, requirements management plan, risk management plan, cost and schedule estimates, program management review briefings, meeting minutes, risk logs, and risk mitigation plans to determine the extent to which the program’s acquisition processes were consistent with the best practices. Additionally, we interviewed officials from HRIT, PALMS, OCIO, OCHCO, and DHS’s eight components to obtain additional information on the program’s risk management, project planning, and project monitoring processes. To assess the reliability of the data that we used to support the findings in this report, we reviewed relevant program documentation to substantiate evidence obtained through interviews with agency officials. We determined that the data used in this report were sufficiently reliable, with the exception of expenditure information provided by the HRIT investment and selected risk data provided by the PALMS program. We discuss limitations with these data in the report. We have also made appropriate attribution indicating the sources of the data. We conducted this performance audit from March 2015 to February 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contacts named above, the following staff also made key contributions to this report: Shannin O’Neill, Assistant Director; Christopher Businsky; Rebecca Eyler; Javier Irizarry; Emily Kuhn; and David Lysy.
DHS's human resources administrative environment includes fragmented systems, duplicative and paper-based processes, and little uniformity of data management practices, which, according to DHS, are compromising the department's ability to effectively carry out its mission. DHS initiated HRIT in 2003 to consolidate, integrate, and modernize DHS's human resources information technology infrastructure. In 2011, DHS redefined HRIT's scope and implementation time frames. GAO was asked to review DHS's efforts to implement the HRIT investment. GAO's objectives included, among others, evaluating the progress DHS has made in implementing the HRIT investment. GAO compared HRIT's goals and scope to the investment's actual accomplishments, and compared DHS's planned schedule for implementing strategic improvement opportunities (key areas identified by DHS as needing improvement) against its current schedule. The Department of Homeland Security (DHS) has made very little progress in implementing its Human Resources Information Technology (HRIT) investment in the last several years. This investment includes 15 improvement opportunities; as of November 2015, DHS had fully implemented only 1 (see table below).

[Table: implementation status of the 15 strategic improvement opportunities. Key: ● fully implemented; ◐ partially implemented; ○ not yet started. Source: GAO analysis of data provided by DHS officials. Note: dates reflect the last month of the quarter in which the opportunities were planned to be complete.]

HRIT's limited progress was due in part to its executive steering committee—the investment's core oversight and advisory body—being minimally involved, such as meeting only once during a nearly 2-year period when major problems, including schedule delays, were occurring. As a result, key governance activities, such as approval of HRIT's operational plan, were not completed. 
Officials acknowledged that HRIT should be re-evaluated and took early steps to do so (i.e., meeting to discuss the need to re-evaluate); however, specific actions and time frames have not been determined. Until DHS takes key actions to re-evaluate and manage this neglected investment, it is unknown when its human capital weaknesses will be addressed. GAO is making 14 recommendations to DHS to, among other things, address HRIT's poor progress and ineffective management. For example, GAO recommends that the HRIT executive steering committee be consistently involved in overseeing and advising the investment. In addition, GAO recommends that DHS evaluate the HRIT investment to determine whether its goals are still valid and reflect the department's priorities. DHS concurred with all 14 recommendations and provided estimated completion dates for implementing each of them.
In light of delays in completing security clearance background investigations and adjudicative decisions, as well as a significant backlog of clearances to be processed, Congress passed the Intelligence Reform and Terrorism Prevention Act of 2004 (IRTPA), which set objectives and established requirements for improving the personnel security clearance process, including improving the timeliness of the clearance process, achieving interagency reciprocity, establishing an integrated database to track investigative and adjudicative information, and evaluating available technology for investigations and adjudications. In July 2008, Executive Order 13467 designated the DNI as the Security Executive Agent, who is responsible for developing uniform and consistent policies and procedures to ensure the effective, efficient, and timely completion of background investigations and adjudications relating to determinations of eligibility for access to classified information and eligibility to hold a sensitive position. Additionally, the order designated the Director of OPM as the Suitability Executive Agent. Determinations of suitability for government employment include consideration of aspects of an individual’s character or conduct. Accordingly, the Suitability Executive Agent is responsible for developing and implementing uniform and consistent policies and procedures to ensure the effective, efficient, and timely completion of investigations and adjudications relating to determinations of suitability. The order also established a Suitability and Security Clearance Performance Accountability Council, commonly known as the Performance Accountability Council, to be the government-wide governance structure responsible for driving implementation and overseeing security and suitability reform efforts. 
Further, the executive order designated the Deputy Director for Management at the Office of Management and Budget (OMB) as the chair of the council and states that agency heads shall assist the Performance Accountability Council and Executive Agents in carrying out any function under the order, as well as implementing any policies or procedures developed pursuant to the order. The relevant orders and regulations that guide the process for designating national security positions include executive orders and federal regulations. For example, Executive Order 10450, which was originally issued in 1953, makes the heads of departments or agencies responsible for establishing and maintaining effective programs for ensuring that civilian employment and retention is clearly consistent with the interests of national security. Agency heads are also responsible for designating positions within their respective agencies as sensitive if the occupant of that position could, by virtue of the nature of the position, bring about a material adverse effect on national security. In addition, Executive Order 12968, issued in 1995, makes the heads of agencies—including executive branch agencies and the military departments—responsible for establishing and maintaining an effective program to ensure that access to classified information by each employee is clearly consistent with the interests of national security. This order also states that, subject to certain exceptions, eligibility for access to classified information shall only be requested and granted on the basis of a demonstrated, foreseeable need for access. 
Further, part 732 of Title 5 of the Code of Federal Regulations provides requirements and procedures for the designation of national security positions, which include positions that (1) involve activities of the government that are concerned with the protection of the nation from foreign aggression or espionage, and (2) require regular use of or access to classified national security information. Part 732 of Title 5 of the Code of Federal Regulations also states that most federal government positions that could bring about, by virtue of the nature of the position, a material adverse effect on national security must be designated as sensitive positions and require a sensitivity level designation. The sensitivity level designation determines the type of background investigation required, with positions designated at a greater sensitivity level requiring a more extensive background investigation. Part 732 establishes three sensitivity levels—special-sensitive, critical-sensitive, and noncritical-sensitive—which are described in figure 1. According to OPM, positions that an agency designates as special-sensitive or critical-sensitive require a background investigation that typically results in a top secret clearance. Noncritical-sensitive positions typically require an investigation that supports a secret or confidential clearance. OPM also defines non-sensitive positions, which do not have a national security element but still require a designation of risk for suitability purposes. That risk level informs the type of investigation required for those positions. Those investigations cover aspects of an individual’s character or conduct that may have an effect on the integrity or efficiency of the service. As previously mentioned, DOD and DHS grant the most security clearances. Figure 1 illustrates the process, generally used government-wide, that both DOD and DHS follow to determine whether a federal civilian position needs a personnel security clearance. 
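The relationship described above between a position's sensitivity level and the investigation it typically requires can be summarized in a short sketch. The function name and return strings below are our own illustrative labels, not OPM terminology; the mapping itself follows OPM's characterization as summarized in this report.

```python
# Illustrative mapping from a position's sensitivity level to the kind of
# background investigation typically required, per OPM as characterized in
# this report. Labels and return strings are hypothetical.
def investigation_for(sensitivity_level: str) -> str:
    level = sensitivity_level.lower()
    if level in ("special-sensitive", "critical-sensitive"):
        # Typically results in a top secret clearance.
        return "investigation supporting a top secret clearance"
    if level == "noncritical-sensitive":
        # Typically supports a secret or confidential clearance.
        return "investigation supporting a secret or confidential clearance"
    if level == "non-sensitive":
        # No national security element; scoped by the suitability risk level.
        return "suitability investigation scoped by the position's risk level"
    raise ValueError(f"unknown sensitivity level: {sensitivity_level}")
```

Note that, as the report later explains, this mapping only determines the investigation's extent; whether a position actually requires a clearance rests on a separate determination of whether it needs access to classified information.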
During the course of our 2012 review, we found that the executive branch had not issued clearly defined policy guidance for determining when a federal civilian position needs a security clearance. In the absence of such guidance, agencies are using a position designation tool that OPM designed to determine the sensitivity and risk levels of civilian positions that, in turn, inform the type of investigation needed. Further, we found that OPM’s position designation tool lacked input from the DNI and that audits had revealed problems with the use of OPM’s tool, leading to some incorrect position designations. The first step in the personnel security clearance process is to determine if the occupant of a federal position needs a security clearance to effectively and efficiently conduct work. However, we found in July 2012 that the DNI had not provided agencies with clearly defined policy through regulation or other guidance to help ensure that executive branch agencies use appropriate and consistent criteria when determining if positions require a security clearance. According to Executive Order 13467, issued in June 2008, the DNI, as the Security Executive Agent, is responsible for developing uniform policies and procedures to ensure the effective, efficient, and timely completion of investigations and adjudications relating to determinations of eligibility for access to classified information or to hold a sensitive position. Further, the order states that agency heads shall assist the Performance Accountability Council and Executive Agents in carrying out any function under the order, as well as implementing any policies or procedures developed pursuant to the order. Although agency heads retain the flexibility to make determinations regarding which positions in their agency require a security clearance, the DNI, in its capacity as Security Executive Agent, is well positioned to provide guidance to help align the personnel security clearance process. 
Determining the requirements of a federal position includes assessing both the risk and sensitivity level associated with a position, which includes consideration of whether that position requires access to classified information and, if required, the level of access. Security clearances are generally categorized into three levels of access: top secret, secret, and confidential. The level of classification denotes the degree of protection required for information and the amount of damage that unauthorized disclosure could reasonably be expected to cause to national defense or foreign relations. In the absence of clearly defined guidance to help ensure that executive branch agencies use appropriate and consistent criteria when determining if positions require a personnel security clearance, agencies are using an OPM-designed tool to determine the sensitivity and risk levels of civilian positions, which, in turn, inform the type of investigation needed. We reported in July 2012 that, in order to assist with position designation, the Director of OPM—the Executive Agent for Suitability—has developed a process that includes a position designation system and corresponding automated tool to guide agencies in determining the proper sensitivity level for the majority of federal positions. This tool—namely, the Position Designation of National Security and Public Trust Positions—enables a user to evaluate a position’s national security and suitability requirements so as to determine a position’s sensitivity and risk levels, which in turn dictate the type of background investigation that will be required for the individual who will occupy that position. In most agencies outside the Intelligence Community, OPM conducts the background investigations for both suitability and security clearance purposes. The tool does not directly determine whether a position requires a clearance, but rather helps determine the sensitivity level of the position. 
The determination to grant a clearance is based on whether a position requires access to classified information and, if access is required, the responsible official will designate the position to require a clearance. OPM developed the position designation system and automated tool for multiple reasons. First, OPM determined through a 2007 initiative that its existing regulations and guidance for position designation were complex and difficult to apply, resulting in inconsistent designations. As a result of a recommendation from the initiative, OPM created a simplified position designation process in 2008. Additionally, OPM officials noted that the tool is to support the goals of the security and suitability reform efforts, which require proper designation of national security and suitability positions. OPM first introduced the automated tool in November 2008, and issued an update of the tool in 2010. In August 2010, OPM issued guidance (1) recommending all agencies that request OPM background investigations use the tool, and (2) requiring agencies to use the tool for all positions in the competitive service, positions in the excepted service where the incumbent can be noncompetitively converted to the competitive service, and career appointments in the Senior Executive Service. Both DOD and DHS components use the tool. In addition, DOD issued guidance in September 2011 and August 2012 requiring its personnel to use OPM’s tool to determine the proper position sensitivity designation. A DHS instruction requires personnel to designate all DHS positions—including positions in the DHS components—by using OPM’s position sensitivity designation guidance, which is the basis of the tool. Office of the Director of National Intelligence (ODNI) officials told us that they believe OPM’s tool is useful for determining a position’s sensitivity level. 
However, although the DNI was designated as the Security Executive Agent in 2008, ODNI officials noted that the DNI did not have input into recent revisions of OPM’s position designation tool. This lack of coordination for revising the tool exists, in part, because the execution of the roles and relationships between the Director of OPM and the DNI as Executive Agents are still evolving, although Executive Order 13467 defines responsibilities for each Executive Agent. Accordingly, we found in July 2012 that the Director of OPM and the DNI had not fully collaborated in executing their respective roles in the process for determining position designations. For example, OPM has had long-standing responsibility for establishing standards with respect to suitability for most federal government positions. Accordingly, the sections of the tool to be used for evaluating a position’s suitability risk level are significantly more detailed than the sections designed to aid in designating the national security sensitivity level of the position. While most of OPM’s position designation system, which is the basis of the tool, is devoted to suitability issues, only two pages are devoted to national security issues. Moreover, OPM did not seek to collaborate with the DNI when updating the tool in 2010. During our review completed in 2012, human capital and security officials from DOD and DHS and the selected components we examined affirmed that they were using the existing tool to determine the sensitivity level required by a position. However, in the absence of clearly defined policy from the DNI and the lack of collaborative input into the tool’s design, officials explained that they sometimes had difficulty in using the tool to designate the sensitivity level of national security positions. 
OPM regularly conducts audits of its executive branch customer agency personnel security and suitability programs, which include a review of position designation to assess the agencies’ alignment with OPM’s position designation guidance. In the audit reports we obtained as part of our 2012 review, OPM found examples of inconsistency between agency position designation and OPM guidance, both before and after the implementation of OPM’s tool. For instance, prior to the implementation of the tool, in a 2006 audit of an executive branch agency, OPM found that its sensitivity designations differed from the agency’s designations in 13 of 23 positions. More recently, after the implementation of the tool, in an April 2012 audit of a DOD agency, OPM assessed the sensitivity levels of 39 positions, and OPM’s designations differed from the agency’s designations in 26 of those positions. In the April 2012 report, the DOD agency agreed with OPM’s recommendations related to position designation, and the audit report confirmed that the agency had submitted evidence of corrective action in response to the position designation recommendations. OPM provided us with the results of 10 audits that it had conducted between 2005 and 2012, and 9 of those audit reports reflected inconsistencies between OPM position designation guidance and determinations of position sensitivity conducted by the agency. OPM officials noted, however, that they do not have the authority to direct agencies to make different designations because Executive Order 10450 provides agency heads with the ultimate responsibility for designating which positions are sensitive positions. ODNI conducted a separate position designation audit in response to the Intelligence Authorization Act for Fiscal Year 2010. In that 2011 report, ODNI found that the processes the executive branch agencies followed differed somewhat depending on whether the position was civilian, military, or contractor.
During the course of our 2012 review, DOD and DHS officials raised concerns regarding the guidance provided through the tool and expressed that they had difficulty implementing it. Specifically, officials from DHS’s U.S. Immigration and Customs Enforcement stated that the use of the tool occasionally resulted in inconsistency, such as over- or underdesignating a position, and expressed a need for additional clear, easily interpreted guidance on designating national security positions. DOD officials stated that they have had difficulty implementing the tool because it focuses more on suitability than security, and the national security aspects of DOD’s positions are of more concern to them than the suitability aspects. Further, an official from DOD’s Office of the Under Secretary of Defense for Personnel and Readiness stated that the tool and DOD policy do not always align and that the tool does not cover the requirements for some DOD positions. For example, DOD’s initial implementing guidance on using the tool stated that terms differ between DOD’s personnel security policy and the tool, and the tool might suggest different position sensitivity levels than DOD policy required. Also, officials from the Air Force Personnel Security Office told us that they had challenges using the tool to classify civilian positions, including difficulty in linking the tool with Air Force practices for position designation. Moreover, an Air Force official stated a concern that the definition for national security positions is broadly written and could be considered to include all federal positions. 
Because we found that the executive branch had not provided clear guidance for the designation of national security positions, we recommended that the DNI, in coordination with the Director of OPM and other executive branch agencies as appropriate, issue clearly defined policy and procedures for federal agencies to follow when determining if federal civilian positions require a security clearance. In written comments on our July 2012 report, the ODNI concurred with this recommendation and agreed that executive branch agencies require simplified and uniform policy guidance to assist in determining appropriate sensitivity designations. We routinely monitor the status of agency actions to address our prior report recommendations. As part of that process, we found that a January 25, 2013 presidential memo authorized the DNI and OPM to jointly issue revisions to part 732 of Title 5 of the Code of Federal Regulations, which is intended to provide requirements and procedures for the designation of national security positions. Subsequently, ODNI and OPM drafted the proposed regulation, published it in the Federal Register on May 28, 2013, and obtained public comment on the regulation through June 27, 2013. ODNI and OPM officials told us they plan to jointly adjudicate public comments and prepare the final regulation for approval from OMB during October 2013. In reviewing the proposed regulation, we found that it would, if finalized in its current form, meet the intent of our recommendation to issue clearly defined policy and procedures for federal agencies to follow when determining if federal civilian positions require a security clearance. Specifically, the proposed regulation appears to add significant detail regarding the types of duties that would lead to a critical-sensitive designation, or those national security positions which have the potential to cause exceptionally grave damage to national security. 
Critical-sensitive positions detailed in the proposed regulation include, among several others, positions that develop or approve war plans, major or special military operations, or critical and extremely important items of war; national security policy-making or policy-determining positions; positions with investigative duties, including the handling of completed counterintelligence or background investigations; positions with direct involvement in diplomatic relations and negotiations; positions in which the occupants have the ability to independently damage public health and safety with devastating results; and positions in which the occupants have the ability to independently compromise or exploit biological select agents or toxins, chemical agents, nuclear materials, or other hazardous materials. Further, we also recommended in 2012 that once clear policy and procedures for position designation are issued, the DNI and the Director of OPM should collaborate in their respective roles as Executive Agents to revise the position designation tool to reflect that guidance. ODNI concurred with this recommendation in its written comments on our report and stated that it planned to work with OPM and other executive branch agencies to develop a position designation tool that provides detailed descriptions of the types of positions where the occupant could bring about a material adverse impact to national security due to the duties and responsibilities of that position. OPM also concurred with this recommendation, stating that it was committed to revising the tool after revisions to position designation regulations are complete. The proposed revisions to part 732 of Title 5 of the Code of Federal Regulations have appeared in the Federal Register but have not yet been issued; we recommended that the position designation tool be revised once policies and procedures for position designation are issued.
We note that the proposed regulation states that OPM issues, and periodically revises, a Position Designation System, which describes in greater detail agency requirements for designating positions that could bring about a material adverse effect on the national security. Further, the proposed regulation would require that agencies use OPM’s Position Designation System to designate the sensitivity level of each position covered by the regulation. As part of our ongoing processes to monitor agency actions in response to our recommendations, ODNI and OPM officials told us that actions were underway to revise the tool. For example, officials stated that an interagency working group had been established to oversee the updates to the current tool, while also determining the way forward to creating a new tool, and that officials were developing a project plan to guide the revision process. We plan to continue to review OPM guidance on the Position Designation System and to review steps taken by OPM and the DNI to revise the associated position designation tool to determine if the revised regulation and actions taken to revise the tool meet the intent of our recommendation. In July 2012, we reported that the executive branch did not have a consistent process for reviewing and validating existing security clearance requirements for federal civilian positions. According to Executive Order 12968, the number of employees that each agency determines is eligible for access to classified information shall be kept to the minimum required, and, subject to certain exceptions, eligibility shall be requested or granted only on the basis of a demonstrated, foreseeable need for access. Additionally, Executive Order 12968 states that access to classified information shall be terminated when an employee no longer has a need for access, and that requesting or approving eligibility for access in excess of the actual requirements is prohibited. 
Also, Executive Order 13467 authorizes the DNI to issue guidelines or instructions to the heads of agencies regarding, among other things, uniformity in determining eligibility for access to classified information. However, we reported in 2012 that the DNI had not issued policies and procedures for agencies to periodically review and revise or validate the existing clearance requirements for their federal civilian positions to ensure that clearances are (1) kept to a minimum and (2) reserved only for those positions with security clearance requirements that are in accordance with the national security needs of the time. Position descriptions not only identify the major duties and responsibilities of the position, but they also play a critical role in an agency’s ability to recruit, develop, and retain the right number of individuals with the necessary skills and competencies to meet its mission. Position descriptions may change over time, as may the national security environment, as was observed after September 11, 2001. During our 2012 review of several DOD and DHS components, we found that officials were aware of the requirement to keep the number of security clearances to a minimum but were not always subject to a standard requirement to review and validate the security clearance needs of existing positions on a periodic basis. We found, instead, that agencies’ policies provide for a variety of practices for reviewing the clearance needs of federal civilian positions. In addition, agency officials told us that their policies are implemented inconsistently. DOD’s personnel security regulation and other guidance provide DOD components with criteria to consider when determining whether a position is sensitive or requires access to classified information, and some DOD components also have developed their own guidance.
For example, we found the following: An Air Force Instruction requires commanders to review all military and civilian position designations annually to ensure the proper level of access to classified information. The Army issued a memorandum in 2006 that required an immediate review of position sensitivity designations for all Army civilian positions by the end of the calendar year and required subsequent reviews biennially. That memorandum further states that if a review warrants a change in position sensitivity affecting an individual’s access to classified information, then access should be administratively adjusted and the periodic reinvestigation submitted accordingly. However, officials explained that improper position sensitivity designations continue to occur in the Army because they have a limited number of personnel in the security office relative to workload, and they only spot check clearance requests to ensure that they match the level of clearance required. Officials from DOD’s Washington Headquarters Services told us that they have an informal practice of reviewing position descriptions and security designations for vacant or new positions, but they do not have a schedule for conducting periodic reviews of personnel security designations for already-filled positions. According to DHS guidance, supervisors are responsible for ensuring that (1) position designations are updated when a position undergoes major changes (e.g., changes in missions and functions, job responsibilities, work assignments, legislation, or classification standards), and (2) position security designations are assigned as new positions are created. Some components have additional requirements to review position designation more regularly to cover positions other than those newly created or vacant. For example, U.S. 
Coast Guard guidance states that hiring officials and supervisors should review position descriptions even when there is no vacancy and, as appropriate, either revise or review them. According to officials in U.S. Immigration and Customs Enforcement, supervisors are supposed to review position descriptions annually during the performance review process to ensure that the duties and responsibilities on the position description are up-to-date and accurate. However, officials stated that U.S. Immigration and Customs Enforcement does not have policies or requirements in place to ensure any particular level of detail in that review. Some of the components we met with as part of our 2012 review were, at that time, in the process of conducting a one-time review of position designations. In 2012, Transportation Security Administration officials stated that they reevaluated all of their position descriptions during the previous 2 years because the agency determined that the reevaluation of its position designations would improve operational efficiency by ensuring that positions were appropriately designated by using OPM’s updated position designation tool. Further, those officials told us that they review position descriptions as positions become vacant or are created. Between fiscal years 2010 and 2011, while the Transportation Security Administration’s overall workforce increased from 61,586 to 66,023, the number of investigations for top secret clearances decreased from 1,483 to 1,127. Conducting background investigations is costly. The federal government spent over $1 billion to conduct background investigations in fiscal year 2011. Furthermore, this does not include the costs for the adjudication or other phases of the personnel security clearance process. DOD and DHS officials acknowledged that overdesignating a position can result in expenses for unnecessary investigations.
When a position is overdesignated, additional resources are unnecessarily spent conducting the investigation and adjudication of a background investigation that exceeds agency requirements. Specifically, the investigative workload for a top secret clearance is about 20 times greater than that of a secret clearance because it must be periodically reinvestigated twice as often as secret clearance investigations (every 5 years versus every 10 years) and requires 10 times as many investigative staff hours. The fiscal year 2014 base price for an initial top secret clearance investigation conducted by OPM is $3,959 and the cost of a periodic reinvestigation is $2,768. The base price of an investigation for a secret clearance is $272. If issues are identified during the course of an investigation for a secret clearance, additional costs may be incurred. Agencies employ varying practices because the DNI has not established a requirement that executive branch agencies consistently review and revise or validate existing position designations on a recurring basis. Such a recurring basis could include reviewing position designations during the periodic reinvestigation process. Without a requirement to consistently review, revise, or validate existing security clearance position designations, executive branch agencies—such as DOD and DHS—may be hiring and budgeting for both initial and periodic security clearance investigations using position descriptions and security clearance requirements that do not reflect national security needs. Finally, since reviews are not being done consistently, DOD, DHS, and other executive branch agencies cannot have reasonable assurance that they are keeping to a minimum the number of positions that require security clearances on the basis of a demonstrated and foreseeable need for access. 
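The workload and price figures above can be combined in a simple back-of-the-envelope comparison. The following Python sketch is illustrative only and is not part of the report's methodology; it uses the fiscal year 2014 base prices quoted in this statement and omits the secret-clearance reinvestigation, whose price the statement does not give.

```python
# Fiscal year 2014 OPM base prices quoted in this statement.
TOP_SECRET_INITIAL = 3959   # initial top secret investigation
TOP_SECRET_REINVEST = 2768  # top secret periodic reinvestigation
SECRET_INITIAL = 272        # initial secret investigation

# Top secret clearances are reinvestigated every 5 years, secret every 10,
# and a top secret investigation takes 10 times as many staff hours.
reinvestigation_frequency_ratio = 10 / 5   # twice as often
staff_hours_ratio = 10
workload_ratio = reinvestigation_frequency_ratio * staff_hours_ratio
print(f"Relative reinvestigation workload: {workload_ratio:.0f}x")  # 20x

# Over a 10-year window, a top secret clearance incurs the initial
# investigation plus one periodic reinvestigation at the 5-year mark.
ten_year_top_secret = TOP_SECRET_INITIAL + TOP_SECRET_REINVEST
print(f"Top secret, 10 years: ${ten_year_top_secret:,}")  # $6,727
print(f"Secret, initial only: ${SECRET_INITIAL:,}")       # $272
```

Even before counting the secret reinvestigation, the stated ratios imply roughly a 20-fold difference in investigative workload and a per-position price gap of several thousand dollars, which is why overdesignation is costly at scale.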
Therefore, we recommended in July 2012 that the DNI, in coordination with the Director of OPM and other executive branch agencies as appropriate, issue guidance to require executive branch agencies to periodically review and revise or validate the designation of all federal civilian positions. In written comments on that report, the ODNI concurred with this recommendation and stated that as duties and responsibilities of federal positions may be subject to change, it planned to work with OPM and other executive branch agencies to ensure that position designation policies and procedures include a provision for periodic reviews. OPM stated in its written comments to our report that it would work with the DNI on guidance concerning periodic reviews of existing designations, once pending proposed regulations are finalized. ODNI and OPM are currently in the process of finalizing revisions to the position designation federal regulation. As part of our ongoing processes to routinely monitor the status of agency actions to address our prior recommendations, we note that the proposed regulation would newly require agencies to conduct a one-time reassessment of position designations within 24 months of the final regulation’s effective date, which is an important step toward ensuring that the current designations of national security positions are accurate. However, the national security environment and the duties and descriptions of positions may change over time, underscoring the importance of periodic review or validation. The proposed regulation does not appear to require a periodic reassessment of positions’ need for access to classified information as we recommended. We believe this needs to be done and, as part of monitoring the status of our recommendation, we will continue to review the finalized federal regulation and any related guidance that directs position designation to determine whether periodic review or validation is required.
In conclusion, the correct designation of national security positions is a critical first step for safeguarding national security and preventing unnecessary and costly background investigations. We are encouraged that in response to our recommendations, ODNI and OPM have drafted a revised federal regulation and plan to jointly address comments and finalize these regulations. We will continue to monitor the outcome of the final federal regulation as well as other agency actions to address our remaining recommendations. Chairman Tester, Ranking Member Portman, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to answer any questions that you or the other Members of the Subcommittee may have at this time. For further information on this testimony, please contact Brenda S. Farrell, Director, Defense Capabilities and Management, who may be reached at (202) 512-3604 or farrellb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony include Lori A. Atkinson (Assistant Director), Renee Brown, Sara Cradic, Jeffrey Heit, Erik Wilkins-McKee, Suzanne M. Perkins, and Michael Willems. Personnel Security Clearances: Opportunities Exist to Improve Quality Throughout the Process. GAO-14-186T. Washington, D.C.: November 13, 2013. Personnel Security Clearances: Full Development and Implementation of Metrics Needed to Measure Quality of Process. GAO-14-157T. Washington, D.C.: October 31, 2013. Personnel Security Clearances: Further Actions Needed to Improve the Process and Realize Efficiencies. GAO-13-728T. Washington, D.C.: June 20, 2013. Managing for Results: Agencies Should More Fully Develop Priority Goals under the GPRA Modernization Act. GAO-13-174. Washington, D.C.: April 19, 2013. Security Clearances: Agencies Need Clearly Defined Policy for Determining Civilian Position Requirements. GAO-12-800. 
Washington, D.C.: July 12, 2012. Personnel Security Clearances: Continuing Leadership and Attention Can Enhance Momentum Gained from Reform Effort. GAO-12-815T. Washington, D.C.: June 21, 2012. 2012 Annual Report: Opportunities to Reduce Duplication, Overlap and Fragmentation, Achieve Savings, and Enhance Revenue. GAO-12-342SP. Washington, D.C.: February 28, 2012. Background Investigations: Office of Personnel Management Needs to Improve Transparency of Its Pricing and Seek Cost Savings. GAO-12-197. Washington, D.C.: February 28, 2012. GAO’s 2011 High-Risk Series: An Update. GAO-11-394T. Washington, D.C.: February 17, 2011. High-Risk Series: An Update. GAO-11-278. Washington, D.C.: February 16, 2011. Personnel Security Clearances: Overall Progress Has Been Made to Reform the Governmentwide Security Clearance Process. GAO-11-232T. Washington, D.C.: December 1, 2010. Personnel Security Clearances: Progress Has Been Made to Improve Timeliness but Continued Oversight Is Needed to Sustain Momentum. GAO-11-65. Washington, D.C.: November 19, 2010. DOD Personnel Clearances: Preliminary Observations on DOD’s Progress on Addressing Timeliness and Quality Issues. GAO-11-185T. Washington, D.C.: November 16, 2010. Personnel Security Clearances: An Outcome-Focused Strategy and Comprehensive Reporting of Timeliness and Quality Would Provide Greater Visibility over the Clearance Process. GAO-10-117T. Washington, D.C.: October 1, 2009. Personnel Security Clearances: Progress Has Been Made to Reduce Delays but Further Actions Are Needed to Enhance Quality and Sustain Reform Efforts. GAO-09-684T. Washington, D.C.: September 15, 2009. Personnel Security Clearances: An Outcome-Focused Strategy Is Needed to Guide Implementation of the Reformed Clearance Process. GAO-09-488. Washington, D.C.: May 19, 2009. DOD Personnel Clearances: Comprehensive Timeliness Reporting, Complete Clearance Documentation, and Quality Measures Are Needed to Further Improve the Clearance Process. GAO-09-400. 
Washington, D.C.: May 19, 2009. High-Risk Series: An Update. GAO-09-271. Washington, D.C.: January 2009. Personnel Security Clearances: Preliminary Observations on Joint Reform Efforts to Improve the Governmentwide Clearance Eligibility Process. GAO-08-1050T. Washington, D.C.: July 30, 2008. Personnel Clearances: Key Factors for Reforming the Security Clearance Process. GAO-08-776T. Washington, D.C.: May 22, 2008. Employee Security: Implementation of Identification Cards and DOD’s Personnel Security Clearance Program Need Improvement. GAO-08-551T. Washington, D.C.: April 9, 2008. Personnel Clearances: Key Factors to Consider in Efforts to Reform Security Clearance Processes. GAO-08-352T. Washington, D.C.: February 27, 2008. DOD Personnel Clearances: DOD Faces Multiple Challenges in Its Efforts to Improve Clearance Processes for Industry Personnel. GAO-08-470T. Washington, D.C.: February 13, 2008. DOD Personnel Clearances: Improved Annual Reporting Would Enable More Informed Congressional Oversight. GAO-08-350. Washington, D.C.: February 13, 2008. DOD Personnel Clearances: Delays and Inadequate Documentation Found for Industry Personnel. GAO-07-842T. Washington, D.C.: May 17, 2007. High-Risk Series: An Update. GAO-07-310. Washington, D.C.: January 2007. DOD Personnel Clearances: Additional OMB Actions Are Needed to Improve the Security Clearance Process. GAO-06-1070. Washington, D.C.: September 28, 2006. DOD Personnel Clearances: New Concerns Slow Processing of Clearances for Industry Personnel. GAO-06-748T. Washington, D.C.: May 17, 2006. DOD Personnel Clearances: Funding Challenges and Other Impediments Slow Clearances for Industry Personnel. GAO-06-747T. Washington, D.C.: May 17, 2006. DOD Personnel Clearances: Government Plan Addresses Some Long-standing Problems with DOD’s Program, But Concerns Remain. GAO-06-233T. Washington, D.C.: November 9, 2005. 
DOD Personnel Clearances: Some Progress Has Been Made but Hurdles Remain to Overcome the Challenges That Led to GAO’s High-Risk Designation. GAO-05-842T. Washington, D.C.: June 28, 2005. High-Risk Series: An Update. GAO-05-207. Washington, D.C.: January 2005. DOD Personnel Clearances: Preliminary Observations Related to Backlogs and Delays in Determining Security Clearance Eligibility for Industry Personnel. GAO-04-202T. Washington, D.C.: May 6, 2004. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Personnel security clearances allow individuals access to classified information that, through unauthorized disclosure, can in some cases cause exceptionally grave damage to U.S. national security. A sound requirements process to determine whether a national security position requires access to classified information is needed to safeguard classified data and manage costs. The DNI reported that more than 4.9 million federal government and contractor employees held or were eligible to hold a security clearance in 2012. GAO has reported that the federal government spent over $1 billion to conduct background investigations (in support of security clearances and suitability determinations--the consideration of character and conduct for federal employment) in fiscal year 2011. This testimony addresses policies and procedures executive branch agencies use when (1) first determining whether federal civilian positions require a security clearance and (2) periodically reviewing and revising or validating existing federal civilian position security clearance requirements. This testimony is based on a July 2012 GAO report (GAO-12-800), in which GAO (1) reviewed relevant federal guidance and processes, (2) examined agency personnel security clearance policies, (3) obtained and analyzed an OPM tool used for position designation, and (4) met with officials from ODNI and OPM because of their Directors' assigned roles as Security and Suitability Executive Agents. Because DOD and DHS grant the most security clearances, that report focused on the security clearance requirements of federal civilian positions within those agencies. In July 2012, GAO reported that the Director of National Intelligence (DNI), as Security Executive Agent, had not provided executive branch agencies clearly defined policy and procedures to consistently determine if a position requires a personnel security clearance. 
Absent this guidance, agencies are using an Office of Personnel Management (OPM) position designation tool to determine the sensitivity and risk levels of civilian positions which, in turn, inform the type of investigation needed. OPM audits, however, found inconsistency in these position designations, and some agencies described problems implementing OPM's tool. For example, in an April 2012 audit OPM assessed the sensitivity levels of 39 positions, and its designations differed from the agency's designations in 26 of those positions. Problems exist, in part, because OPM and the Office of the Director of National Intelligence (ODNI) did not collaborate on the development of this tool, and because their respective roles for suitability and security clearance reform are still evolving. As a result, to help determine the proper designation, GAO recommended that the DNI, in coordination with the Director of OPM, issue clearly defined policy and procedures for federal agencies to follow when determining if federal civilian positions require a security clearance. The DNI concurred with this recommendation. In May 2013, the DNI and OPM jointly drafted a proposed revision to the federal regulation on position designation which, if finalized in its current form, would provide additional requirements and examples of position duties at each sensitivity level. GAO also recommended that once those policies and procedures are in place, the DNI and the Director of OPM, in their roles as Executive Agents, collaborate to revise the position designation tool to reflect the new guidance. ODNI and OPM concurred with this recommendation and recently told GAO that they are revising the tool. GAO also reported in July 2012 that the DNI had not established guidance to require agencies to periodically review and revise or validate existing federal civilian position designations.
GAO reported that Department of Defense (DOD) and Department of Homeland Security (DHS) component officials were aware of the requirement to keep the number of security clearances to a minimum, but were not always required to conduct periodic reviews and validations of the security clearance needs of existing positions. GAO found that without such a requirement, executive branch agencies may be hiring and budgeting for initial and periodic security clearance investigations using position descriptions and security clearance requirements that do not reflect current national security needs. Further, since reviews are not done consistently, executive branch agencies cannot have assurances that they are keeping the number of positions that require security clearances to a minimum. Therefore, GAO recommended in July 2012 that the DNI, in coordination with the Director of OPM, issue guidance to require executive branch agencies to periodically review and revise or validate the designation of all federal civilian positions. As of October 2013, ODNI and OPM are finalizing revisions to the federal regulation on position designation. While the proposed regulation requires agencies to conduct a one-time reassessment of position designation within 24 months of the final rule's effective date, it does not require a periodic reassessment of positions' need for access to classified information. GAO continues to believe that periodic reassessment is important.
For many years, HUD has been the subject of sustained criticism for management and oversight weaknesses that have made it vulnerable to fraud, waste, abuse, and mismanagement. In 1994, we designated all of HUD’s programs as high risk because of four long-standing management deficiencies: weak internal controls; inadequate information and financial management systems; an ineffective organizational structure, including a fundamental lack of management accountability and responsibility; and an insufficient mix of staff with the proper skills. HUD undertook reorganization and downsizing efforts in 1993 and 1994, and its 2020 Management Reform Plan, announced in 1997, was the effort intended to finally resolve its managerial and operational deficiencies, among other things. HUD also said one of the purposes of its plan was to ensure HUD’s relevance and effectiveness into the twenty-first century. HUD’s 2020 Management Reform Plan was a complex and wide-ranging plan to change the negative perception of the agency by updating its mission and focusing its energy and resources on eliminating fraud, waste, and abuse in its programs. The reform plan presented two interrelated missions for HUD: (1) empower people and communities to improve themselves and succeed in the modern economy, and (2) restore public trust by achieving and demonstrating competence. With these two missions, HUD’s goals were to become more collaborative with its partners; move from process-oriented activities to an emphasis on performance and product delivery; and develop a culture within HUD of zero tolerance for waste, fraud, and abuse. As part of the 2020 plan, HUD was to refocus and retrain its staff to ensure it had the skills and resources where needed. HUD planned to reduce staffing from 10,500 at the end of fiscal year 1996 to 7,500 by fiscal year 2002 through buyouts, attrition, and outplacement services.
However, we found that the staffing target was not based on a systematic workload analysis, and we questioned whether HUD would have the capacity to carry out its responsibilities once the reforms were in place. HUD reduced staffing to about 9,000 full-time positions by March 1998, when the downsizing effort was terminated. During fiscal year 1999, HUD substantially completed its reorganization under the 2020 Management Reform Plan. In September 2000, we testified on HUD’s progress in addressing its major management challenges as it tried to transform itself from a federal agency whose major programs were designated “high risk.” In January 2001, we recognized that HUD’s top management had given high priority to implementing the 2020 Management Reform Plan. Considering HUD’s progress toward improving its operations through the management reform plan and consistent with our criteria for determining high risk, we reduced the number of programs deemed to be high risk from all HUD programs to two of its major program areas—single-family mortgage insurance and rental housing assistance. In October 2001, we reported that HUD had some successes in implementing its major 2020 management reforms, but we also identified challenges that remained. We reported that some initiatives, such as consolidating and streamlining operations in new centers, had produced results; other efforts, such as improving efficiency and accountability, had been hampered by inefficient distribution of workload and other issues. Overall, we identified strategic human capital management—of which workforce planning, recruiting, and hiring are significant components—as the most pressing management challenge facing HUD. Concerned about HUD’s approach to using staff, Congress asked the National Academy of Public Administration (NAPA) to evaluate HUD’s ability to develop staffing requirements based on meaningful measures and received a NAPA report on the issue in 1999. 
NAPA recommended that HUD adopt a management approach that bases staff estimates and allocations on the level of work and the specific location where it is to be performed. HUD made a commitment to implement this recommendation by developing its REAP in consultation with NAPA. In September 2000, the HUD IG expressed concern that the implementation of REAP had not progressed with the urgency that would have been expected for a priority status project. The human capital management challenges that HUD faces are a concern across the federal government. GAO, OMB, and the Office of Personnel Management (OPM) have challenged agencies to acquire and develop staffs whose size, skills, and deployment meet agency needs and to ensure leadership continuity and succession planning. Last year, we added strategic human capital management to our list of high-risk government programs as an area that needs attention to ensure that the national government functions in the most economic, efficient, and effective manner possible. Several of the key challenges we identified were directly related to workforce planning, recruiting, and hiring. Three of the four “human capital cornerstones” that we identified in our Model of Strategic Human Capital Management relate directly to the challenges at HUD that this report examines. 
These cornerstones are as follows: leadership commitment to human capital management and recognition that people are important enablers of agency performance; strategic human capital planning in which the human capital needs of the organization and new initiatives or refinements to existing human capital approaches are reflected in strategic workforce planning documents, and decisions involving human capital management and its link to agency results are routinely supported by complete, valid, and reliable data; and acquiring, developing, and retaining talent using strategies that are fully integrated with needs identified through strategic and annual planning and that take advantage of appropriate administrative actions available under current laws, rules, and regulations. In 2001, as part of the President’s management agenda for improving the government’s performance, OMB did a baseline evaluation of executive branch agencies’ performance in five major management categories, including human capital management. It scored 26 executive branch agencies as achieving green, yellow, or red levels of performance in each management dimension. For human capital management, no agency received a green status, which would have indicated that it had met all core criteria. Three of the 26 agencies evaluated received a yellow status, indicating the achievement of some, but not all, of the core criteria; and 23 agencies, including HUD, received red status, indicating that they had one or more major deficiencies in human capital management. HUD currently has a staff of about 9,000 to meet its mission of promoting adequate and affordable housing, economic opportunity, and a suitable living environment free from discrimination. To meet this mission, HUD has outlined the following eight strategic goals: Make the home-buying process less complicated, the paperwork less demanding, and the mortgage process less expensive. Help families move from rental housing to homeownership. 
Improve the quality of public and assisted housing and provide more choices for their residents. Strengthen and expand faith-based and community partnerships that enhance communities. Effectively address the challenge of homelessness. Embrace high standards of ethics, management, and accountability. Ensure equal opportunity and access to housing. Support community and economic development efforts. HUD’s PIH office plays a major role in administering HUD’s affordable rental housing programs. PIH has identified five activities to meet its mission of ensuring safe, decent, and affordable housing; creating opportunities for residents’ self-sufficiency and economic independence; and ensuring fiscal integrity by all program participants. These mission-related activities are listed in figure 1. PIH is responsible for oversight of the public housing program that serves about 1.2 million low-income households and the housing voucher program that serves about 1.8 million low-income households. (See fig. 2.) Public housing authorities administer both programs. Because tenants’ rents typically do not cover the cost of operating public housing, PIH administers subsidies, vouchers, and other federal payments to more than 3,000 local public housing authorities. PIH also provides the housing authorities with oversight, monitoring, and technical assistance in planning, developing, and managing public housing, and intervening if problems arise with public housing authorities’ delivery of services. HUD also provides funds to housing authorities for major modernization projects through the Capital Fund Program that PIH administers. Although HUD has started to do workforce planning and has identified the resources required to do its current work, it does not have a comprehensive strategic workforce plan that identifies the knowledge, skills, and abilities it needs to build its workforce for the future. 
HUD has done a detailed analysis of its potential losses of staff to retirement; but without a complete workforce plan, HUD is not fully prepared to recruit and hire staff to pursue its mission. In the interim, HUD has begun to hire interns whom it hopes can be trained to fill positions that are likely to be affected by upcoming retirements. Workforce planning steps HUD has taken thus far include completion of a detailed analysis of HUD’s potential staff losses due to retirement and the REAP, which estimates the staff needed to handle the current workload in each office. HUD has analyzed data on retirement eligibility by component office, position, and grade level. Among its findings is that by August 2003, half of its workforce in General Schedule (GS) Grades 9 through 15 will be eligible to retire. Figure 3 shows retirement eligibility by grade level. The REAP study reviews staffing levels by component office and the tasks that staff in various job classifications are assigned. On an office-by-office basis, the REAP study looked at the number of staff on board and assigned a staff ceiling—the number of staff needed for that office based on the work the office is currently performing—and then calculated the resources required to do the work. The REAP also provides a framework for periodic validation of the data. Figure 4 compares the REAP estimated needs for major HUD offices with the staff on board as of September 30, 2001. The compilation of data on retirement eligibilities and the REAP study are important first steps for HUD toward strategic human capital planning, but additional workforce planning steps are necessary. 
REAP has collected valuable information about staff levels and workload, but HUD has not done a comprehensive strategic workforce plan that includes an analysis of successes and shortcomings of existing human capital approaches; the work that staff should be doing, based on broad thinking about how the mission should change over the next decade; knowledge, skills, and abilities needed by staff to do this work; the capabilities of current staff; gaps in skills, competencies, and development needs and the links between strategies for filling these gaps and mission accomplishment; recruiting and hiring requirements necessary to fill the gaps; and the resources required and milestones for implementation. In its 2001 baseline evaluation of HUD’s human capital management, completed as part of the President’s management agenda for improving the government’s performance, OMB identified the following deficiencies at HUD: skill gaps across the department; an inability to sustain a high-performing workforce that continually improves in productivity, to strategically use existing personnel flexibilities, tools, and technology, and to implement succession planning; and human capital that is not aligned to support HUD’s mission, goals, and organizational objectives. In response, HUD issued a human capital strategic management plan in February 2002 that summarizes its plans to address the deficiencies OMB identified. The plan focused on specific goals, including reducing the number of HUD managers and supervisors and GS 14 and 15 positions; expanding personnel flexibilities, such as transit subsidies and telecommuting; and providing employee training and development to fill skill gaps. 
However, as of June 2002, the plan was not comprehensive enough to fully address the deficiencies outlined by OMB or the broader elements of workforce planning that we have endorsed, which would involve looking carefully at what work staff should be doing now and in the future, planning for training and other staff development, and recruiting and hiring to build the workforce needed to accomplish its mission in the future. Without a comprehensive strategic workforce plan, HUD is not fully prepared to recruit and hire staff to pursue its mission. We have noted that federal agencies faced with growing retirement eligibilities may have difficulty replacing the loss of skilled and experienced staff. We found that high-performing organizations address this human capital challenge by identifying their current and future needs—including the appropriate number of employees, the key competencies for mission accomplishment, and the appropriate deployment of staff across the organization—and then create strategies for identifying and filling the gaps. According to HUD officials, in light of the pending retirements, HUD is faced with a need for a large-scale recruiting and hiring effort because it has done little outside hiring in more than 10 years. Some vacant positions have gone unfilled; others have been filled through lateral transfers, promotions, or the upward mobility of administrative staff into professional positions. As one manager put it, “all we are doing is stealing from one another.” As a first step in the recruiting and hiring effort, in April 2001, the Human Resource Office proposed a strategy for a HUD intern program that would recruit interns at experience levels ranging from some high school to completion of graduate or professional degrees. 
The program is designed to bring on new staff at support or entry levels (GS 5, 7, 9, and 11 for legal interns)—current students or people who have earned high school, college, graduate, or professional degrees that qualify them for entry-level positions. According to HUD officials, the internship program is a way to begin bringing new staff into HUD who could be trained to take over higher level positions as retirements occur. The largest component of the program is the HUD career internship program. Candidates who perform successfully for 2 years as HUD career interns, completing rotations in various parts of the organization, will be offered career professional positions with HUD. An official said that no HUD career interns were hired in fiscal year 2001, the program’s first year. However, the program is in full operation this year. The official said HUD hopes to hire 140 HUD career interns and up to 60 interns in other components of the program by the end of fiscal year 2002. As of June 2002, 64 interns had been hired or accepted offers from HUD. The HUD internship program may be a good long-term approach for HUD as interns are converted to permanent positions and move up the career ladder. However, it does not help HUD to bring on board midcareer level employees, although its demographic analysis shows the greatest retirement eligibility is for employees in grades 13-15. (See fig. 3.) A Partnership for Public Service report in February 2002 looked at midcareer retirements and recruiting strategies governmentwide. 
It found that “the impending wave of federal employee retirements will have a disproportionately large impact on the mid-career ranks (GS Grades 12–15) in government,” and that “after a decade of downsizing in the federal workforce, there will likely be an insufficient number of well-qualified internal candidates to replace the retirees.” On the basis of these findings, the Partnership for Public Service recommended that the federal government expand its midlevel hiring practices to include nonfederal candidates more frequently and suggested strategies for doing so, including advertising federal jobs and their benefits more broadly to targeted audiences and removing barriers to the hiring process that unnecessarily limit vacancies to current federal employees. In assessing how they believe workforce planning issues affect PIH’s ability to meet its mission, PIH managers and staff we interviewed reported that the lack of a comprehensive workforce plan makes it difficult for them to accomplish several PIH mission-related activities and provide service to their customers. The workforce planning issue of greatest concern for these PIH managers and staff is staffing shortages. The staffing shortages are exacerbated by skill gaps and uncertainties about what work should be done and the best mix of staff knowledge, skills, and abilities to do it. Directors of several public housing and Native American field offices said that staffing shortages prevent them from providing the level of oversight and technical assistance that the housing authorities need. As shown in figure 5, the field offices were, as of September 2001, staffed at less than 90 percent of the REAP-recommended staffing levels. 
As a result of these staffing shortages, the directors said that they are not able to accomplish PIH’s goals of providing effective oversight and technical assistance; acting as an agent of change; and forming problem-solving partnerships with its clients, residents, communities, and local government leadership. (See fig. 1.) Even with staffing shortages, the field office directors we interviewed said that they were meeting the goal of using risk assessment techniques to focus oversight efforts. In June 2002, PIH officials said that some new hiring in field offices had moved the numbers of staff on board closer to REAP-recommended ceilings. We received the following comments from directors of a public and a Native American housing field office on how staffing shortages sometimes had a negative impact on their ability to contribute to PIH’s goals: We never have enough time to do all of the technical assistance that needs to be done. We are responsible for providing oversight and technical assistance to 38 public housing authorities, including small offices that require greater assistance than the larger, better-staffed and equipped offices. We generally visit about 25 public housing authorities a year to conduct oversight reviews and provide technical assistance. We used to have a set cycle on which all of our housing authorities received visits, but current workload and staffing levels do not allow the time. Staff we interviewed in field offices and centers provided specific examples of work that they could not complete or complete in a timely manner because of staffing shortages. The work included prompt response to correspondence from customers that required research of laws and regulations, writing program regulations and guidance, tracking audit findings to ensure that corrective actions were taken by housing authorities, and closing out files on completed projects. 
One staff member who was hired to help meet the goal of building community partnerships with active outreach efforts said he had been used instead “to do whatever needs doing the most at the moment, including information systems management, managing grants applications, and doing compliance reviews.” A grants manager described the impact of staffing shortages on her workload and her customers as follows: When tribal housing office staff call with questions, I sometimes only have enough time to refer them to a handbook page to read. As a result, the plans submitted to us need more rework than they would have if we could have spent the time to be more helpful on the front end. Staffing shortages and workload imbalances have prevented us from having the chance to really improve customers’ operations. Six of the seven field office and center managers we interviewed agreed that the workloads in their offices were much more or somewhat more than could be handled at current staffing levels. Twenty of the 34 professional staff we interviewed at PIH locations around the country described their workloads as somewhat or much more than they could handle during normal business hours. Fourteen of the 18 public housing revitalization specialists and Office of Native American Programs grants management and evaluation specialists—the PIH staff who are first-line contacts with public housing authority staff—described their workloads as somewhat or much more than they could handle. Two of these staff said that they were too new to their positions to assess the workload, and two staff said the workload was about right. Three directors of public housing and Native American program field offices said that they have skill gaps in their offices that exacerbate the staffing shortages they are experiencing. 
Among the areas where they said expertise is lacking are facilities management; demolitions; real estate development; and financing, particularly mixed financing using public and private funding to develop housing. One director noted “We do not have a level of expertise here that could be defined as ‘highly skilled.’ I would say that my staff has about three-fourths of the knowledge we need.” Moreover, most of the field office directors we interviewed said that they expect the skill gaps to worsen over the next several years because of retirements of knowledgeable staff. Almost half of all PIH staff and over half of PIH staff in such positions as public housing revitalization specialist, financial analyst, and Native American program administrator are projected to be eligible to retire by August 2003. The following are comments we received from managers and staff in two field offices: The youngest professional staff person here is 48 years old, and the average age is 52. Almost all of our staff will be eligible to retire in the next 3 to 5 years. Fourteen of our 31 staff could retire within 5 years. The impact could be horrible, in terms both of the number of bodies to do the work and the brain drain of knowledge, skills, and abilities that take years to develop. It takes a long time to become good at interacting effectively with our tribal communities. Interviews with managers and staff of PIH offices also identified uncertainties about what work should be done and the best mix of staff knowledge, skills, and abilities to do it. For example, all of the directors of public housing and Native American program field offices we interviewed said that they used risk assessment techniques to focus oversight. However, some managers and staff in field offices said they were uncertain about the appropriate level of monitoring and technical assistance to provide to their customers. 
PIH offices had no standard methods of assigning levels of technical assistance and oversight based on risk. One manager noted that each field office develops an annual monitoring plan based on projections of what can be accomplished with the staff on board. Although practical considerations require this type of planning, more comprehensive, forward-looking workforce planning discussions are necessary to deal with questions on the desirable level of monitoring and technical assistance to ensure that housing authorities use HUD funds to provide the best possible service to public housing residents and other customers. Strategic workforce planning is a major challenge for HUD. We have found that high-performing organizations address this human capital challenge by identifying their current and future needs—including the appropriate number of employees, the key competencies for mission accomplishment, and the appropriate deployment of staff across the organization—and then create strategies for identifying and filling the gaps. Because HUD has not addressed all of these elements of strategic workforce planning, it does not know what work its staff should be doing now and in the future to meet its strategic goals; what knowledge, skills, and abilities its staff needs to do this work; the capabilities of the current staff; what gaps exist in skills, competencies, and developmental needs; and what its recruitment and hiring strategy should be. Without a comprehensive workforce plan, HUD is not fully prepared to recruit and hire the people it needs to pursue its mission—an issue made critical by its estimate that about half of its professional staff and nearly 60 percent of its highest-graded GS employees will be eligible to retire by August 2003. 
We are recommending that the Secretary of HUD develop a comprehensive strategic workforce plan that is aligned with its overall strategic plan and identifies the knowledge, skills, and abilities HUD needs and the actions that it plans to take to build its workforce for the future. In commenting on a draft of this report, the HUD Assistant Secretary for Administration said that HUD recognizes the need for additional workforce planning, as we recommended, and did not disagree with our report. She also provided information on several HUD efforts to address the elements of a comprehensive workforce plan that we discussed in our report. For example, she said that HUD has established a Human Capital Management Executive Steering Committee, consisting of representatives from all HUD program areas, to develop a five-year strategic plan to focus on human capital issues. She also said that the HUD Training Academy started several initiatives to support workforce planning, including leadership and development training for new supervisors, aspiring supervisors, and managers. In addition, according to the Assistant Secretary for Administration, HUD is in the process of completing an effort to redeploy field office staff so they are in positions where their skills can best be used to meet program needs. HUD’s comments are reprinted in appendix II. To determine how HUD uses workforce planning to guide recruiting and hiring, we analyzed documentation and interviewed officials. Our documentation analyses included our prior reports; NAPA studies; REAP results; HUD strategic plans, budget justifications, and workforce planning reports; and HUD IG reports. We interviewed headquarters PIH and Human Resource officials. To determine how PIH managers and staff believe workforce planning issues affect PIH’s ability to meet its strategic goals, we analyzed strategic planning documents and interviewed PIH managers at HUD headquarters. 
We pretested and conducted structured interviews with managers and staff at four PIH field locations: public housing offices in Philadelphia, PA; Jacksonville, FL; and San Francisco, CA; and an office of Native American programs in Phoenix, AZ. We also visited several PIH-directed centers that HUD established beginning in 1997 as part of its 2020 management reform effort to consolidate operations that had previously been done in HUD field offices. Centers we visited were the Grants Management and Financial Management Centers in Washington, D.C.; and a Troubled Agency Recovery Center in Cleveland, OH. In consultation with PIH’s acting directors of field operations and Native American programs, we judgmentally selected the offices we visited to include a mix of geographical locations, office sizes, and types of work performed. At each of the locations, we interviewed professional employees who were from six professional job classifications and were available to talk with us. The results of our interviews cannot be generalized to PIH overall. Table 1 lists the professional positions from which we selected staff to interview in PIH field offices and centers and describes some of their duties. We did our work between September 2001 and July 2002 in accordance with generally accepted government auditing standards. As arranged with your office, we are sending copies of this report to the Secretary, Department of Housing and Urban Development. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please call me at (202) 512-2834. Key contacts and major contributors to this report are listed in appendix III. 
In addition to those individuals named above, Deborah Knorr and Gretchen Pattison made key contributions to this report.
Looming retirements during the next 5 years at the Department of Housing and Urban Development (HUD) have brought the need for workforce planning to the forefront. HUD has done some workforce planning and has determined how many staff it needs to meet its current workload, but it does not have a comprehensive strategic workforce plan to guide its recruiting, hiring, and other key human capital efforts. Workforce planning steps taken include a detailed analysis of HUD's potential staff losses and completion of HUD's resource estimation and allocation process, which estimates the staff needed to handle the current workload in each office. Some of the Public and Indian Housing (PIH) managers and staff reported that the lack of workforce planning makes it difficult to accomplish mission-related activities and provide customer service. The issue of greatest concern for PIH managers and staff is the staffing shortage. Because HUD lacks a comprehensive strategic workforce plan, some PIH managers and staff were uncertain about what work should be done and the best mix of staff knowledge, skills, and abilities to do it.
According to FCC, as of June 2001, just over 86 percent of television households purchased a subscription television service, as opposed to relying solely on free, over-the-air broadcast television. Of these subscription households, 78 percent received their service from a franchised cable operator while 18 percent received their service from a DBS company. DBS historically has been popular in rural areas where cable service is unavailable to many households. Until a few years ago, there was a significant difference between the programming packages of cable and DBS: cable systems could offer the local broadcast channels, while DBS companies generally could not because of technological limitations and legal constraints. In 1999, following advances in satellite technologies, Congress enacted the Satellite Home Viewer Improvement Act to, among other things, allow DBS companies to offer local broadcast channels via satellite. Today, EchoStar and DirecTV, the two primary providers of DBS services, each offer local broadcast channels to their subscribers in about 45 of the 210 television markets in the United States. DBS and cable also compete for subscribers to their broadband Internet access services. Many cable companies have recently upgraded their cable systems and now offer a selection of digital services, including cable modem Internet access. Cable modem service is generally considered one of the fastest methods for home Internet access and is currently the most popular broadband service. DirecTV offers a two-way satellite Internet access service called DirecWay. Few consumers subscribe to the current satellite Internet service, although future satellite Internet access technologies are expected to be faster and more competitive with cable modems. Each DBS company is inherently limited in the number of programming channels and other services it can provide by the technical capacity constraints of its satellite fleet. 
Each satellite contains a certain number of transponders, or relay equipment, and each transponder can transmit a limited amount of information (i.e., video, audio, and data). DBS companies have increased the capacity of their satellites through various technologies, such as digital compression and frequency reuse. Compression technologies conserve capacity by reducing the number of bits required to send digital information. For example, when transmitting video programming, compression eliminates the transmission of identical bits from frame to frame. Frequency reuse allows different programming to be transmitted over the same frequencies in different geographic areas. This is accomplished through the use of “spot beam” satellites that, rather than transmitting a signal nationwide, transmit to specific cities or other smaller geographic regions. As long as spot beams using the same frequency are at least a certain distance apart, interference among signals is avoided. Both digital compression and frequency reuse technologies have steadily improved since the launch of DBS in 1994. Satellite companies are also constrained by the number of orbital slots available for DBS services. Currently, DirecTV and EchoStar have the rights to all of the allocated frequencies at the three full-CONUS (i.e., the satellite footprint covers the entire contiguous United States) DBS orbital slots. In October 2001, the two DBS companies signed an agreement wherein EchoStar would merge with DirecTV. One of the main arguments the companies put forth in support of the merger is that it would enable them to offer local broadcast channels to subscribers in all 210 television markets, something the companies say they cannot do independently. The companies have stated that their main competitor is cable—not each other—and that the ability to carry all local broadcast channels will make DBS a stronger competitor to cable systems. 
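The capacity constraints described above reduce to simple throughput arithmetic: compression determines how many channels fit within a satellite's raw capacity, and spot-beam frequency reuse multiplies the slots available for geographically targeted programming. The sketch below illustrates this relationship; every figure in it (transponder count, per-transponder throughput, per-channel bitrate, reuse multiplier) is an assumption chosen for illustration, not an actual DBS specification.

```python
# Back-of-the-envelope DBS capacity arithmetic.
# All figures are assumed values for illustration, not actual satellite specs.
transponders = 32           # assumed transponders across a satellite fleet
mbps_per_transponder = 40   # assumed usable throughput per transponder (Mbps)
mbps_per_channel = 4        # assumed bitrate of one compressed video channel (Mbps)
reuse_factor = 2            # assumed effective multiplier from spot-beam reuse

# Compression sets how many national channels fit in the raw throughput...
national_channels = transponders * mbps_per_transponder // mbps_per_channel
# ...and frequency reuse multiplies the slots usable for local, region-specific channels.
total_channel_slots = national_channels * reuse_factor

print(national_channels, total_channel_slots)  # prints 320 640
```

The arithmetic makes clear why better compression (lowering the per-channel bitrate) and more aggressive spot-beam reuse have been the two levers for expanding local-channel carriage without additional orbital slots.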
Opponents of the merger have stated that the companies could individually offer many more, if not all, local broadcast channels if they chose to do so and that the merger would create a monopoly in DBS service provision, which is of particular concern to rural consumers who do not have access to a cable system. The proposed merger is under review by Justice. FCC recently announced that it had declined to approve the proposed merger, although DirecTV and EchoStar have 30 days to file an amended application and a petition to delay the hearing. Congress has held several hearings on the matter. In our random telephone survey of consumers, we asked all of our survey respondents if, when thinking about purchasing television programming, the availability of cable modem Internet service would make them more likely to choose cable video service over satellite video service (see fig. 1). Fifty-one percent of those responding said “not more likely” while 16 percent said “much more likely.” We also asked all of our survey respondents (excluding those few with satellite Internet access) if they had considered purchasing Internet service through a satellite provider; 88 percent said they had not. As shown in figure 1, almost one-third of respondents said that the availability of cable modem service was “moderately more likely” or “much more likely” to make them choose cable over satellite service. We also found the following: Respondents with higher household incomes were more likely to say that the availability of cable modem Internet access would influence their decision to buy cable video service. Respondents who were younger (from 18 to 34 years old) were more likely than older respondents to say that the availability of cable modem Internet access would influence their decision to buy cable video service. 
In addition to asking all respondents about the impact of Internet access on their video service decisions, we asked respondents who had begun purchasing or considered purchasing either cable or DBS service within the past 2 years to rate various reasons why they considered or purchased these services (see fig. 2). Of those who began purchasing or considered purchasing cable, 61 percent said the availability of cable modem service was “not a reason” in their consideration or purchase of cable video programming services, although approximately one-fifth said cable modem service was a “major reason” for considering cable. The responses from those who had begun purchasing or considered purchasing DBS within the past 2 years were similar: 64 percent said satellite Internet access service was not a reason for consideration of DBS video services while 12 percent said it was a major reason. Other factors appeared to be important in consumers’ consideration of video providers. Fifty-seven percent of cable respondents and 61 percent of DBS respondents said that a major reason for selecting or considering a video services provider was that they wanted more channels than they were receiving. Those who recently selected or considered cable also rated highly the ability to get local broadcast channels from the cable company and a better signal quality. Those who recently selected or considered DBS often reported that they considered satellite service because they believed DBS was cheaper than cable and because DBS offered special rates or promotions. According to our econometric model, the provision of local broadcast channels by DBS companies is associated with significantly higher DBS penetration rates. Specifically, our model results indicate that in cable franchise areas where consumers can receive local channels from both DBS providers, the DBS penetration rate is approximately 32 percent higher than in areas where consumers cannot receive local channels via satellite. 
Thus, in areas where the DBS companies offer local channels, it appears that DBS is more effectively able to compete for subscribers. In addition to using an econometric model to study the competitive impact of DBS provision of local channels, we also examined the growth in the number of DBS subscribers between 1998 and 2001. This analysis was based on the percentage change in the number of DBS subscribers in almost all zip codes throughout the country. We found that in areas where both DBS companies introduced local broadcast channels, DBS subscribership grew by approximately 210 percent over this time period, while in areas where local channels were not available, it grew by 174 percent in the same time frame. Our model results do not indicate that the provision of local broadcast channels by DBS companies is associated with lower cable prices. In contrast, the presence of a second cable franchise (known as an overbuilder) does appear to constrain cable prices. In franchise areas with a second cable provider, cable prices are approximately 17 percent lower than in comparable areas without a second cable provider. Finally, we found that the provision of local broadcast channels by DBS companies is associated with nonprice competition. In areas where both DBS companies provide local channels, our model results indicate that cable companies offer subscribers approximately 6 percent more channels. This result indicates that cable companies are responding to DBS provision of local channels by improving their quality, as reflected by the greater number of channels. In our July 2000 report, we also found that cable companies responded to DBS competition by increasing the number of channels. In 1999, the Satellite Home Viewer Improvement Act provided DBS companies with the legal right to provide local broadcast station programming. 
To date, DirecTV and EchoStar have each introduced local broadcast service in about 45 markets, although DirecTV plans to offer local channels in about 70 markets and EchoStar plans to offer local channels in about 50 markets. However, providing local channels uses a satellite’s transmission capacity—a limited resource on each satellite. Thus, there is an important trade-off that DBS companies face in deciding how many markets to target for local service. As DBS companies roll out local channels in more markets, satellite capacity that could otherwise have been used to provide services to all subscribers (such as national cable networks or interactive services) would be used to offer local channels to select groups of subscribers. The two DBS companies have stated that one of the reasons they want to merge is to engender economies in the provision of local broadcast channels. In particular, the companies have stated that if they merge, they will, as a combined entity, have sufficient capacity to provide local broadcast programming in all 210 television markets and add new services, while continuing to provide their current number of cable programming channels. Several opponents of the merger contend that each of the DBS companies on its own has sufficient capacity to expand the provision of local broadcast channels into even more, if not all, television markets. Key assumptions about the technical capabilities of the DBS companies’ satellite fleets varied among those with whom we spoke. Opponents of the merger made assumptions about key technical factors—such as frequency reuse capability and advances in digital compression technologies—that were optimistic. The DBS companies held more conservative views about the technical capabilities of their fleets today and considered some possible enhancements to be based on technologies that are not currently available to them nor proven in terms of quality. 
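The capacity trade-off described above can be illustrated with a rough back-of-the-envelope budget. All figures below (transponder count, bit rates, channels per market, and the spot-beam reuse factor) are hypothetical values chosen only to show the arithmetic; they are not taken from either company's actual fleet.

```python
# Hypothetical capacity budget for a DBS fleet -- all numbers are illustrative.
transponders = 32                 # transponders available at a full-CONUS slot
mbps_per_transponder = 30.0       # usable throughput after digital compression
mbps_per_sd_channel = 3.0         # bit rate of one standard-definition channel

total_channel_slots = transponders * mbps_per_transponder / mbps_per_sd_channel

markets_served = 42               # television markets receiving local channels
channels_per_market = 6           # hypothetical average local stations carried
reuse_factor = 4                  # spot-beam frequency reuse across distant markets

# Local carriage consumes capacity, mitigated by spot-beam frequency reuse;
# whatever remains is available for national cable networks and other services.
local_slots_used = markets_served * channels_per_market / reuse_factor
national_slots_left = total_channel_slots - local_slots_used

print(total_channel_slots, local_slots_used, national_slots_left)
```

Under these assumed numbers, each additional market served subtracts linearly from national capacity unless compression or frequency reuse improves, which is the essence of the business decision the report describes.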
We found that some of the assumptions of the merger opponents focused on potential capabilities that could not be readily incorporated into satellites already deployed and that would involve substantial replacement of consumers’ DBS equipment. Our examination of various documents related to the two DBS companies’ satellite capacity indicates that—given current technologies and deployed assets—neither company would individually be able to offer all of the local broadcast channels in all 210 television markets while simultaneously maintaining a competitive national subscription television service. Were either company to offer local channels in all 210 markets today, it would have to use much more of its current capacity for local channels, thus reducing its ability to offer the large numbers of national cable networks, pay-per-view channels, and other services that each company currently provides. This would compromise the competitiveness of a DBS company with cable. In the long term, however, with the launch of additional satellites and the deployment of or transition to new technologies, both DBS companies could choose to provide local channels in more television markets than they currently plan to serve. Of course, these decisions would involve weighing the cost of such satellites or new technologies against the number of projected additional subscribers and other benefits that increased local broadcast offerings would bring to DBS. That is, the decision of whether to introduce more local channels is essentially a business decision. Whether the benefits would outweigh the costs for the individual companies to roll out local channels in all 210 television markets is not clear. Finally, it is also not clear how the transition of all local broadcast stations from analog to digital television (DTV) technologies will affect the offering of local broadcast channels by DBS companies. 
The broadcast DTV transition is under way and will eventually culminate in the discontinuation of all analog broadcast signals. The DTV transition allows broadcast stations to provide high definition (HD) television signals—that is, a sharper television picture with roughly twice the lines of resolution of traditional analog pictures. However, even with digital compression technologies, the transmission of HD signals takes up far more satellite capacity than the transmission of traditional analog signals. If many of the roughly 1,600 broadcast stations across the country provide HD signals at the end of the digital transition (when the analog signals have been discontinued), it will take considerably more satellite capacity to provide the signals of the digital stations than it currently takes to provide the signals of the analog stations. At the same time, the DTV transition may take several years, during which time advances in satellite technologies might mitigate this need for increased capacity. Nonetheless, at this time, the DBS companies’ business decisions about local digital broadcast carriage at the completion of the DTV transition are unclear. We provided a draft of this report to FCC and Justice for their review and comment. FCC staff provided minor technical comments that were incorporated as appropriate. Both FCC and Justice declined to comment on the substance of our report due to the merger proceedings. Letters from FCC and Justice are included in appendixes IV and V, respectively. As agreed with your offices, unless you publicly release its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will provide copies to interested congressional committees; the Assistant Attorney General, Antitrust Division, Department of Justice; the Chairman, FCC; and other interested parties. We will also make copies available to others upon request. 
In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-2834 or guerrerop@gao.gov. Key contacts and major contributors to this report are listed in appendix VI. To provide information on the impact of the availability of cable modem Internet access on consumer video service choice, we contracted with Opinion Research Corporation (ORC), a national research firm, to include questions on three of its national telephone surveys. The survey contained a set of 14 questions that asked people about their television and Internet use (e.g., how they access the Internet from their home) as well as questions designed to gauge the importance of receiving Internet service and video service from the same provider. The questions and response options were read to the respondents. A total of 3,000 adults in the continental United States were interviewed between May 23 and June 2, 2002. The population was taken from the contractor’s random-digit-dialing sample of households with telephones, stratified by region. In order to use the survey results to make estimates about the entire population 18 years and older in the continental United States, ORC weighted the responses to represent the characteristics of all adults in the general public according to four variables: age, gender, geographic region, and race. Because our results are from a sample of the population, the resulting estimates have some sampling errors associated with them. Sampling errors are often expressed as a margin of error at a given confidence level. The percentage estimates we present in this report have a 95 percent confidence interval of plus or minus 5 percentage points or less. The practical difficulties of conducting any survey may introduce nonsampling errors. 
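The "plus or minus 5 percentage points or less" bound quoted above can be checked with the standard formula for the margin of error of an estimated proportion at the 95 percent confidence level. The sketch below uses the survey's sample sizes from this appendix; it is a simple-random-sampling approximation that ignores any design effect from ORC's stratification and weighting.

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for an estimated proportion p from n responses."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst case (p = 0.5) for the full sample and for the n=785 cable subgroup
full_sample = margin_of_error(0.5, 3000)    # about 1.8 percentage points
cable_subgroup = margin_of_error(0.5, 785)  # about 3.5 percentage points
print(round(full_sample, 3), round(cable_subgroup, 3))
```

Both values fall under the 5-percentage-point ceiling the report states, even for the smaller subgroup samples.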
As in any survey, differences in the wording of questions, the sources of information available to respondents, or the types of people who do not respond can affect results. We took steps to minimize nonsampling errors. For example, we developed our survey questions with the aid of a survey specialist and pretested the survey questions before submitting them to ORC. We developed an econometric model to examine the influence of direct broadcast satellite (DBS) companies’ provision of local broadcast channels, among other factors, on cable prices and the DBS penetration rates in a large sample of cable franchise areas across the country in 2001. In 2000, we developed a similar econometric model to examine the impact of DBS penetration rates on cable prices. In this report, we extended the previous econometric model by adding new variables to account for the recent emergence of local broadcast channels via satellite. In particular, this model sought to determine whether and how two categories of key factors affected cable prices and DBS penetration rates: (1) factors that relate to subscribers’ demand for cable and DBS services and the companies’ costs of providing service and (2) factors that relate to the degree of competition in the market. The availability of local channels via satellite is one variable included in the model that can influence both subscribers’ demand for DBS service and the competitiveness of the market. We discussed the development of our model with the Federal Communications Commission (FCC), the Department of Justice (Justice), and several industry trade groups. There are some important limitations to the interpretation of our model results. Generally, econometric models measure statistical relationships between explanatory factors and the factor to be explained and do not imply causation between these factors. Also, some specific limitations of our model relate to the characteristics of the sample of cable franchise areas chosen by FCC. 
We performed our statistical analysis on a sample of 722 cable franchise areas included in a yearly survey conducted by FCC. The survey included a sample of “competitive” franchise areas (as defined under statute) and a sample of “noncompetitive” franchise areas, selected within several size classifications (or “strata”). Although FCC conducts the survey annually, a different set of cable franchises reports each year because the sample of franchises is redrawn. Since data were not available for every cable franchise for several continuous years, we conducted a cross-sectional analysis, which gave us an observation from 722 different cable franchises at a single point in time. The cross-sectional analysis would not allow us to examine dynamic changes that occur through time, such as the influence of an increasing DBS penetration rate on cable prices. Rather, we were limited to describing the nature of the subscription video market in a single time period, namely 2001. However, certain limited analyses were conducted that incorporated a time-series element. Appendix III contains (1) a complete discussion of the model development, data sources, estimation design, and model results and (2) a table of descriptive statistics for all variables included in the model. The following results are based on the responses to a random telephone survey of 3,000 adults, age 18 and older, in the continental United States. After each question, the number of respondents (n) is noted. Percentages may not add to 100 percent because of rounding. Question 1: What method is currently used for viewing on the main television in your home? 
(n=3,000)
Over the air, through an antenna
Direct broadcast satellite, such as DirecTV or EchoStar’s DISH Network, for all your channels
Direct broadcast satellite for all channels except local broadcast channels
Big dish, C-band satellite
You don’t own a television
Other (Specify)
[If respondent answered “you don’t own a television,” “other,” or “don’t know,” the survey was ended for that respondent.]

Question 2: [Only asked of those who answered “over the air,” “direct broadcast satellite,” or “C-band satellite” in question 1.] Have you considered purchasing cable service for your main television viewing within the past 2 years? (n=1,018)

Question 3: Did you begin subscribing to your current cable provider within the past 2 years? (n=1,854)

Question 4: What method did you previously use for your main television viewing? (n=555)
Over the air, through an antenna
Other (Specify)

Question 5: [Only asked of those who answered “yes” to question 2 or question 3.] I am now going to read you a list of reasons that someone may think of when purchasing cable service. For each of these, please tell me if it was a major reason, a minor reason, or not a reason in why you considered or purchased cable service. Again, please rate each of these as a major reason, a minor reason, or not a reason.

Question 5a: Because your area cable company offered special rates or other promotions, such as free installation or 3 months free. (n=785)

Question 5b: Because you wanted more channels than you were receiving. (n=785)

Question 5c: Because you wanted to purchase special features (like sports packages, pay-per-view, or movie options). (n=785)

Question 5d: Because you heard or saw that the picture and audio quality with cable was better than you were receiving. (n=785)

Question 5e: Because you were interested in receiving high definition television channels. (n=785)

Question 5f: Because you thought that cable was cheaper than satellite service. 
(n=785)

Question 5g: Because you thought cable offered better customer service quality than you were receiving. (n=785)

Question 5h: Because you were interested in purchasing your Internet service through a cable provider and wanted to purchase television service from the same company. (n=785)

Question 5i: Because you wanted to get both your local broadcast channels and cable channels from the same company. (n=785)

Question 5j: Because family and friends recommended cable. (n=785)

Question 5k: Because cable was the only television option available to you other than over-the-air broadcasting. (n=785)

Question 6: [Only asked of those who answered “over the air,” “cable,” or “C-band satellite” in question 1.] Have you considered purchasing direct satellite service, such as DirecTV or EchoStar’s DISH Network, within the past 2 years? (n=2,375)

Question 7: [Only asked of those who answered “direct broadcast satellite” in question 1.] Did you begin subscribing to your current direct satellite service within the past 2 years? (n=497)

Question 8: What method did you previously use for your main television viewing? (n=241)
Over the air, through an antenna
A big dish, C-band satellite
Other (Specify)

Question 9: [Only asked of those who answered “yes” to question 6 or question 7.] I am now going to read you a list of reasons that someone may think of when purchasing satellite service. For each of these, please tell me if it was a major reason, a minor reason, or not a reason in why you considered or purchased satellite service. Again, please rate each of these as a major reason, a minor reason, or not a reason.

Question 9a: Because the satellite company offered special rates or other promotions, such as free installation or 3 months free. (n=854)

Question 9b: Because you wanted more channels than you were receiving. (n=854)

Question 9c: Because the satellite company added local broadcast channels, such as ABC or FOX, in your area. 
(n=854)

Question 9d: Because you wanted to purchase special features (like sports packages, pay-per-view, or movie options). (n=854)

Question 9e: Because you heard or saw that the picture and audio quality with satellite were better than you were receiving. (n=854)

Question 9f: Because you were interested in receiving high definition television channels. (n=854)

Question 9g: Because you thought that satellite was cheaper than cable. (n=854)

Question 9h: Because you thought that satellite offered better customer service quality than you were receiving. (n=854)

Question 9i: Because you were interested in purchasing your Internet service through a satellite company and wanted to purchase your television service from the same company. (n=854)

Question 9j: Because family and friends recommended satellite. (n=854)

Question 9k: Because satellite was the only television option available to you other than over-the-air broadcasting. (n=854)

Question 10: [Only asked of those who answered “yes” to question 6 or question 7.] When you considered purchasing direct satellite service, which service did you consider? (n=854)

Question 11: How do you currently access the Internet in your home? (If you use more than one method, please tell me which one you use most.) (n=2,872)
You have a computer, but don’t access the Internet
You don’t have a computer
Other (Specify)

Question 12: [Not asked of those who answered “cable modem service” in question 11.] Does your area cable provider offer Internet access through a cable modem service? (n=2,583)

Question 13: When thinking about purchasing TV programming, would the availability of cable modem Internet access make you more likely to choose cable service over satellite service? (n=2,872)

Question 14: [Not asked of those who answered “satellite Internet service” in question 11.] Have you considered purchasing Internet access service through a satellite provider? 
(n=2,857)

This appendix describes our econometric model of cable-satellite competition. Specifically, we discuss (1) the conceptual development of the model, (2) the data sources used for the model, (3) the merger of various data sources into a single data set, (4) the descriptive statistics for variables included in the model, (5) the estimation methodology and results, and (6) alternative specifications. In response to a congressional request, we developed an econometric model to examine the influence of satellite companies’ provision of local broadcast channels, along with other factors, on cable prices and DBS penetration rates in a large sample of cable franchise areas in 2001. This request represented a follow-up to a previous report we issued that analyzed the impact of DBS penetration rates on cable prices. Drawing on our previous model, the existing empirical literature, and our assessment of the current subscription video marketplace, we developed a model that retained many of the explanatory variables used in our previous model and in other studies but extended those analyses by adding new variables to account for the recent provision of local broadcast channels by DBS companies, an important factor in competition between cable and DBS companies. To examine the influence of the DBS companies’ provision of local channels on cable prices and DBS penetration rates, we employed a model that is based on the subscription video market, rather than on the narrower market for cable television. In 2001, the national market share of cable systems (as measured by subscribership) in what we call the subscription video market was about 78 percent, and the share of the DBS providers was about 18 percent. 
The remaining 4 percent of subscription television households obtained service through other means such as terrestrial wireless systems, satellite master antenna television systems (usually used in apartment buildings or other multiple-dwelling units), open video systems, and large “C-band” home satellite dishes. Cable providers and satellite providers can be regarded as “differentiated,” not so much because they use different technologies but because the services they provide are perceived as different by subscribers and because these varied providers face different laws and regulations that influence their cost structures as well as the type of product they provide. For example, in 2001, satellite subscribers in only 42 television markets could receive local broadcast signals from either DBS provider. Also, cable companies must pay local franchise fees and are required to provide capacity for public, educational, and government channels. In sum, cable and satellite providers are differentiated in consumers’ perception, in their legal context, and in their product offerings. In our model, cable prices and DBS penetration rates will depend broadly on the demand and cost conditions affecting both the cable and noncable providers of subscription video services. With the passage of the Satellite Home Viewer Improvement Act, DBS providers were granted authority to distribute local broadcast television channels in the broadcast stations’ local markets, perhaps allowing DBS providers to compete more fully with cable companies. To measure the influence of local channels, we used a variable that indicates whether local channels were available from both DBS providers in each franchise area. 
Estimating the influence of DBS companies’ provision of local channels on cable prices and DBS penetration rates is complicated by the possibility that the DBS penetration rate in an area is itself determined, in part, by the cable price in that area and that the cable price is determined, in part, by the DBS penetration rate. One statistical method applicable in this situation is to estimate a system of structural equations in which certain variables that may be simultaneously determined are estimated jointly. In our previous report, we estimated a four-equation structural model in which cable prices, the number of cable subscribers, the number of cable channels, and the DBS penetration rate were jointly determined. We modify this four-equation structural model to incorporate the influence of local channels via satellite on cable prices and DBS penetration rates. One implication of this estimation technique is that the estimated effects we report for the influence of DBS companies’ provision of local channels on cable prices and DBS penetration rates must be interpreted as direct effects on price and penetration. At the same time, there are indirect effects of local channels on cable prices and DBS penetration rates wherein these effects on cable prices and DBS penetration rates work through their effects on other endogenous variables. For instance, a DBS company’s provision of local channels may influence a cable operator’s decision about the number of channels to include in programming packages, which can, in turn, affect its cable price and the DBS penetration rate. We later present a table with results from reduced-form cable price and DBS penetration rate equations to show how the exogenous variables in the system of equations affect, both directly and indirectly, cable prices and DBS penetration rates. 
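One standard technique for estimating equations in which price and penetration are jointly determined is two-stage least squares. The report does not name the specific estimator used, so the sketch below is a generic illustration on simulated data: an unobserved factor drives both price and penetration (making ordinary least squares biased), and a variable z1, standing in for a cost shifter excluded from the penetration equation, serves as the instrument.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000

# Simulated market: unobserved factor u drives both price and penetration,
# so a simple OLS regression of penetration on price would be biased.
z1 = rng.normal(size=n)            # instrument: excluded cost shifter
z2 = rng.normal(size=n)            # included exogenous demand factor
u = rng.normal(size=n)
price = 1.0 + 0.8 * z1 + u + rng.normal(size=n)
penetration = 2.0 - 0.5 * price + 0.6 * z2 + u + rng.normal(size=n)

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Stage 1: project the endogenous regressor (price) onto all exogenous variables
Z = np.column_stack([np.ones(n), z1, z2])
price_hat = Z @ ols(Z, price)

# Stage 2: regress penetration on fitted price and the included exogenous variable
X2 = np.column_stack([np.ones(n), price_hat, z2])
beta = ols(X2, penetration)
print(beta[1])  # close to the true structural effect of -0.5
```

The same logic extends to a four-equation system: each endogenous variable (price, subscribers, channels, penetration) is first projected onto the full set of exogenous variables, and the fitted values replace the endogenous regressors in each structural equation.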
We estimated the following four-equation structural model of the subscription television market: Cable prices are hypothesized to be related to (1) the number of cable channels, (2) the number of cable subscribers, (3) the DBS penetration rate, (4) the DBS companies’ provision of local channels in the franchise area, (5) the size of the television market as measured by the number of television households, (6) horizontal concentration, (7) vertical relationships, (8) the presence of a nonsatellite competitor, (9) regulation, (10) average wages, and (11) population density. The cable price variable used in the model is defined as the total monthly rate charged by a cable franchise to the “typical subscriber,” including the fees paid for the most commonly purchased programming tier and rented equipment (a converter box and remote control). The explanatory variables in the cable price relationship are essentially cost and market structure variables. Number of cable subscribers is hypothesized to be related to (1) cable prices (per channel), (2) the DBS penetration rate, (3) the DBS companies’ provision of local channels in the franchise area, (4) the size of the television market as measured by the number of television households, (5) the number of broadcast channels, (6) urbanization, (7) the age of the cable franchise, (8) the number of homes passed by the cable system, (9) the median income of the local area, and (10) the presence of a nonsatellite competitor. The number of cable subscribers is defined as the number of households in a franchise area that subscribe to the most commonly purchased programming tier. This represents the demand equation for cable services, which depends on rates and other demand-related factors. 
Number of cable channels is hypothesized to be related to (1) the number of cable subscribers, (2) the DBS penetration rate, (3) the DBS companies’ provision of local channels in the franchise area, (4) the size of the television market as measured by the number of television households, (5) the median income of the local area, (6) cable system capacity in terms of megahertz, (7) the percentage of multiple-dwelling units, (8) vertical relationships, and (9) the presence of a nonsatellite competitor. The number of cable channels is defined as the number of channels included in the most commonly purchased programming tier. The number of cable channels can be thought of as a measure of cable programming quality and is explained by a number of factors that influence the willingness and ability of cable operators to provide high-quality service and consumers’ preference for quality. DBS penetration rate in a television market is hypothesized to be related to (1) cable prices, (2) the DBS companies’ provision of local channels in the franchise area, (3) the size of the television market as measured by the number of television households, (4) the age of the cable franchise, (5) the median income of the local area, (6) cable system capacity in terms of megahertz, (7) a dummy variable for areas outside metropolitan areas, (8) the percentage of multiple-dwelling units, (9) the angle—or elevation—at which a satellite dish must be fixed to receive a satellite signal in that area, and (10) the presence of a nonsatellite competitor. The DBS penetration rate variable is defined as the number of DBS subscribers in a franchise area expressed as a proportion of the total number of housing units in the area. As hypothesized, the DBS penetration rate is expected to depend on the prices set by the cable provider as well as on the demand, cost, and regulatory conditions in the subscription video market that directly affect DBS. 
Many of the explanatory variables appeared in our 2000 report as well as in previous studies of cable prices prepared by others. The explanatory variables included in these studies fall into two general categories: (1) demand and cost factors and (2) market structure and regulatory conditions. Table 1 presents the expected effects of all the explanatory variables in the structural model on cable prices and DBS penetration rates. We required several data elements to build the data set used to estimate this model. The following is a list of our primary data sources: We obtained data on cable prices and service characteristics from a 2001 survey of cable franchises that FCC conducted as part of its mandate to report annually on cable prices. FCC’s survey asked a sample of cable franchises to provide information about a variety of items pertaining to cable prices, service offerings, subscribership, franchise area reach, franchise ownership, and system capacity. We used the survey to define measures of each franchise area’s cable prices, number of subscribers, and number of cable channels as described above. In addition, we used the survey to define variables measuring (1) system megahertz (the capacity of the cable system in megahertz), (2) homes passed by the cable system serving the franchise area and perhaps other franchises in the same area, (3) competitive status—a dummy variable equal to 1 if the franchise faced “nonsatellite” competition from an unaffiliated subscription video company (or “overbuilder”) or from a local telephone company, (4) regulation—a dummy variable equal to 1 if the franchise is subject to rate regulation of its Basic Service Tier, and (5) horizontal concentration—a dummy variable equal to 1 if the franchise is affiliated with 1 of the 10 largest MSOs. From SkyREPORT, we obtained an estimate of DBS subscriber counts as of year-end 2001 for each zip code in the United States. 
We used this information to calculate the number of DBS subscribers in a cable franchise area, which, when used in conjunction with the number of housing units, was used to define the DBS penetration rate. We used the most recent data from the U.S. Census Bureau to obtain the following demographic information for each franchise area: median household income, proportions of urban and rural populations, housing units accounted for by structures with more than five units (multiple- dwelling units), population density, and nonmetropolitan statistical areas. For average wage, we used year 2000 state estimates for Telecommunications Equipment Installers and Repairers from the Bureau of Labor Statistics’ (BLS) Occupation and Employment Statistics Survey. We used data from BIA MEDIA AccessPro to determine the number of broadcast television stations in each television market. To define the dummy variable indicator of vertical integration, we used information on the corporate affiliations of the franchise operators provided in FCC’s survey. We used this information in conjunction with industrywide information on vertical relationships between cable operators and suppliers of program content gathered by FCC in its 2001 annual video report. We used information from the National Association of Broadcasters to identify in which television markets local channels were available from both DBS companies. From Nielsen Media Research, we acquired information to determine the number of television households in each designated market area (DMA) and to determine in which DMA each cable franchise was located. On the basis of a zip code associated with each cable franchise, we were able to determine the necessary satellite dish elevation for each cable franchise area from information available on the Web pages of DirecTV and EchoStar. The level of observation in our model is a cable franchise. 
Many of the variables we used to estimate our model, such as each cable franchise’s price, came directly from FCC’s survey of franchises. However, we also created variables for each franchise from information derived from other sources. For example, median income and the extent of multiple-dwelling units were obtained from Census data, and the number of DBS subscribers was provided by SkyREPORT. The assignment of these variables to each franchise required identifying the geographic extent of each franchise area because Census and DBS data are reported within geographic definitions that differ from cable franchise areas. Census data can be obtained at several geographic levels, including communities or counties. Additionally, some information—most notably DBS subscriber counts—is at a zip code level. FCC’s survey and other FCC data on cable franchises contain information on the franchise community name, type (such as city or town), and county, which can be used to link franchises to Census areas. One complicating factor in using community names to assign non-survey-derived information to each franchise is that some cable franchises are in areas, such as unnamed, unincorporated areas, that do not correspond to geographic areas for which Census or other data are readily available. Another complicating factor is that FCC’s 2001 survey did not contain information on the zip codes served by particular franchise areas. We first attempted to determine the geographic area associated with each cable franchise. Our general approach was to combine each franchise’s community name field with an indicator of community type, such as city or town, and then match these names to census place or, alternatively, county subdivision (minor civil division) files. 
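The name-matching step described above can be sketched as follows; the community names and place identifiers are hypothetical, and real matching must also handle counties, abbreviations, and spelling variants:

```python
# Sketch of linking a franchise's community name and type to census place
# records. CENSUS_PLACES and its identifiers are made up for illustration.
CENSUS_PLACES = {
    ("SPRINGFIELD", "CITY"): "place_2512345",
    ("GREENFIELD", "TOWN"): "place_2567890",
}

def match_franchise(community_name, community_type):
    """Return the census place id for a franchise, or None when there is no
    direct match (e.g., an unnamed, unincorporated area)."""
    key = (community_name.strip().upper(), community_type.strip().upper())
    return CENSUS_PLACES.get(key)
```

Franchises for which no direct match exists would fall through to the county subdivision files or to a zip-code-based assignment.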
Since many of the franchises in our sample correspond to recognizable local entities—such as cities, towns, and townships—we were able to make the link directly to Census data sources and assign demographic and other census data gathered at the level of the associated community. Of the 722 franchises used in the model, 442 were linked to census place files, and 126 were linked to census county subdivision files. For other franchises, however, the link to Census records was not as direct. For franchises in unincorporated, unnamed areas and those whose franchise areas represent a section of the associated community (which occurs in some large cities), we acquired additional information on the geographic boundaries of the franchise areas. For purposes of assigning demographic and other census data to each of these franchises, we identified a key zip code that we used to link to census data organized at the zip code level. Of the 722 franchises used in the model, 28 were in large cities with multiple franchises, 94 were in unincorporated areas of counties for which we obtained more specific boundary information, and 32 were in unincorporated areas for which we did not obtain more specific boundary information. The satellite subscriber information we obtained was organized by zip code. In order to match these counts to franchises, we determined the zip code or zip codes associated with each franchise. Because zip codes often do not share boundaries with other geographies, one zip code can be associated with more than one cable franchise area. Also, many franchises, particularly larger ones, span many zip codes. Therefore, we needed to identify the zip code or codes in each franchise area as well as the degree to which each of those zip codes is contained in each franchise area to calculate the degree of satellite penetration for each franchise. We accomplished this by using software designed to relate various levels of census geography to one another. 
For most franchise areas—those that correspond to census places, county subdivisions, or entire counties, as well as some of the franchises in multiple-franchise jurisdictions—we used this software to relate those geographic areas (and, in some cases, census tracts) directly to their corresponding zip codes and to calculate the share of each zip code’s population, according to the 2000 Census, that was contained in each area. We used these population shares to allocate shares of each zip code’s total DBS subscribers to the relevant franchise area. For some franchise areas in unincorporated areas, we used the zip code or codes identified during our investigation of the geographic extent of these franchises, and we used the software to estimate the proportion of the population in those zip codes living in unincorporated areas and to allocate DBS subscribers on the basis of these population proportions. For some other franchise areas in unincorporated areas, we approximated DBS penetration using population proportions in the unincorporated portions of all zip codes in the relevant counties. We assigned other information to each franchise on the basis of the franchise’s county, state, or the key zip code that we identified. Wage data from BLS were assigned at the state level; nonmetropolitan status, percentage of urban population, and the Nielsen television market of each franchise were assigned at the county level. As part of the process used to match zip codes to franchises, we defined a key zip code for each franchise as the zip code with the largest franchise area population. We used this zip code to assign dish elevation for each franchise. Table 2 provides basic statistical information on all of the variables included in the cable–satellite competition model. We calculated these statistics using all 722 observations in our data set.
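The population-share allocation described above reduces to a weighted sum, sketched here with made-up counts and shares:

```python
# Allocate zip-code-level DBS subscriber counts to franchise areas in
# proportion to the share of each zip code's population that lies inside
# each franchise area. All figures below are illustrative.

# (zip code, franchise) -> share of the zip's population inside the franchise
population_shares = {
    ("20001", "F1"): 0.70,
    ("20001", "F2"): 0.30,   # one zip code spanning two franchise areas
    ("20002", "F1"): 1.00,
}

dbs_by_zip = {"20001": 1000, "20002": 400}   # DBS subscribers per zip code
housing_units = {"F1": 9000, "F2": 2500}     # penetration denominator

# Sum each franchise's allocated subscribers across its zip codes.
allocated = {}
for (zip_code, franchise), share in population_shares.items():
    allocated[franchise] = allocated.get(franchise, 0.0) + share * dbs_by_zip[zip_code]

# DBS penetration rate per franchise area.
penetration = {f: subs / housing_units[f] for f, subs in allocated.items()}
```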
We employed the Three-Stage Least Squares (3SLS) method to estimate our model. Table 3 includes the estimation results for each of the four structural equations. All of the variables, except dummy variables, are expressed in natural logarithmic form. This means that coefficients can be interpreted as “elasticities”—the percentage change in the value of the dependent variable associated with a 1 percent change in the value of an independent, or explanatory, variable. The coefficients on the dummy variables are elasticities in decimal form. Most of our results are consistent with the economic reasoning that underlies our model as well as with the results from several previous studies, including our 2000 report. We found that DBS companies’ provision of local channels is associated with significantly higher DBS penetration rates. As shown in table 3, our model results indicate that in cable franchise areas where local channels are available from both DBS providers, the DBS penetration rate is approximately 32 percent higher than in areas where local channels are not available via satellite. This finding suggests that in areas where local channels are available from both DBS providers, consumers are more likely to subscribe to DBS service, and therefore DBS appears to be more able to compete effectively for subscribers than in areas where local channels are not available from both DBS providers. Several additional factors also influence the DBS penetration rate. Our model results indicate that the DBS penetration rate is greater in nonmetropolitan areas and in cable franchise areas that are outside the largest television markets, as measured by the number of television households in the market. These two factors can be associated with the historical development of satellite service, which had been marketed for many years in more rural areas. 
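The coefficient interpretation described above can be illustrated with a short sketch; the coefficient values below are placeholders, not the model's estimates. Note that reading a dummy coefficient directly as a proportional change is an approximation that is close for small coefficients; the exact proportional effect is exp(c) - 1:

```python
import math

# In a log-log equation ln(y) = a + b*ln(x) + c*D + ..., the coefficient b on
# a logged regressor is an elasticity: a 1 percent change in x is associated
# with roughly a b percent change in y.
b = -0.5                                  # placeholder price elasticity
approx_pct_change_y = b * 1.0             # response to a 1 percent change in x

# For a dummy variable D, the coefficient c is read in decimal form as an
# approximate proportional effect; the exact effect of switching D from 0 to 1
# is exp(c) - 1, which is close to c when c is small.
c = 0.32                                  # placeholder dummy coefficient
exact_effect = math.exp(c) - 1            # about 0.377
```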
Additionally, the DBS penetration rate is higher in areas that require a relatively higher angle or elevation at which the satellite dish is mounted and is lower in areas where there are more multiple-dwelling units. These two factors can be associated with the need of DBS satellite dishes to “see” the satellite: a dish aimed more toward the horizon (as opposed to being aimed higher in the sky) is more likely to be blocked by a building or foliage, and people in multiple-dwelling units often have fewer available locations to mount their dish. We did not find that DBS companies’ provision of local broadcast channels is associated with lower cable prices. In table 3, the estimate for this variable is not statistically significant, and we therefore cannot reject the hypothesis that provision of local channels has no impact on cable prices. However, we found that cable prices were approximately 17 percent lower in areas where a second cable company—known as an overbuilder—provides service. Additionally, cable prices were higher when the cable company was affiliated with 1 of the 10 largest MSOs. This result indicates that horizontal concentration could be associated with higher cable system prices. Finally, cable prices are higher in areas where the cable company provides more channels, indicating that consumers generally are willing to pay for additional channels and that providing additional channels raises a cable company’s costs. We also found several interesting results in the cable subscriber and cable channel equations. In the cable subscriber equation, we obtained an estimate of the price elasticity of demand for cable services that was lower (in absolute value) than the estimate in our previous report. In the cable channel equation, our model results indicate that DBS provision of local channels is associated with improved cable quality, as represented by an increase in the number of channels provided to subscribers.
In areas where both DBS companies provide local channels, we found that cable companies offer subscribers approximately 6 percent more channels. This result indicates that cable companies are responding to DBS provision of local channels by improving their quality, as reflected by the greater number of channels. Also, cable franchises offered fewer channels (approximately 4 percent fewer) when the company was vertically integrated with a programming network. Finally, we present reduced-form cable price and DBS penetration equations (see table 4), which include all of the exogenous variables in the system and show the net effects of those variables on cable prices and DBS penetration rates. In the reduced-form equation, the estimates for local broadcast service include both the direct effects—as measured in the 3SLS system of structural equations—and indirect effects. Consistent with the 3SLS system, local channels are associated with significantly higher DBS penetration rates. Where local channels are offered by both DBS providers, DBS penetration rates are approximately 33 percent higher than in areas where local channels are not available. Also, DBS penetration rates are higher in nonmetropolitan areas, smaller television markets, and places where the dish elevation is at a greater angle. Again, we cannot reject the hypothesis that provision of local channels via satellite has no impact on cable prices. But cable prices are approximately 15 percent lower in franchise areas where a second cable company provides service, while prices are approximately 6 percent higher when the cable company is affiliated with 1 of the 10 largest MSOs. We considered an alternative specification under which we expanded the definition of local channels to include markets where only one DBS provider offered local channels. In 2001, there were seven markets where one of the two DBS providers, but not both, offered local channels.
Expanding our definition of local channels to include markets where either DBS company offered local channels added 35 observations (4.9 percent of all observations) defined to have local channels to our data set. The results are generally consistent with our primary specification. In both the 3SLS system of structural equations and the reduced-form equation, DBS provision of local channels is associated with significantly higher DBS penetration rates. Further, the estimate for the local channels variable is not statistically significant in the cable price equation, and we therefore cannot reject the hypothesis that provision of local channels has no impact on cable prices. We considered another alternative specification using 3 years of cable rate and channel data in a single-equation specification. As part of its annual survey, FCC requested that cable companies report their cable rates and number of channels provided for 1999 to 2001. Using these data, we regressed cable rates on the number of cable channels provided, dummy variables for DBS provision of local broadcast channels (on the basis of the amount of time the service was available), and year and cross-section (i.e., cable franchise) dummy variables. In this panel model, we found that DBS provision of local broadcast channels was associated with higher cable rates. Because we lacked DBS penetration rate data for the 3-year period, we were unable to examine the impact of local channels on DBS penetration rates. In addition to those named above, Wendy Ahmed, Stephen M. Brown, Michael Clements, Michele Fejfar, Rebecca L. Medina, Hai Tran, and Mindi Weisenbloom made key contributions to this report.
Direct broadcast satellite (DBS) television service has grown to become the principal competitor to cable television systems. In October 2001, the two primary DBS companies, EchoStar and DirecTV, proposed a merger that is pending before the Department of Justice; the Federal Communications Commission (FCC) recently announced that it had declined to approve the proposed merger. GAO was asked to examine several issues related to competition in providing subscription video services, including the competitive impact of the availability of cable modem Internet access and the effects of DBS companies’ offering local broadcast channels on cable prices and DBS penetration rates. GAO also examined the technical capability of the individual DBS companies to expand local channel services into more television markets. This report offers no opinion on the merits of the proposed merger. DBS and cable companies compete for subscribers to their video services and to their Internet access services, although to date, cable modem service is the most popular method of broadband home Internet access. On the basis of a random survey of 3,000 individuals, it appears that the availability of Internet access services is important for some consumers, although not for the majority, when they are considering various video service providers. In 1999, DBS companies began to offer local broadcast channels in select television markets across the country. According to results from GAO’s econometric model, the provision of local broadcast channels by DBS companies is associated with significantly higher DBS penetration rates, although GAO found no evidence that DBS provision of local channels influences cable prices. In general, GAO’s model results suggest that DBS is able to compete with cable more effectively for subscribers in areas where DBS subscribers can receive local broadcast channels.
The two DBS companies have stated that if they merge, they will, as a combined entity, have sufficient satellite capacity to provide local broadcast programming in all 210 television markets and to introduce new services. GAO’s technical expert’s review of various documents related to the two DBS companies’ satellite capacity indicates that, given current technologies and deployed assets, neither company would individually be able to offer all of the local channels in all markets. However, whether to introduce more local channels is, in the long term, a business decision. Whether the benefits would outweigh the costs for the individual companies to eventually offer local channels in all 210 television markets is not clear. Both FCC and the Department of Justice declined to provide comments on the substance of this report because of the merger proceedings.
Both mutual fund companies and banks are financial intermediaries, that is, they raise funds from savers and channel these funds back to the economy by investing them. Banks generally use their deposits either to make loans or to invest in certain debt securities, principally government bonds. Mutual funds do not make loans, but they do invest in securities, primarily bonds and stocks. Money from these funds, in turn, flows either directly (through primary securities markets) or indirectly (through secondary securities markets) to the issuers of such securities. Long before the recent mutual fund boom, the relative importance of bank loans as a source of finance had been declining. As early as the 1960s, some large businesses had been replacing their usage of bank loans by issuing short-term securities called commercial paper. Subsequently, more companies found ways to tap the securities markets for their financial needs, lessening their dependence on bank loans. For example, corporations’ reliance on bank loans as a percentage of their credit market debt declined from 28 percent in 1970 to 20 percent in 1994. The household sector (generally residential and consumer borrowers) also has become less dependent on bank loans for the ultimate source of financing. Beginning in the mid-1970s, and to a much greater extent since the early 1980s, major portions of home mortgage portfolios have been sold by banks and thrifts to financial intermediaries who use them as collateral for marketable securities and then sell the securities to investors. More recently, significant amounts of consumers’ credit card debt and automobile loans have been similarly financed by securities instead of bank credit. Through securitization, banks and thrifts provide the initial financing for these mortgage, credit card, and automobile loans. However, once the loans are sold, it is the securities market that is the ultimate source of financing. 
More broadly, the term securitization describes a process through which securities issuance supplants bank credit as a source of finance, even if the borrower originally received funds from a bank. In addition, the relative importance of bank loans has been further diminished by the increased provision of direct loans by nonbank financial intermediaries, including securities firms, insurance companies, and finance companies. In this report, discussion of the “impact of mutual funds on deposits” or of the “movement of money from deposits to mutual funds” refers not merely to direct withdrawal of deposits by customers for the sake of investing in mutual fund shares but also to customers’ diversion into mutual funds of new receipts that otherwise might have been placed in deposits. To assess the impact of mutual funds on deposits, we examined and compared available data published by industry sources and the bank regulators. Data on deposits in banks are routinely reported to and published by the bank regulators. Data on mutual funds are gathered and published by an industry association, the Investment Company Institute (ICI). Moreover, the Federal Reserve maintains and publishes the Flow of Funds Accounts, which is an attempt to capture the entire framework of financial transactions in the economy, including all major groupings of participants and instruments. This publication includes the bank data and mutual funds data that we used (the Federal Reserve obtains the mutual fund data from ICI). In the Flow of Funds Accounts, the Federal Reserve presents statistics on (1) the amounts outstanding at the end of each quarter and each year and (2) the net flows during each quarter and each year. For bank deposit information, the change of the level from one period to the next is used to determine the net flows into or out of deposits during that period. 
The same method is used for money market mutual funds, where the funds’ managers intend to maintain the value of a share constant at one dollar on a daily basis. For longer-term mutual funds, however, the period-to-period change in the fund’s value generally does not equal the net flows during the period because the value fluctuates with (1) the flows of customer money, (2) the changing prices of the stocks and bonds held by the mutual funds, and (3) the reinvestment of dividends and interest in the fund. In the Flow of Funds Accounts, the net flows into mutual funds are calculated from industry data on changes in amounts outstanding and adjusted for movements of security-price averages. To assess the impact of mutual funds’ growth on the total supply of loanable and investable funds, we examined the Flow of Funds Accounts data on the sources of finance for the economy. In addition, we did a literature search for research articles examining (1) how residential, consumer, and business borrowers obtain financing, not only from bank loans or securities issuance but also from other sources and (2) how lenders, including banks as well as nonbank providers such as finance companies, funded the financing they provided and whether they sold or securitized their finance. We supplemented our search of the statistical sources with other material. We used research articles published by the Federal Reserve and documents published by securities industry sources over the last 5 years. In addition, we interviewed Federal Reserve experts on the previously mentioned topics. We also drew upon information gathered from banks and mutual fund specialists who were interviewed for an ongoing related GAO assignment. The Federal Reserve provided written comments on a draft of this report. These comments are discussed on page 15. We did our review in Washington, D.C., from March 1994 to November 1994 in accordance with generally accepted government auditing standards. 
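The two flow calculations described above can be sketched as follows; the valuation adjustment shown is an illustrative approximation, not the Federal Reserve's exact method, and the figures are made up:

```python
# For deposits and money market funds (constant $1 share price), the net flow
# over a period is simply the change in the level outstanding.
def level_change_flow(level_prev, level_curr):
    return level_curr - level_prev

# For long-term mutual funds, the change in assets mixes customer flows with
# price movements of the securities held. A standard valuation adjustment
# (an approximation, not the Federal Reserve's exact method):
def net_flow(assets_prev, assets_curr, market_return):
    """Estimate net customer flow, stripping out the change attributable to
    price movements of the fund's holdings."""
    return assets_curr - assets_prev * (1.0 + market_return)

# Illustration: a fund grows from $1,000 to $1,150 while its holdings
# returned 10 percent; only $50 of the $150 increase is new customer money.
flow = net_flow(1000.0, 1150.0, 0.10)
```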
The Federal Reserve and the Securities Industry Association (SIA) agreed that the flow of funds into mutual funds has had a significant impact on bank deposits. Although some observers dispute the magnitude of this impact, the evidence we reviewed supports the view that mutual funds have attracted sizable amounts of money that otherwise might have been placed in bank deposits. At year-end 1994, the amount of money in mutual funds ($2,172 billion) was considerably less than that in bank deposits ($3,462 billion). The mutual fund total, however, had risen by almost $1.2 trillion since year-end 1989, most of it from net new inflows, while the deposit total was $89 billion less than at year-end 1989. Despite these data, some observers maintain that deposits have not been a major source of the flow of money into mutual funds in recent years. For example, one study by a securities firm claims that “mutual fund inflows do not depend on outflows from the banking system,” arguing that “net new savings” are more important. ICI, a mutual funds industry association, stated that “CD proceeds play minor role as source for investment in stock and bond mutual funds,” and that “current income” and “the proceeds from other investments” were far more important. Nonetheless, most observers whose studies we reviewed agree that mutual funds have had a significant effect on bank deposits. Federal Reserve publications state that there has been a movement from deposits into mutual funds. The same view is propounded by SIA. Moreover, in a 1994 survey of 205 bank chief executives, nearly half said that their banks had started selling mutual funds in order to retain customers. We did not find any reliable quantification of the full impact of mutual funds on deposits, including both the direct withdrawals and customers’ diversion of new receipts that otherwise might have been placed in deposits. 
We assessed two quantitative approaches: (1) the total net flows into mutual funds and (2) SIA’s estimate of the impact on deposits. Because both approaches were incomplete, we examined a third alternative: the relationship between deposits and overall economic activity. This third approach also has limitations because there are a variety of factors that affect the relationship between deposits and gross domestic product (GDP). Nonetheless, it provided a more comprehensive look than the other approaches. Using the ratio of deposits to GDP as a benchmark, we estimated that—for the period 1990 through 1994—the total impact of mutual funds on deposits may have been sizable, but probably less than $700 billion. The total net flows into mutual funds from all sources during 1990 through 1994 were $1,067 billion. (See table 1.) The impact on deposits had to be less than this amount because the evidence indicated there were also flows into mutual funds from nondeposit sources. For example, some of the money placed in mutual funds by the household sector probably derived from the sales of stocks and bonds since, in 1991 and 1993, the household sector sold more individual securities than it bought. (See table 2.) Another possible source of flows into mutual funds was the frequent occurrence of sizable lump-sum distributions to individuals from retirement plans and job-termination arrangements. According to both the Federal Reserve and SIA, much of this money was placed in mutual funds by the recipients. SIA’s estimate of the impact of mutual funds on deposits was incomplete because it dealt only with the direct impact, i.e., the withdrawal of existing deposits for the sake of investing in mutual funds. Even this estimate of the direct impact was incomplete because it was primarily based on net withdrawals of banks’ time deposits, rather than total deposits.
Using time deposits as a measure, SIA stated that the flow from deposits into mutual funds could have been about $200 billion in 1992 and 1993 combined. In fact, during this period declines in time deposits were largely offset by increases in demand deposits. Since there is no reporting of either the destinations of deposit withdrawals or of the origins of deposit placements, we cannot be certain whether time deposit withdrawals went into mutual funds or if part of them went into demand deposits. In any event, we found no estimates of the indirect effects, i.e., the diversion of new receipts into mutual funds rather than into deposits. Such a measure is more important in a growing economy because, even if deposits are growing, they may not be growing as fast as they would absent the diversion to mutual funds. We attempted to derive a reasonable estimate of the combined direct and indirect impact of mutual funds on deposits by examining the relationship of deposits to total economic activity, as measured by GDP. In figure 1, the solid line shows that the relationship of deposits to GDP remained fairly stable for most of the last 30 years. With only one exception, it stayed within a band of 63 percent to 73 percent every year from 1963 through 1990. Large flows into mutual funds in the 1980s (shown in figure 1 by the gap between the solid line and the dotted line) did not push the deposit-to-GDP ratio outside this band. However, in the early 1990s the deposit-to-GDP ratio moved significantly below the band, dropping to 51 percent in 1994. The ratio of mutual funds to GDP has been rising since the early 1980s, but only since the late 1980s has the rise in mutual funds-to-GDP ratio been roughly equal to the decline in the deposit-to-GDP ratio. 
This apparent substitution or movement of money into mutual funds rather than bank deposits has been, at least in part, the result of historically low interest rates paid on bank deposits compared to expected risk-adjusted returns on mutual fund investments. If the gap between deposit rates of return and expected mutual fund rates of return narrows, this movement of funds out of deposits could slow or even reverse itself. We calculated what the deposit volumes would have been had the deposit-to-GDP ratio stayed at the lower end of its previous band, i.e., 63 percent. Using this benchmark, total deposits would have grown $695 billion during 1990 through 1994. Because deposits actually declined by $89 billion, this indicates a potential impact of $784 billion. Comparing actual deposits with the low end of the previous band is conservative. A deposit-to-GDP ratio nearer the middle of the band would indicate a larger shortfall. Nonetheless, it must be stressed that the deposit-to-GDP ratio has been pushed down by a number of factors in addition to a movement of deposits into mutual funds. These factors include a dramatic downsizing of the savings-institution industry, a decline in loans at commercial banks, and a shift by banks into greater use of nondeposit funding sources. We were unable to determine exactly how much of the decline in the deposits-to-GDP ratio can be attributed to the impact of mutual funds. Nonetheless, on the basis of the above analysis, we concluded that a reasonable estimate of the impact was sizable but probably less than $700 billion. The movement of money from bank deposits to mutual funds should have little if any effect on the total supply of loanable and investable funds available to the economy, even though this movement may have shifted the intermediaries through which finance flows. Both types of intermediaries (banks and mutual fund companies) generally invest a substantial portion of the funds they receive. 
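The deposit-to-GDP benchmark described above reduces to simple arithmetic, sketched here with the figures from the text:

```python
# Had deposits stayed at 63 percent of GDP (the low end of the historical
# band), they would have grown by $695 billion over 1990-1994; instead they
# declined by $89 billion. The gap is the potential impact on deposits.
implied_growth = 695     # $billions at the 63 percent benchmark
actual_change = -89      # $billions actually observed
potential_impact = implied_growth - actual_change   # 695 - (-89)
```

Because other factors also depressed the ratio, the report treats this $784 billion as an upper bound and settles on an estimate of probably less than $700 billion.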
As noted earlier, the share of bank loans in total finance was being reduced by securitization of assets long before mutual funds surged to prominence as competitors for customers’ dollars. Mutual funds have further advanced this securitization process. Both mutual funds and banks generally invest a substantial portion of the funds they receive, with the mutual funds investing mainly in securities and the banks investing in loans and certain kinds of securities. Thus, at the same time that a sizable amount of customer money went from bank deposits to mutual funds, the funds’ purchases of securities became a greater source of new finance to the economy than bank lending. In 1992 and 1993, about two-fifths of the net new funds flowing to the domestic nonfinancial sectors of the economy came via mutual funds, while the share that flowed via banks was about one-fourth of the net new funds. By and large, it was not possible to determine who “receives” the mutual funds’ investments. Unlike bank lending, where the money goes directly from the lending bank to the borrower, mutual funds’ investments largely flow through the securities markets, since most of the funds’ purchases are of tradable securities. (A relatively small but interesting exception occurs with so-called “prime-rate” mutual funds, which purchase securitized bank loans.) As large amounts of customers’ money flowed into mutual funds in the early 1990s, the funds’ investments in securities added liquidity to the securities markets generally. This liquidity not only improved conditions for existing issuers desiring to raise additional money but also may have made it easier for a broader range of borrowers to tap the securities-issuance markets. 
Availability of finance for the three different borrower sectors—residential, consumer, and business—could be disproportionately affected by the movement of funds out of bank deposits and into mutual funds, even when the total supply of loanable and investable funds is not affected. Because mutual funds invest mainly in securities, it is possible that those who issue securities might increase their access to finance at the expense of those who do not. Unfortunately, the available statistical information provides no way to measure the extent to which this has occurred. All three sectors obtain some of their financing through the securities markets, either through their own issues or via the intermediaries from which they obtain credit. Because significant amounts of finance flow through the latter intermediaries, we were unable to determine to what extent, or even whether, any of these sectors may face more difficulty in obtaining finance than they had previously experienced. However, we were able to determine that all three sectors increased their access to finance raised in the securities markets, although the degree varies by sector. In addition, we can describe the indirect channels through which securitization affects the availability of credit for these sectors, even though these indirect effects cannot be quantified. Residential finance has been extensively securitized. Although individual homeowners go to banks, thrifts, or mortgage companies for their mortgages, most residential mortgages are written in a way that facilitates their subsequent securitization. By the end of 1994, only 34 percent of the total value of home mortgages outstanding was directly held by commercial banks and thrifts, down from a two-thirds share in 1980 (see table 3). Nonetheless, banks and thrifts are now also providing indirect financing to homeowners: in addition to their (reduced) direct holdings of mortgages, they invest in mortgage-backed securities. 
Consumer credit is still largely provided by commercial banks. As of year-end 1994, 63 percent of consumer debt (nonmortgage) was held by depository institutions. Banks continue to actively originate consumer credit. Since the late 1980s, however, banks and other providers of consumer finance have securitized some of their automobile loans and credit card receivables, resulting in the securitized portion of consumer debt rising from zero in 1985 to 14 percent in 1994. (See table 4.) Moreover, consumers have another avenue of indirect access to the securities markets: borrowing from finance companies. These companies obtain two-thirds of their funds by issuing their own securities. We examined the supply of finance to the corporate sector for the years 1990 through 1994, when the greatest inflow into mutual funds occurred and when deposit growth was small or negative. During the first 4 years of this period, the amount of outstanding bank credit to nonfinancial corporations declined every year. (See table 5.) Not all corporations reduced their bank loans, of course, but the declines outweighed the increases. In 1994, for the first time during this period, there was an increase in outstanding bank credit to nonfinancial corporations. In the first year of this period, 1990, the corporate sector did not offset declining bank loans by increased issuance of securities. In fact, the sector redeemed more securities than it issued. Thereafter, however, corporations far surpassed previous records for raising new funds on the securities markets. Net issuance averaged $100 billion annually in 1991 through 1993, compared with a previous single-year record of $55 billion. In 1994 there was a sharp falloff of net securities issuance by the corporate sector along with renewed growth in bank loans. The flow of liquidity from mutual funds into the securities markets enhanced the capacity of the securities markets to absorb these new issues. 
From 1990 through 1994, mutual funds made net purchases of corporate securities averaging $104 billion annually. Mutual funds not only purchased the securities of large corporations. They also were major purchasers of shares of smaller companies issuing stock for the first time as well as major purchasers of bonds issued by companies whose debt was not highly rated (so-called junk bonds). For those business borrowers who are unable to issue securities, there are indirect ways in which funding from the securities markets can flow to them. For example, just as finance companies channel funds from the securities markets to consumers, it is common for finance companies to lend to middle-sized companies that otherwise would borrow from banks. Even in the “noncorporate, nonfarm business sector,” where the borrowers tend to be quite small, finance companies supply about a fifth of total market debt. As another example, some business financing is funded by certain mutual funds that invest primarily in business loans bought from the originating banks. There is a possibility that those small businesses that are primarily dependent on small banks for their loans could experience reduced credit availability if their banks lost deposits to mutual funds. This could happen if neither these businesses nor their banks could readily obtain financing from other credit suppliers or from the capital markets. Available evidence shows that small businesses are more dependent on bank loans than large businesses. Whereas bank loans comprise about one-eighth of the debt of the corporate sector as a whole, a 1989 survey cited by the Federal Reserve suggested that small businesses get almost half of their debt financing from banks. Nonetheless, by implication, the average small business gets about half of its debt financing from nonbank sources. Some small businesses raise money by issuing securities. 
According to the Federal Reserve, many of these firms probably benefited from the more receptive conditions in the markets in recent years. However, small businesses with less than $100 million in annual sales generally would not be able to sell securities. Nonetheless, small businesses can be indirect beneficiaries of mutual funds’ investments, via the securities issued by finance companies that extend credit to small businesses. As another conduit, one securities firm has extended about $1 billion in credit lines to small businesses. Regarding the access of small businesses to bank loans, the movement of money out of deposits and into mutual funds does not necessarily mean that the availability of bank loans will be reduced. If the lenders are regional banks or larger, they may be losing some of their loan volume to securitization, either because they are securitizing their own assets or because their corporate customers are turning to securities issuance. In this case, more of the remaining deposits of these banks should be available for lending to small businesses. Nonetheless, presumably some portion of small businesses is solely or heavily dependent on small banks for credit. These borrowers might be affected if their banks lose deposits to mutual funds. Because some small banks’ borrower base is concentrated in small business, their clientele is not likely to reduce its borrowing by switching to securities issuance. Thus, a cutback of these banks’ funding sources would probably not be accompanied by a reduction of loan demand. Therefore, some small banks might have to respond to a loss of deposits by cutting back on loans outstanding. However, such cutbacks are only a hypothetical possibility. Recently, banks with $250 million or less in assets have had ample liquidity in the form of their holdings of bonds and other securities in their investment accounts. 
The ratio of securities to total assets averaged over 33 percent in 1993 and 1994, compared with an average of about 28 percent for much of the 1980s. If faced with a loss of deposits, a number of small banks presumably could fund existing and new loans by selling these securities. In sum, the channels of financing are quite varied; for the most part, a shift of customers’ money from deposits into mutual funds need not reduce credit availability for any group of borrowers. There remains the possibility that some borrowers from small banks might face credit availability constraints in certain circumstances, but it is not clear whether those circumstances currently exist. We received written comments on a draft of this report from the Federal Reserve. In its letter, the Federal Reserve stated that the report provides a timely review of the flow of funds between mutual funds and bank deposits and the effect of these flows on credit availability. The Federal Reserve said it had no further comment because the report made no recommendations to the Federal Reserve. We are sending copies of this report to the Chairman of the Board of Governors of the Federal Reserve System and other interested parties. We will also make copies available to others upon request. The major contributors to this report were John Treanor, Banking Specialist; Stephen Swaim, Assistant Director; and Robert Pollard, Economist. If you have any questions, please contact me at (202) 512-8678. 
Pursuant to a congressional request, GAO examined whether the movement of funds from bank deposits into mutual funds affects the availability of credit for residential, consumer, or commercial purposes. GAO found that: (1) the amount of money in mutual funds grew from $994 billion at year-end 1989 to $2,172 billion at year-end 1994, mainly due to an increase of net customer inflows; (2) during the same period, bank deposits declined from $3.55 trillion to $3.46 trillion; (3) as much as $700 billion of the growth in mutual funds may have come at the expense of bank deposits between 1990 and 1994; (4) the movement of money into mutual funds has resulted partly from the relatively lower interest rates paid on bank deposits, but this should have little effect on the total supply of loanable and investable funds, since mutual funds also lend or invest a major portion of the funds they receive; (5) the data were insufficient to determine whether the different categories of borrowers were affected by the shift of money from bank deposits to mutual funds; (6) all categories of borrowers have recently increased their access to financing obtained through the securities markets; and (7) flows of deposits out of smaller banks could reduce the availability of finance for small businesses whose primary source of finance is loans from such banks.
HCFA, an agency within the Department of Health and Human Services (HHS), is responsible for administering much of the federal government’s multibillion-dollar investment in health care—primarily the Medicare and Medicaid programs. Rapid increases in Medicare program costs, coupled with increasing concern about fraud and abuse in the program, led the Congress to enact legislation—HIPAA and the BBA—to strengthen Medicare. HIPAA established the Medicare Integrity Program, which ensures increased funding for Medicare program safeguard efforts and authorizes HCFA to hire specialized antifraud contractors. The BBA made the most significant changes to Medicare in decades, designed to reduce the growth of Medicare spending. The law requires HCFA to implement new payment methodologies, expand managed care options, and strengthen program integrity activities. At the same time, these laws also added entirely new responsibilities—such as oversight of private health insurance and implementation of a new state children’s health insurance program—to HCFA’s historic mission to administer Medicare and Medicaid. As of December 1, 1998, 17 percent of all Medicare beneficiaries were enrolled in more than 450 managed care plans. Medicaid, a $177 billion federal and state grant-in-aid entitlement program administered by states, finances health care for about 36 million low-income families and blind, disabled, and elderly people. At the state level, Medicaid operates as a health insurance program covering acute-care services for most recipients, financing long-term medical care and social services for elderly and disabled people, and funding programs for people with developmental disabilities and mental illnesses. In addition, the BBA created the state-operated Children’s Health Insurance Program, which provides federal grants to states to provide basic health insurance coverage for low-income, uninsured children. 
Through this program, states have a choice of either expanding their Medicaid programs or developing a separate program to insure children. Under HIPAA, HCFA also has a completely new responsibility for ensuring that private health insurance plans comply with federal standards. In five states that did not pass legislation conforming to key provisions of HIPAA, HCFA has direct responsibility for enforcing HIPAA standards for individual and group insurance plans. In addition, HIPAA, along with the BBA, provides HCFA more opportunities to improve its fraud and abuse identification and prevention programs in Medicare. HCFA had about 4,100 staff as of October 1998. About 65 percent were located in the central office and the remainder worked in the agency’s 10 regional offices. In addition to its workforce, HCFA oversees Medicare claims administration contractors who employed an estimated 22,000 people in fiscal year 1997. Last year, we told you that substantial program growth and greater responsibilities appeared to be outstripping HCFA’s capacity to manage its existing workload. Today, the message is a more complicated one. HCFA has made great strides in addressing many of its immediate priorities—including readying critical computer systems for the year 2000 and implementing many provisions of HIPAA and the BBA. But the number and complexity of the BBA’s requirements and the urgency of systems changes, coupled with a backlog of decades-old problems associated with HCFA’s routine operations, make it clear that much more needs to be accomplished. Over the past year, HCFA has made a concerted effort to deal with its most pressing priority—the Year 2000 computer systems problem—commonly referred to as Y2K. If uncorrected, Y2K problems could cause computer systems that run HCFA’s programs to shut down or malfunction, resulting in serious disruptions to payments to Medicare providers and services to Medicare beneficiaries. 
Addressing Y2K is a formidable task for HCFA, because the Medicare program uses 6 standard claims processing systems, about 60 private contractors, and financial institutions nationwide to process about 900 million Medicare claims each year for about 1 million hospitals, physicians, and medical equipment suppliers. In September 1998, we reported that time was running out for HCFA to modify Medicare systems to handle Y2K. HCFA was severely behind schedule in repairing and testing its systems and in developing contingency plans to handle system failures. Until 1997, HCFA was attempting to develop the Medicare Transaction System—which would be Y2K compliant—to replace its existing Medicare claims processing systems. But the project was halted because of design problems and cost overruns. This left HCFA with multiple, noncompliant Medicare claims processing systems that needed modernization. Compounding this difficult task was HCFA’s failure to adequately direct and monitor its Y2K project. We recommended changes to better manage its Y2K efforts, and HCFA agreed to implement our recommendations as soon as possible. HCFA recently reported to HHS that as of December 31, 1998, it had completed renovating 5 of the 6 standard systems used by its contractors to pay claims and all 25 of its mission-critical internal systems. We are now monitoring HCFA’s progress in implementing the recommendations in our September 1998 report, and we are reviewing the agency’s progress in addressing the critical areas of Y2K testing and business continuity and contingency planning. We will testify on these issues to the Congress in the next few weeks. Furthermore, although HCFA is not directly responsible for state Medicaid enrollment and payment systems, agency officials said they are concerned that some state systems may fail. To help prevent this, the agency has begun to work with states on their Y2K problems. 
Moreover, some of the systems that HCFA must modify to achieve Y2K compliance are obsolete and will need to be replaced soon after the year 2000. Y2K presented an immediate problem with an inflexible end point, which has forced HCFA to shelve its efforts to consolidate its Medicare claims processing systems and modernize other systems. After the termination of the Medicare Transaction System, HCFA decided to consolidate the number of systems that pay claims to reduce systems maintenance costs and streamline efforts to implement required systems changes. But systems consolidation could not go forward while HCFA and its contractors were renovating and testing their systems for Y2K readiness. As a result, HCFA is spending millions to renovate certain systems for Y2K readiness that it plans to stop using soon after 2000. HCFA has completed many major tasks this past year and has implemented significant portions of HIPAA and the BBA, but progress remains slow. For example, HCFA has taken steps to allocate HIPAA funding and to implement authorities to combat waste and abuse in the Medicare program. HIPAA provided additional funds for HCFA’s Medicare claims processing contractors to use to detect fraudulent and abusive billing practices. The claims administration contractors use these funds to hire and retain staff knowledgeable in conducting provider audits, claims reviews, and payment data analyses, among other activities. HCFA promptly issued the contractors’ fiscal year 1999 budget allocations, unlike the situation in fiscal year 1998, when HCFA did not provide this funding to the contractors until a third of the year had passed. As part of HIPAA, the Congress also gave HCFA the authority to contract with specialists to perform payment safeguard activities. HCFA is now reviewing the submissions it received in response to its September 1998 solicitation for bids to become a program safeguard contractor. 
Such a contract could be awarded by May 1999, but the scope will be limited and will not provide many of the benefits initially envisioned from using a specialty contractor. HCFA also issued regulations implementing the BBA’s new Medicare+Choice program and took several steps toward implementing the new National Medicare Education Program last year. The regulations, published in June 1998, represented a massive undertaking accomplished within a very short time period. In rushing to meet the deadline, however, HCFA developed some of the provisions without full consideration of their impact on managed care organizations. For example, the regulations required that managed care plans assess the health status of all new Medicare members within 90 days of enrollment, but this requirement would also cover existing plan members for whom the plan may already have comprehensive information. Similarly, the regulations require each managed care organization’s chief executive officer to certify that the encounter data provided to HCFA are 100-percent accurate. To managed care plans, such a standard seems unreasonable because these data are generated from many sources not directly under their control, including contracting physicians, hospitals, and other providers. In addition, managed care plans are concerned that other requirements cannot realistically be accomplished in the required time frames, may duplicate existing accreditation and reporting requirements, and could create disincentives to work on more difficult quality improvement projects. HCFA has agreed to reconsider a number of items and is planning to change the standard for data accuracy so that plans’ chief executive officers will certify to the best of their knowledge that the data provided to HCFA are accurate. 
For the new National Medicare Education Program, HCFA established an eight-point plan for educating beneficiaries about their new managed care options; implemented an Internet site for providing comparative managed care plan information; and has begun phasing in its toll-free call center and its mail-out of a revised Medicare handbook to beneficiaries in five states, which foreshadowed the nationwide mail campaign planned for this fall. The effort to produce Medicare handbooks was more complicated than the agency originally expected. Of the 15 comparative handbooks mailed to beneficiaries in different geographic areas, 12 were inaccurate because HCFA published them before managed care plans finalized their Medicare participation decisions. The Congress’ efforts to encourage the growth of Medicare managed care could be thwarted if plans refuse to participate and if beneficiaries are confused, instead of enlightened, about their many health care choices. The BBA also requires Medicare to replace its cost-based payment methods with prospective payment systems (PPS), which pay providers—regardless of their costs—fixed, predetermined amounts that vary according to patient need. To meet BBA targets, HCFA has to design and implement four PPS systems: a skilled nursing facility (SNF) PPS by July 1, 1998; a home health agency PPS by October 1, 1999, which was delayed by later legislation until October 1, 2000; a hospital outpatient PPS by calendar year 1999; and an inpatient rehabilitation PPS by fiscal year 2001. The SNF PPS was implemented on July 1, 1998. However, to prevent additional complications during system renovation and testing for Y2K, the agency has missed deadlines to make systems changes needed for beginning the hospital outpatient and home health agency prospective payment systems. These delays could affect both budgetary savings and Medicare beneficiaries themselves. The Congressional Budget Office had estimated that new payment methods for home health and outpatient services would save Medicare about $23 billion between fiscal years 1998 and 2002. 
In addition, the hospital outpatient PPS would have reduced the amounts elderly patients pay for such services. HHS estimated that between January 1999 and April 2000, senior citizens will have to pay an extra $570 million in higher copayments over what they would have paid if the hospital outpatient PPS had been implemented on time. While many Medicare beneficiaries have some sort of third-party coverage for costs that Medicare does not cover—referred to as “Medigap” policies—they are likely to be indirectly affected because premiums for Medigap policies are increasing in line with rising Medicare costs. Although HCFA officials were tracking both BBA and Y2K implementation, top agency officials did not inform the Congress until July 1998 that the agency would be delayed in instituting the new payment methods. HCFA officials attributed their late awareness of this problem to communications breakdowns at three levels. First, they believe operations and policy staff at headquarters responsible for designing the program changes were not consulting with each other and with others who were responsible for implementing them in the field. Second, they stated that top agency officials did not immediately find out what lower-level HCFA managers knew—how long it would take to implement complex BBA changes and how that could complicate Y2K testing of the systems. Finally, officials believe that there was inadequate consultation with Medicare contractors responsible for making the actual programming changes to their systems. While some parts of the BBA implementation were put on hold, HCFA moved quickly to implement a new SNF PPS. However, we believe that the SNF PPS has design flaws, and coupled with a lack of adequate planned oversight, this may diminish the anticipated reduction in Medicare costs that prospective payment was supposed to create. Savings depend on developing an appropriate daily payment (per diem) rate to reflect patients’ needs. 
The new daily payment rate is based on the average daily cost of providing all Medicare-covered skilled nursing services, adjusted to take into account the patient’s condition and expected care needs. We are concerned that the new SNF PPS’ design preserves the opportunity for providers to increase their compensation by supplying potentially unnecessary services, since the amounts paid still depend heavily on the number of therapy and other services patients receive. Furthermore, HCFA has not planned sufficient oversight to prevent fraud and abuse. For SNFs, a facility’s own assessment of its patients will determine whether a patient is eligible for Medicare coverage and how much will be paid. When Texas implemented a similar payment method for Medicaid, its on-site reviewers found that nursing homes’ assessments were often inflated. Despite Texas’ experience, HCFA does not currently have plans to monitor facilities’ assessments to ensure they are appropriate and accurate. Nor has it ensured that the Medicare contractors—who pay the facilities’ claims—will have timely information on patients to determine whether the rate to be paid is appropriate. We are currently studying HCFA’s and the states’ efforts to implement the Children’s Health Insurance Program and will report on the results later this year. Over the last several years, HCFA has been lax in managing critical ongoing program responsibilities, such as financial management—particularly by Medicare claims administration contractors—and oversight of nursing home compliance. For example, our work on high-risk programs such as Medicare highlighted the need for major federal financial management reforms, which the Congress initially enacted in the 1990 Chief Financial Officers Act and later expanded in the 1994 Government Management Reform Act. 
Under this legislation, the 24 major departments and agencies such as HCFA must now produce annual financial statements subject to independent audit, beginning with those for fiscal year 1996. Since 1996, in conjunction with its audit of HCFA’s financial statements, the HHS Office of Inspector General (OIG) has estimated the error rate for improper payments made by Medicare claims administration contractors. For fiscal year 1998, the OIG estimated that about 7 percent of Medicare fee-for-service payments for claims—$12.6 billion—did not comply with Medicare laws and regulations. This represents an improvement over fiscal year 1997, when the OIG estimated that Medicare contractors made $20.3 billion in improper payments—about 11 percent of all claims. However, the difference from 1997 to 1998 was almost entirely attributable to better documentation provided to the auditors, rather than to a substantive reduction in improper payments in categories such as “lack of medical necessity,” “incorrect coding,” and “noncovered services.” These problems are compounded by HCFA’s lack of an integrated accounting system that can capture financial information at the contractor level. Moreover, the OIG found indications that HCFA’s central and regional office oversight of operational and financial management controls was inadequate to ensure that contractor-provided financial information was consistent and accurate. Similarly, the OIG found that security for contractor and HCFA information systems was inadequate, imperiling the confidentiality of Medicare beneficiary personal and medical data. While HCFA had corrected some weaknesses found during the audit for fiscal year 1996, it was still possible for an unauthorized user to gain access to HCFA’s database and modify sensitive beneficiary files. HCFA has recognized the need to protect the security of its information systems and, starting in 1997, began revising security policy and guidance, and implementing corrective action plans. 
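As a rough consistency check on the OIG figures cited above, the rounded improper-payment amounts and error rates imply a fee-for-service payment base of roughly $180 billion in both years. The implied totals in the sketch below are back-of-the-envelope derivations from the rounded rates, not OIG data.

```python
# Back-of-the-envelope arithmetic for the OIG improper-payment estimates
# cited above. Implied totals are illustrative, derived from rounded rates.

improper_1998 = 12.6  # $ billions, about 7 percent of payments
rate_1998 = 0.07
improper_1997 = 20.3  # $ billions, about 11 percent of payments
rate_1997 = 0.11

implied_total_1998 = improper_1998 / rate_1998  # roughly $180 billion
implied_total_1997 = improper_1997 / rate_1997  # roughly $185 billion

# The payment base was roughly flat, so the drop from $20.3 billion to
# $12.6 billion reflects a lower measured error rate, not a smaller program.
print(round(implied_total_1998), round(implied_total_1997))
```

This roughly constant base is consistent with the point above that the year-to-year improvement reflects the error rate (here, largely better documentation) rather than a change in the volume of claims paid.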
Because of the need to focus on Y2K modifications, however, HCFA probably will not address many of these weaknesses in the near term. Medicaid financial management also is in need of reform. The OIG’s 1997 audit revealed that HCFA had limited information on the federal portion of Medicaid accounts receivable and payable. In fiscal year 1997, HCFA relied on survey information from the states to estimate the amounts to record in the financial statements, and because the survey data were so limited, the OIG could not verify their accuracy. In addition, the audit noted that HCFA regional offices were not providing sufficient oversight of states’ Medicaid claims processing and reporting, including states’ efforts to deter fraud and abuse and collect overpayments. HCFA has announced steps to strengthen its oversight of nursing homes. It has also added requirements that home health agencies demonstrate experience and expertise in home care by serving a minimum number of patients before initially certifying them as Medicare providers. However, these steps may not go far enough to protect vulnerable beneficiaries. We are now reviewing HCFA’s oversight of state nursing home complaint investigations and inspections and will report to the Congress on these issues this year. Because its mission has been rapidly growing and changing, HCFA officials have worked hard to strengthen the agency’s management capabilities. Despite these efforts, problems remain that hamper effective agency operations. While HCFA has developed a new focus on planning, including publishing a strategic plan, it does not require units to develop detailed plans to carry out day-to-day operations. The agency has completed its reorganization, but the resulting structure has contributed to various communication and coordination problems. Last year, HCFA lacked sufficient trained staff with the skills to effectively implement its top priorities. 
It hired more staff with needed skills in 1998, but it has not completed a long-term strategic approach to meet its future human resource needs. HCFA staff and managers are also concerned that its performance and award systems are not well linked to accomplishing its mission and that many managers are overburdened and lack managerial skills. These types of problems are found in other agencies, but HCFA still must be diligent in addressing them. The President’s budget for fiscal year 2000 proposes a reform initiative for HCFA that is designed to increase its flexibility in the human resources area and to increase the agency’s accountability. In December 1998, HCFA published its strategic plan, which focused on the organization as a whole and communicated the agency’s vision, mission, and broad approaches to realizing that vision. This plan was developed to help HHS respond to requirements in the Government Performance and Results Act of 1993. In its strategic plan, HCFA clearly states that serving beneficiaries is its primary mission and, in doing so, the agency must be a prudent purchaser of health care. In addition to its overarching strategic plan, HCFA has also produced draft strategic plans for such significant areas as information technology and program integrity. Tactical plans, which identify specific activities, desired outcomes, time frames, and assignments of responsibility for task completion, are critical. Last year, we reported that HCFA was not planning its activities on a tactical level. Although tactical planning has been used in some specific instances during the past year, such as to help track implementation of BBA requirements, HCFA has still not institutionalized this level of planning in its day-to-day operations. In our interviews and focus groups, a pervasive theme was the need to work in a crisis mode, made worse by a lack of planning. 
For example, a staff member stated that she was being pulled from one “hot project” to another—which caused her to lose efficiency because she barely managed to master one subject before she was tasked with another. A manager told us that since the reorganization, little planning has taken place in his division, making even simple tasks harder. He said, as an example, that the divisions did not know how much travel money was available until the middle of the fiscal year and that routine trips had to be written up as emergencies to get approval. We heard similar concerns from managers and staff working on data systems and coverage policy. HCFA’s July 1997 reorganization established a totally new structure designed to better focus the agency as a “beneficiary-centered purchaser” of health care. The reorganization created new centers that were intended to respond directly to HCFA’s customers—the Center for Beneficiary Services, the Center for Health Plans and Providers, and the Center for Medicaid and State Operations—and to provide additional resources to Medicare’s growing managed care program. In our January 1998 testimony, we noted that the agency’s staff had not yet moved to the actual location of their new organizational units, which tended to exacerbate problems with internal communication and coordination. Almost a year after the reorganization, between June and August 1998, HCFA completed the physical relocations, placing staff within their new organizational units. Relocation was a major undertaking because HCFA had made dramatic shifts of groups and people. An estimated 80 percent of HCFA central office staff, along with their computers, files, and shared office equipment, were relocated during the move. Managers told us that the physical move was implemented well, minimized work disruptions, and enhanced HCFA’s operational efficiencies. The move also placed related staff near one another within the new centers to enable them to work more closely together. 
We found that HCFA is still in the process of learning how to make its new organization work. Several managers said that they believe the quality of decision-making will be enhanced because input from many individuals and groups is required. But other managers and staff reported substantial internal and external communication problems as a result of the reorganization. For example, they said that the organization’s decision-making process has become slow and cumbersome because it is more difficult to identify the key decisionmakers and find meeting times that can fit their busy schedules. We also were told that even identifying appropriate points of contact is sometimes difficult because new organizational titles are confusing. Finally, some managers and staff were concerned that when accountability for issues was shared by more than one center or office, tasks could “fall through the cracks” unless responsibilities were more clearly defined. Agency officials recognize that coordination is a problem and that there is sometimes a lack of accountability for decision-making. In response, they indicated that they are establishing teams on priority projects where key participants are identified and accountability for project completion is placed on one person. HCFA’s reorganization and emerging role as a health care purchaser and beneficiary advocate have also led to changes in the way HCFA communicates with those outside the agency. Some changes, such as those brought on by the Medicare+Choice program and the availability of Medicare and Medicaid information on the Internet, have increased interaction with providers, provider groups, and beneficiaries, according to several HCFA employees. Some staff we spoke with expressed concern about this increased workload and their inability to readily refer people to appropriate HCFA entities because the new organizational lines of responsibility are still unclear. 
Also, we found that although the Internet means that HCFA is “open 24 hours a day” and can communicate differently through this new medium, neither senior staff nor agency plans have fully addressed the impact of the Internet on HCFA’s workload and how managers might need to reallocate responsibilities. Over the past year, HCFA has hired new staff, including managed care and other specialists. Senior agency officials told us that the new staff, with skills in areas such as managed care, private insurance, and market research, should help HCFA meet its new and growing responsibilities. We believe that HCFA’s focus on attracting new employees needs to be long term and continuous because it will continue to lose staff whose expertise must be replaced or supplemented. Over the next 5 years, almost a quarter of HCFA’s staff—who make up a large part of the agency’s management and technical expertise—will be eligible to retire. In addition, managers say HCFA will need staff with “real world” expertise in private industry, including those who know how to purchase care competitively. While HCFA has not fully assessed its long-term human resource needs, senior officials told us that the agency is taking initial steps toward developing a long-term plan for investing in its human resources. HCFA currently has a draft human resources plan that covers the years 1999 through 2003. HCFA managers and staff discussed a variety of factors that hamper agency operations and limit effective management. Although we believe that HCFA is not unique in experiencing these problems, mitigating them could improve agency performance. These factors include a pass/fail performance rating system where virtually all staff pass, an awards program that does not necessarily reward superior performance, and flexible work schedules and locations that limit staff availability. 
Participants in our focus groups believed that HCFA’s performance appraisal system for nonexecutive staff does not allow managers to meaningfully assess and report on staff performance because virtually everyone receives a passing grade. Staff believed that the pass/fail system is demoralizing to hard workers because no adverse action is taken for unsatisfactory performance. Similarly, according to managers and staff, the performance appraisal system does not give staff a sense of satisfaction when they perform well because it fails to recognize outstanding efforts. Some cited the prior performance system as preferable because exceptional performers could benefit by receiving more rapid pay increases. The Administrator found that the performance appraisal system for executives was also not useful in holding managers accountable and made changes this year to better differentiate senior managers’ performance. The executive appraisal system has changed to a system with five levels of performance. Each executive manager has a performance agreement that is linked to performance goals for his or her set of responsibilities. Many managers and staff members also told us that the current awards program is not working. Although the program is intended to motivate staff, the opinions we gathered suggest that it may have just the opposite effect. Each unit establishes its own panel that makes award decisions and controls award amounts. Panels consist of an equal number of union-appointed and management-appointed representatives. Each panel sets its own criteria for making awards and determining the portion of its awards budget to give to managers for “on-the-spot” awards, which are awarded directly to staff for performance on specific projects throughout the year. 
Managers told us that they would like to be able to distinguish among the accomplishments of staff members and reward them accordingly, but both managers and staff perceive the awards process as lacking equity and integrity. Any staff member can nominate another for an award, and we were told that staff members sometimes nominate themselves and friends nominate each other. Managers also told us that sometimes almost all nominees in a unit receive awards because panels find it difficult to distinguish among nominees’ performance. One manager who served as a panel member said that during the last fiscal year, about 250 employees were nominated for an award in his center—about two-thirds of all that center’s employees. He said that only five of the nominees did not get an award. Last fiscal year, panels awarded about $678,000 to about 2,200 employees in grades 1 through 15—an average of about $300 per awardee. Managers also directly awarded about $213,000 through on-the-spot awards that can range from $50 to $250. While staff were highly critical of the performance appraisal and awards processes, they approved of the flexibility to set their own work hours and work locations. HCFA’s personnel rules provide for flextime—in which employees may arrive at work at different times each day within core periods or work longer hours in a day and earn time off—and flexiplace—which allows employees to work at alternative locations. Under these rules, however, staff who work in the office only 4 days a week may be off when their managers need them to be in the office. Managers also told us that more time can be taken up with administrative matters as a result of more flexible work arrangements. They said that managing staff is more complicated, noting that planning the work, managing resources, and scheduling meetings is difficult, for instance, when all of the staff are only required to be in the office during a core period from Tuesday through Thursday—3 days a week. 
Employees need special approval to begin flexiplace, and a senior manager told us that they are now only approving about half of such applications. Some managers and staff discussed their concerns about supervisors’ span of control and the lack of adequate training. They said that they believe some managers are responsible for supervising too many employees and do not have enough time to work with people who could benefit from on-the-job training. They also stated that some managers are not skilled at managing people, which they attribute largely to HCFA’s tradition of promoting staff with excellent technical skills to the managerial level, and not rewarding them for developing their staff. Some also cited the lack of training provided to managers to improve their supervisory skills. Many managers and staff agreed that HCFA does not provide enough training opportunities to help them do their work. We were told that new staff get little orientation to the agency’s organization, programs, goals, and mission. Focus group participants added that limited training and travel funds prevented them from attending seminars and receiving training. Each HCFA staff member received an average of 8 hours of training last year. New staff, who generally were hired within the last year, averaged even fewer hours. HCFA’s senior management has identified management and other training as an area where HCFA must improve. The agency is developing a “model management initiative,” which focuses on matching a manager’s competencies with the specific skills that a manager needs for a given position. If approved by the Administrator, this model will be tested in the Office of the Chief of Operations. Then, if the initiative proves effective, it will be implemented in other parts of HCFA. HCFA is identifying better approaches to providing technical training and has doubled its training budget for next year—from about $800,000 in fiscal year 1998 to about $1.6 million in 1999. 
Under the President’s proposed reform initiative, HCFA would also strengthen its accountability to the Congress by providing biannual reports on its progress. As HCFA moves into the 21st century, its challenges will continue to become more numerous and complex. Once it has finished preparing for Y2K, HCFA must face tasks it has had to put aside or has not fully addressed. Several immediate challenges lie ahead. HCFA must finish and then refine program changes to fully realize the benefits expected from the BBA. It also needs to renovate antiquated, and streamline redundant, computer systems. Furthermore, it needs to strengthen its financial management and its efforts to preserve program integrity. Added to these responsibilities will be potential additional challenges associated with any restructuring of Medicare that follows the deliberations of the Bipartisan Commission on the Future of Medicare. Even if no major changes are introduced, HCFA’s continuing challenges are taxing—strong leadership and management will be required to meet them. More effective planning, new staff with needed skills, and better accountability could help HCFA address these challenges and better ensure quality health care for the elderly, poor, and disabled. A true measure of HCFA’s success will be its ability to maintain current momentum as it enters the 21st century. Mr. Chairman, this concludes my statement. I will be happy to answer any questions you or other Members of the Subcommittee may have. Major Management Challenges and Program Risks: Department of Health and Human Services (GAO/OCG-99-7, Jan. 1999). High-Risk Series: An Update (GAO/HR-99-1, Jan. 1999). Medicare Computer Systems: Year 2000 Challenges Put Benefits and Services in Jeopardy (GAO/AIMD-98-284, Sept. 28, 1998). California Nursing Homes: Care Problems Persist Despite Federal and State Oversight (GAO/HEHS-98-202, July 27, 1998). Balanced Budget Act: Implementation of Key Medicare Mandates Must Evolve to Fulfill Congressional Objectives (GAO/T-HEHS-98-214, July 16, 1998). 
Medicare: HCFA’s Use of Anti-Fraud-and-Abuse Funding and Authorities (GAO/HEHS-98-160, June 1, 1998). Medicare Managed Care: Information Standards Would Help Beneficiaries Make More Informed Health Plan Choices (GAO/T-HEHS-98-162, May 6, 1998). Financial Audit: 1997 Consolidated Financial Statements of the United States Government (GAO/AIMD-98-127, Mar. 31, 1998). Medicaid: Demographics of Nonenrolled Children Suggest State Outreach Strategies (GAO/HEHS-98-93, Mar. 20, 1998). Medicare: HCFA Faces Multiple Challenges to Prepare for the 21st Century (GAO/T-HEHS-98-85, Jan. 29, 1998). Medicare Home Health Agencies: Certification Process Ineffective in Excluding Problem Agencies (GAO/HEHS-98-29, Dec. 16, 1997). Medicare: Effective Implementation of New Legislation Is Key to Reducing Fraud and Abuse (GAO/HEHS-98-59R, Dec. 3, 1997). Medicare Home Health: Success of Balanced Budget Act Cost Controls Depends on Effective and Timely Implementation (GAO/T-HEHS-98-41, Oct. 29, 1997). The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are accepted, also. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office P.O. Box 37050 Washington, DC 20013 Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. General Accounting Office Washington, DC Orders may also be placed by calling (202) 512-6000 or by using fax number (202) 512-6061, or TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO discussed the Health Care Financing Administration's (HCFA) progress in: (1) addressing its most immediate priorities; and (2) strengthening its internal management to effectively discharge its major implementation and oversight responsibilities. GAO noted that: (1) HCFA is facing an unprecedented set of challenges; (2) the immediacy and resource demands associated with meeting the year 2000 computer system challenges--coupled with HCFA's late start in addressing them--have put a tremendous burden on the agency this past year and have affected the timing and quality of its work on many other projects; (3) it has also slowed efforts to improve the oversight of ongoing operations, such as financial management and Medicare fee-for-service claims administration, which desperately need attention; (4) even where HCFA has made progress--such as in implementing a number of the mandated Health Insurance Portability and Accountability Act of 1996 and the Balanced Budget Act of 1997 requirements--GAO believes that more work, and many refinements, are still needed; (5) HCFA must meet these challenges with an aging workforce; (6) HCFA has taken a number of steps internally to capitalize on its staff's strengths to deal with a rapidly changing health care marketplace and growing responsibilities; (7) for example, HCFA has developed a strategic plan that better articulates its future direction, has progressed in its customer-focused reorganization by moving staff to their new organizational units, and has hired more staff with needed skills; (8) on the other hand, in focus groups GAO conducted, HCFA managers and staff discussed issues that continue to hamper effective agency operations; (9) to further strengthen HCFA's ability to effectively manage its employees and programs, the administration has proposed new authorities for contracting and new flexibility in hiring in the President's budget for fiscal year 2000; (10) it also proposes 
new mechanisms to enhance agency accountability, with biannual reports to Congress and an advisory board to help the agency streamline internal and program management; (11) HCFA senior officials have taken concrete steps to improve agency management this year but will need to maintain the momentum over the next several years to overcome the agency's current and future challenges; and (12) this will be especially difficult in an agency that for years has been plagued by external pressures and management problems.
Mr. Chairman and Members of the Committee: We are pleased to be here today to discuss the operations of the Office of Federal Housing Enterprise Oversight (OFHEO) and the status of OFHEO’s efforts to fulfill its mission of helping to ensure the safety and soundness of the two largest housing government-sponsored enterprises: Fannie Mae and Freddie Mac (the enterprises). Congress has a long-standing concern that the safety and soundness of the enterprises be maintained so that they can continue to fulfill their public purposes while taxpayers are protected from unnecessary financial risks. Consequently, Congress passed the Federal Housing Enterprises Financial Safety and Soundness Act of 1992 (the act), which established OFHEO as an independent regulator within the Department of Housing and Urban Development (HUD). Under the act, OFHEO is authorized to help ensure the enterprises’ safety and soundness by setting capital standards, conducting examinations, and taking enforcement actions if unsafe and unsound financial or management practices are identified. As mandated in the Department of Veterans Affairs/HUD Appropriations Act of 1997, we recently issued a report on OFHEO’s implementation of its safety and soundness responsibilities since it began operations in June 1993. We concluded that OFHEO has not yet fully implemented its statutory responsibilities and faces considerable future challenges in doing so. In particular, OFHEO currently does not expect to establish final risk-based capital standards for the enterprises until 1999, even though this process was to have been completed under the act by December 1, 1994. Further, OFHEO has not fully implemented a comprehensive and timely safety and soundness enterprise examination program. 
Although Fannie Mae and Freddie Mac have been consistently profitable in recent years, we believe it is essential, given the enterprises’ outstanding financial commitments of about $1.5 trillion at year-end 1996, that OFHEO implement its safety and soundness responsibilities as quickly as feasible so that any potential long-term financial risks to taxpayers are lowered. The federal government’s creation and continued sponsorship of Fannie Mae and Freddie Mac have created the perception in the financial markets that the government may choose to provide financial assistance to them in a financial emergency, even though there is no statutory requirement to do so. Recognizing the potential financial risks the enterprises’ activities pose to taxpayers, Congress created OFHEO in 1992 as an independent safety and soundness regulator with wide authority to help ensure that the enterprises’ long-term financial security is maintained. Congress established and chartered the enterprises as government-sponsored, privately owned and operated corporations to enhance the availability of mortgage credit across the nation during both good and bad economic times. It is widely accepted that the enterprises’ activities have generated benefits to home-buyers, such as lower mortgage interest rates. Moreover, the enterprises have reduced regional disparities in mortgage interest rates and spurred the development of new technologies to facilitate the home financing process. However, the potential also exists that the federal government would choose to rescue the enterprises in a financial emergency. OFHEO officials have stated that, despite the enterprises’ consistent profitability in recent years, past financial performance does not guarantee future success. For example, during the 1990s, Fannie Mae and Freddie Mac have rapidly increased the size of their debt-financed mortgage asset portfolios. 
According to OFHEO, large holdings of debt-financed mortgage assets potentially expose Fannie Mae and Freddie Mac to increased losses resulting from fluctuations in interest rates. For fiscal year 1998, OFHEO has requested a budget of about $16 million to carry out its safety and soundness responsibilities and to perform administrative support functions. As of October 27, 1997, OFHEO had a total staff of 96 individuals consisting of full-time and temporary staff, contract employees, and detailees from bank regulatory agencies. As required by the act, OFHEO is to carry out its oversight function in part by establishing minimum capital standards. Minimum capital is computed on the basis of capital ratios specified in the act that are applied to certain on-balance-sheet and off-balance-sheet obligations. OFHEO has classified Fannie Mae and Freddie Mac as “adequately capitalized” under the minimum standard in each quarter beginning with the quarter that ended on June 30, 1993. The act also mandated that OFHEO develop a stress test to serve as the basis for more sophisticated risk-based capital standards. The purpose of a stress test is to lower taxpayer risks by simulating, in a computer model, situations where the enterprises are exposed to adverse credit and interest rate scenarios and requiring them to hold sufficient capital to withstand these scenarios for a 10-year period, plus an additional 30 percent of that amount to cover management and operations risk. Under the act, the stress test and risk-based capital standards derived from the test were to have been completed by December 1, 1994. However, as of April 1997, OFHEO’s acting director said the organization expected to issue a proposed rule implementing the stress test and risk-based capital standards by September 1998, with the final rule to be issued in 1999. The act also gave OFHEO broad authority and responsibility to examine the enterprises and requires annual on-site examinations. 
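The two capital standards described above can be illustrated with a short sketch. This is only an illustrative reading of the act’s approach, not OFHEO’s actual model: the minimum-capital ratios used (2.50 percent of on-balance-sheet assets and 0.45 percent of off-balance-sheet obligations) are those specified in the 1992 act, and the dollar amounts in the example are hypothetical.

```python
# Illustrative sketch of the two capital standards under the 1992 act.
# Ratios (2.50% on-balance-sheet, 0.45% off-balance-sheet) are from the
# act; all dollar figures below are hypothetical.

def minimum_capital(on_balance_sheet: float, off_balance_sheet: float) -> float:
    """Minimum (leverage-style) capital: fixed ratios applied to
    on-balance-sheet assets and off-balance-sheet obligations."""
    return 0.0250 * on_balance_sheet + 0.0045 * off_balance_sheet

def risk_based_capital(stress_test_capital: float) -> float:
    """Risk-based capital: the capital needed to withstand the 10-year
    adverse credit and interest rate scenarios, plus an additional
    30 percent of that amount for management and operations risk."""
    return stress_test_capital * 1.30

# Hypothetical enterprise with $350 billion on-balance-sheet and
# $500 billion off-balance-sheet (amounts in billions of dollars):
print(round(minimum_capital(350.0, 500.0), 2))   # minimum standard
print(round(risk_based_capital(9.0), 2))         # stress capital of $9B
```

Note how the two standards are independent: the minimum standard is a simple ratio test applied each quarter, while the risk-based standard depends entirely on the stress-test simulation output, which is why the delayed stress test left OFHEO with only the minimum standard to enforce.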
At such examinations, OFHEO staff, with the assistance of contractors and bank regulatory detailees, are to assess the financial condition of the enterprises and recommend improvements as necessary. OFHEO also has the authority to (1) take enforcement actions, such as cease and desist orders, against the enterprises to stop unsafe practices and (2) place an enterprise into a conservatorship when certain circumstances exist and the enterprise is unable to meet its financial obligations or is critically undercapitalized. In OFHEO’s planning process and its published documents, the organization has consistently underestimated the time necessary to complete major components of the stress test and risk-based capital standards. For example, in 1995 OFHEO estimated that the final rule would be issued in May 1997, but OFHEO now expects that the process will not be completed until 1999. We identified several reasons why OFHEO did not comply with the statutory deadline and found that OFHEO faces continuing challenges in meeting its current estimate. Thus, we believe that strong congressional oversight of the development process is necessary to help ensure that OFHEO’s plan to complete the risk-based capital standards is accomplished as quickly as feasible. OFHEO chose to develop capital standards closely related to enterprise risks by developing its own sophisticated stress test and associated financial modeling capability. We note that OFHEO’s approach has, ultimately, involved a substantial development period and commitment of resources. OFHEO encountered delays in obtaining accurate financial data from the enterprises. Beginning in 1994, OFHEO officials requested that the enterprises provide large amounts of historical and current financial data so it could do the work necessary to develop the stress test. According to OFHEO officials, the enterprises did not always provide all of the necessary data, or they provided data that may have been inaccurate. 
These problems persisted into 1996 and impeded development of the stress test, according to OFHEO officials. In response, Fannie Mae officials said that OFHEO’s data requests were burdensome and would have been less extensive if OFHEO had used a simpler approach to develop the stress test. The Fannie Mae officials said that a more simplified approach would have resulted in appropriate risk-based capital standards and could have been completed in less time than OFHEO is taking to develop its stress test. Freddie Mac officials said they have tried to assist OFHEO in developing the stress test and that inaccurate data submissions have not been responsible for the delays. OFHEO experienced significantly greater technical and managerial challenges and associated delays than initially anticipated in developing an integrated financial model. This model—which is referred to as the Financial Simulation Model and is to serve as the foundation of the stress test—is designed to simulate the behavior of the enterprises’ assets, liabilities, and off-balance-sheet obligations under adverse credit and interest rate scenarios. According to an OFHEO official, OFHEO had largely completed the model by April 1997, although some final testing and software documentation work remained. OFHEO must also determine how to issue proposed and final rules while protecting proprietary enterprise data from unauthorized disclosure. Given OFHEO’s history of consistently underestimating the time necessary to complete the stress test and risk-based capital standards, we believe congressional oversight is necessary to ensure that OFHEO completes the process as soon as possible. Accordingly, we recommended that OFHEO report periodically to Congress on the organization’s progress toward compliance with the plan. We further recommended that OFHEO inform Congress of any problems that may arise in completing the process by 1999, as well as corrective actions the organization plans to take to address such problems. 
In the absence of a stress test and risk-based capital standards, OFHEO’s primary means of helping to ensure the safety and soundness of the enterprises is its examination program. However, OFHEO has not fully implemented the detailed examination schedule and plan that it established in 1994, which limits the organization’s ability to monitor the enterprises’ financial condition. We believe that limited resources allocated to the examination office as well as staff attrition contributed to OFHEO’s inability to fully implement the 1994 plan. Beginning in 1998, OFHEO plans to restructure its examination program so that it assesses all enterprise core risks annually. OFHEO established an examination plan in September 1994 that provided for a 2-year cycle for the assessment of six “core” risks, such as interest rate and credit, facing the enterprises. Although OFHEO identified six core risks, the plan stipulated that examiners were to cover these risks in five examinations—four examinations would each cover one core risk while another examination would cover two risks. OFHEO’s examination plan was similar in substance but not in timing to risk-focused examination plans that the Office of the Comptroller of the Currency and the Federal Reserve System have established to monitor the activities of large commercial banks. As required by law, the bank regulators are to conduct full-scope examinations of large commercial banks annually. As of May 1997, OFHEO had completed or initiated examinations covering five of the six core risks facing the enterprises. However, OFHEO’s current 3- to 4-year cycle for assessing the six core risks is considerably longer than the 2-year cycle established in the plan. In addition, OFHEO has scaled back the planned coverage of its most recently completed core risk examination; the examination covered only one of four business areas. 
OFHEO’s 3- to 4-year examination cycle and limited examination coverage raise questions about the organization’s ability to fully monitor the enterprises’ financial activities and risks. In particular, under its current examination schedule, OFHEO may not be able to do another on-site examination of the enterprises’ interest rate risks until 1999 or 2000; the previous core risk examination that addressed interest rate risk was completed in 1996, and such risks may have increased since then because of larger holdings of debt-financed mortgage assets. We believe that OFHEO’s decision to commit virtually its entire staff of line examiners to each core risk examination for 1 year and the significant attrition the examination office has experienced have contributed to OFHEO’s inability to fully implement its 2-year examination cycle. OFHEO officials said that another important factor that has contributed to OFHEO’s inability to fully implement the 1994 examination plan was the amount of time that OFHEO examination staff needed to develop an understanding of the enterprises’ operations and risk management. Prior to 1993, when OFHEO began operations, the enterprises had not been subjected to an examination oversight program. OFHEO officials said that the first round of examinations has taken longer than initially anticipated in 1994 because of the amount of time necessary to obtain basic information about the enterprises’ operations and risk management practices. During the course of our audit work, OFHEO officials told us that the organization plans to reassess its examination program during 1997 and implement an annual examination cycle for all core risks by early 1998 to ensure that the enterprises’ safety and soundness is adequately monitored. The OFHEO officials also said that the reassessment is to include a review of examination office staff resources to ensure that an annual examination cycle can be implemented. 
OFHEO’s acting director also said that OFHEO may have some flexibility to increase its examination staff resources by shifting staff from its research activities as the stress test and risk-based capital standards are completed. We stated in our report that, without a reassessment and potential reallocation of resources, OFHEO may not be able to implement an annual examination cycle by early 1998, since it had not fully implemented a 2-year cycle with existing examination office resources. In fact, as of June 1997, OFHEO had not initiated important components of the 1994 plan, such as one of the core risk examinations. Thus, we recommended that OFHEO include in the reassessment an analysis of the staff resources necessary to carry out alternative examination schedules, such as 1- or 2-year cycles. Through such an analysis, OFHEO could help ensure a fuller consideration of the trade-offs between examination coverage and costs and thereby engage in a more informed decisionmaking process. Senior OFHEO officials recently told us that they are in the process of reviewing examination office resources and have decided to reallocate two positions from other offices to the examination office. Thus, the officials said, the examination office will have a total of 14 line examiner positions, rather than 12, and 19 positions overall. In addition, the director of OFHEO’s examination office told us that OFHEO plans to make greater use of bank regulatory detailees than it has in the past to help ensure the effective implementation of the annual examination cycle by early 1998. Nevertheless, given OFHEO’s past difficulties in implementing its enterprise safety and soundness examination responsibilities, we believe that OFHEO’s future efforts, including the implementation of its annual examination cycle, should be closely monitored. 
I would like to conclude by reiterating that OFHEO has a crucial role in helping to maintain the safety and soundness of Fannie Mae and Freddie Mac and thereby ensuring that the enterprises can continue to meet their housing mission without posing unnecessary risks to taxpayers. As a relatively new federal regulatory organization with complex responsibilities, OFHEO has faced considerable challenges in implementing its statutory safety and soundness requirements. Among its accomplishments, OFHEO has assembled a professional staff that appears to have considerable expertise in housing economics, mortgage finance, computer systems analysis, and financial institution examinations. Although the development process has been slow, OFHEO has developed a working financial model that it believes will serve as the basis of the stress test, and OFHEO plans to complete the final risk-based capital rule by 1999. However, given the challenges that remain in meeting this schedule, as well as OFHEO’s efforts to implement an annual examination cycle during 1998, we believe that continued strong congressional oversight of OFHEO’s progress is essential. Mr. Chairman, this concludes my statement. My colleagues and I would be pleased to respond to any questions that you may have. 
Pursuant to a congressional request, GAO discussed the operations of the Office of Federal Housing Enterprise Oversight (OFHEO) and the status of OFHEO's efforts to fulfill its mission of helping to ensure the safety and soundness of the two largest government-sponsored enterprises, Fannie Mae and Freddie Mac. GAO noted that: (1) OFHEO has not fully implemented its statutory safety and soundness responsibilities for Fannie Mae and Freddie Mac, and faces considerable future challenges in doing so; (2) OFHEO does not expect to complete a stress test and risk-based capital standards for Fannie Mae and Freddie Mac until 1999, though they were to be completed by December 1, 1994; (3) OFHEO has not fully implemented a comprehensive and timely enterprise examination program, resulting in its limited ability to lower the long-term financial risks to taxpayers associated with the enterprises' activities; (4) GAO has identified a number of reasons for OFHEO's inability to comply with the statutory deadline for completing the stress test and risk-based capital standards; (5) GAO believes that strong congressional oversight of the development process is necessary to ensure that OFHEO completes the process as quickly as feasible; (6) OFHEO has not been able to fully implement an enterprise examination schedule that it established in 1994, has taken 3 to 4 years to examine the major risks facing the enterprises, and reduced the planned coverage of the most recently completed risk examination; (7) among other factors, limited resources allocated to the examination office and staff attrition contributed to OFHEO's inability to fully implement the 1994 plan; (8) OFHEO officials said that they planned to reassess the examination cycle and implement an annual examination cycle by early 1998 to cover all enterprise risks; (9) without a reassessment of resource requirements and potentially a reallocation of resources to the examinations office, OFHEO may not be able to fully implement an 
annual examination cycle; and (10) although Fannie Mae and Freddie Mac have been consistently profitable in recent years, GAO believes it is essential, given the enterprises' outstanding financial commitments of approximately $1.5 trillion at year-end 1996, that OFHEO implement its safety and soundness responsibilities as quickly as feasible.
Unlike other U.S. school districts, DCPS, due to its location in the nation’s capital, has a unique administrative environment. Because Washington, D.C., is not located in a state, DCPS does not benefit from the oversight and assistance often provided by states. Furthermore, recent organizational changes in both the city and its school system have altered the administration of the schools. To reform the District’s school system, the Congress recently passed the District of Columbia School Reform Act of 1995, which includes requirements for counting District students. Counting student enrollment, a process involving several interconnected elements, is usually fundamental to assessing funding needs and is required of most other U.S. school districts. DCPS’ enrollment count process in school year 1996-97 was centered in the local schools and modified somewhat to address criticisms. DCPS lacks the state-level oversight that most other school districts in the country have. The state’s role in school operations is an important one. States generally provide guidance to their school districts on important issues, including student enrollment counts. The state determines the rules to be used in the enrollment count—who should be counted, by what method, and when. States also distribute state and federal funds to their districts, usually on the basis of enrollment, and they routinely audit various school district operations, including the enrollment count. The governance of DCPS had been performed for many years by an elected Board of Education. In November 1996, however, the specially appointed District of Columbia Financial Responsibility and Management Assistance Authority (Authority) declared a state of emergency in DCPS and transferred DCPS management—until June 30, 2000—to the Authority’s agents, a nine-member, specially appointed Emergency Transitional Education Board of Trustees. In so doing, the Authority transferred to the Board of Trustees “. . . 
all authority, powers, functions, duties, responsibilities . . .” of the former Board of Education (with some exceptions not relevant to this report). Meanwhile, the Authority also replaced DCPS’ superintendent with a Chief Executive Officer/ Superintendent. These changes have resulted in a shift of control from elected officials toward those appointed for a specific purpose: to reform the system. Early reform initiatives have included the administrative reorganization of DCPS and the closure of 11 schools. Even before the Authority’s takeover of DCPS, the Congress, relying on its plenary power to legislate for the District of Columbia, acted directly to reform DCPS. In April 1996, the Congress passed the District of Columbia School Reform Act of 1995, calling for the calculation of the number of students enrolled in DCPS. The law requires the District of Columbia Board of Education to do the following: calculate by October 15 of each year the number of students enrolled in the District’s public schools and students whose tuition in other schools is paid by DCPS funds, including students with special needs and nonresident students, in the following categories by grade level if applicable: kindergarten through grade 12, preschool and prekindergarten, adult students, and students in nongrade level programs; calculate the amount of fees and tuition assessed and collected from nonresident students in these categories; prepare by October 15 and submit to the Authority, the Comptroller General of the United States, appropriate congressional committees, and others an annual report summarizing those counts; and arrange with the Authority to provide for the conduct of an independent audit of the count. Within 45 days of the Authority’s receipt of the annual report—or as soon thereafter as is practicable—the Authority is to submit the independent audit report to the appropriate congressional committees. The requirement to count students is common to most other U.S. 
school districts. Forty-one of the 50 states use some type of direct student count to assess resource needs and to distribute state funds to their school districts. Enrollment counts also usually determine budgets and resource allocations to the individual schools. Three basic methods are used for counting enrollment. One method—called Enrolled Pupils (often called ENR)—counts all enrolled students on a specified day of the year. Definitions of “enrolled students” vary among districts, but they usually include elements of attendance. That is, students must be in attendance at least once during some preceding time period. ENR is used by 12 states and the District of Columbia. Another similar method is called Pupils in Average Daily Membership (often called ADM). This method, used by 22 states, calculates the average of total enrollment figures over a specified time period. A third method, called Pupils in Average Daily Attendance (often called ADA), calculates the average total daily attendance over a specified time period. Seven states use this method. Enrollment counts may occur several times throughout the school year in response to both state and local information needs and may use different counting methods depending on the purpose of the count. For example, officials in one district reported that they perform a count about 5 days after school opens, using the ENR method. The district uses this count to make final adjustments to school-level resource allocations for the current school year. On September 30, the district conducts the first of three state-required enrollment counts, also using the ENR method. The state uses this count to assess compliance with state quality standards (such as pupil/teacher ratios) and to estimate enrollment before the March 31 count. On March 31, the district conducts the second state-required count, this time using the ADM method. The state uses this count to distribute state funds. 
Finally, the district conducts the third state-required enrollment count at the end of the school year, also using the ADM method. The state uses this count as a final report on enrollment for the entire school year. In addition to fulfilling reporting requirements, the school district uses the state-required enrollment counts for local planning and monitoring purposes. States vary in their approach to monitoring and auditing their districts’ enrollment counts. Some states do little monitoring or auditing of their districts’ counts, while others stringently monitor and audit. For example, one state simply reviews district enrollment reports for the fall and spring and contacts districts if large discrepancies exist. In contrast, another state not only conducts an electronic audit of its districts’ spring and fall official enrollment counts, but also visits districts and examines a random sample of student records in detail. School district officials in this state reported that the state withdraws from its districts state funds paid for students improperly enrolled or retained on the rolls. Regardless of when the count is performed or by what method, whether audited or not, accuracy is critical. A student count may be inaccurate if it has problems in any of at least three critical areas: enrollment, residency verification, and pupil accounting. Enrollment and residency verification take place when a student enters the school system. They determine a student’s initial eligibility and therefore who may potentially be included in the count. Pupil accounting refers to the tracking of students after initial enrollment. Monitoring student attendance, status, and transfers in and out of school is part of pupil accounting, which often involves an automated student database. 
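The three counting methods described here (ENR, ADM, and ADA) differ only in what they aggregate over the count period. A minimal sketch, using hypothetical daily records and function names rather than any state's actual system, illustrates the arithmetic:

```python
# Hypothetical daily records: for each school day, the set of enrolled
# students and the subset actually in attendance that day.
days = [
    {"enrolled": {"a", "b", "c"}, "present": {"a", "b"}},
    {"enrolled": {"a", "b", "c", "d"}, "present": {"a", "c", "d"}},
    {"enrolled": {"a", "b", "d"}, "present": {"a", "b", "d"}},
]

def enr(days, count_day):
    """Enrolled Pupils: all students enrolled on one specified day."""
    return len(days[count_day]["enrolled"])

def adm(days):
    """Average Daily Membership: mean of total enrollment over the period."""
    return sum(len(d["enrolled"]) for d in days) / len(days)

def ada(days):
    """Average Daily Attendance: mean of total attendance over the period."""
    return sum(len(d["present"]) for d in days) / len(days)

print(enr(days, 1))  # 4 students enrolled on the count day
print(adm(days))     # (3 + 4 + 3) / 3, about 3.33
print(ada(days))     # (2 + 3 + 3) / 3, about 2.67
```

Note that the same daily records yield three different totals, which is one reason the choice of method matters when counts drive funding.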
The pupil accounting system provides the basis for determining continued eligibility to be counted—based upon a student’s attendance—and it helps determine which school may count a particular student in its enrollment. Critics have often charged that the District’s reported official enrollment numbers have been overstated. One reviewer asserted, for example, that results of the 1990 U.S. census suggest that the District’s school-age population in 1990 might have been as much as 13,000 less than DCPS’ official enrollment count. Subsequent reviewers, including a certified public accounting firm, the Office of the District of Columbia Auditor, and us, examined the process that DCPS used to count pupils in school years 1994-95 and 1995-96 and found flaws. These flaws included DCPS’ lack of documentation to support enrollment status and lack of sanctions if false enrollment information was provided. These reviewers also reported that DCPS lacked adequate procedures to verify residency and that the student database had errors, including duplicate records, incomplete transfers, and incorrect enrollment status. For a more detailed discussion of audit findings and recommendations, see appendix II. DCPS’ process for enrolling, verifying residency of, and tracking students remained centered in the local school in school year 1996-97, while central office staff monitored portions of the process. To respond to past criticisms, DCPS instituted some changes for school year 1996-97, including new forms, residency verification procedures, and additional preparatory counts. The actual official enrollment count was done manually, and school principals were ultimately responsible for ensuring the accuracy of their schools’ counts. DCPS’ local schools conducted all enrollment activities in school year 1996-97 for new and returning students, and the schools’ principals made all determinations about enrollment eligibility. 
Principals were allowed to enroll students who lived outside school boundaries without limitation. Principals could also temporarily enroll students who had not provided evidence of meeting eligibility criteria, including health certificates and proofs of District of Columbia residency. Upon completion of initial paperwork, the schools’ data entry clerks created an electronic record for each newly enrolled—or temporarily enrolled—student in SIS. The system maintained records for returning students from the previous school year, and the records were updated during the summer with promotion information. Similarly, withdrawals were processed during the summer, and these records were removed from the schools’ rolls. Figure 1 shows the enrollment count process for school year 1996-97. The process in school year 1996-97 incorporated the use of a new enrollment card designed to address auditors’ concerns about validating enrollment status. Students were to complete two copies of the enrollment card on the first day of attendance, and teachers were to sign and certify the cards. A completed card was to serve as proof that a child had appeared the single day required to be considered enrolled. In addition to serving as proof of enrollment status, the card was to be used to update SIS. In addition to the enrollment card, DCPS’ enrollment process for 1996-97 required all students to provide evidence of District of Columbia residency. If the student provided no evidence, DCPS’ rules allowed the student to enroll, but the student was to be assessed tuition. Tuition for a full-time program for school year 1996-97 ranged from $3,349 to $7,558, depending on grade level. Providing evidence of District of Columbia residency was required as part of revised DCPS procedures for school year 1996-97 to answer critics who charged that DCPS’ process for verifying residency was inadequate. 
In previous years, only students entering DCPS schools for the first time would have been required to submit proof of residency. A new form, the Student Residency and Data Verification Form, which had been piloted at selected schools during the previous school year, was to be completed for all students during school year 1996-97. Students were expected to have their parents or guardians complete the form and return it to the school with proofs of residency attached. Schools were to give students 3 days to complete and submit the form and proofs. Within 10 days, the school was to provide one copy of the form to the Nonresident Tuition Enforcement Branch of the Central Office along with a list of those students for whom residency had not been verified. The Nonresident Tuition Enforcement Branch was responsible for assessing and collecting tuition. In addition to enrollment and residency verification procedures, local schools also tracked student attendance, status, and transfers in school year 1996-97. Each of DCPS’ schools had online access to school data, and the schools’ data entry personnel (or enrollment clerks) were responsible for ensuring data were accurate and up to date. The MIS Branch, however, in the Central Office, managed the overall database. Classroom or homeroom teachers took attendance once a day, and data entry staff recorded it in SIS. Transfers were often done electronically, with transfer procedures initiated by the losing school and completed by the gaining school, although a manual back-up transfer process was also available. Monitoring activities for school year 1996-97 focused exclusively on overseeing the schools’ implementation of the enrollment card and on identifying nonresidents. During the early part of the school year, DCPS’ Central Office staff visited each of the schools three times to monitor enrollment cards. Eighteen members of the Central Office staff were temporarily reassigned to monitor the cards. 
Staff made the first monitoring visit within the first 2 weeks of school and focused on the extent to which schools were following the process, that is, distributing and completing enrollment cards and filing them in the appropriate locations. Staff made the interim monitoring visit before the official enrollment count and manually tallied students, comparing the enrollment cards, SIS reports, and the preliminary count documents. Staff made the final monitoring visit after the October 3 count and were to verify that names on the enrollment cards matched those on SIS homeroom rosters. Nonresident students of the District of Columbia were to be identified through local schools’ monitoring of the completed data verification forms. The Nonresident Tuition Enforcement Branch was to investigate cases the schools identified. In addition, staff from this branch were to visit the schools to survey cars transporting students to and from school, identifying all out-of-state license plates. The monitors were also to review enrollment cards and residency verification forms to determine if the forms indicated residency issues. The branch was to investigate all identified cases and assess tuition for students found not meeting the District’s residency requirements. As previously mentioned, for school year 1996-97, DCPS used the ENR method to count its students—counting all enrolled students on a single day—October 3, 1996. Students did not have to attend school on this day to be included in the count because enrollment records were counted—not actual students. DCPS defined an “enrolled student” as any student who had appeared at school at least once—and who had not withdrawn from DCPS—between the beginning of the school year on September 3, 1996, and October 3, 1996, the day of the count. DCPS’ October 3, 1996, count was conducted manually by each homeroom teacher using homeroom rosters prepared from SIS. 
School staff compiled the count, classroom by classroom, and recorded the numbers on the school’s official report. The Central Office received the schools’ reports, and schools’ data were aggregated by the Office of Educational Accountability (OEA), which prepared the official enrollment report. Each school’s principal was to ensure not only the accuracy of the school’s manual count, but also the enrollment, residency, and pupil accounting data that supported it. DCPS’ policy for the October 3, 1996, count called for unspecified rewards and sanctions to be applied on the basis of the extent to which staff maintained and reported accurate, up-to-date information. Beyond the official October count, DCPS also performed other counts throughout the year using this same process. These included official counts in December and again in February. The February count aided in computing projections for school year 1997-98. In addition to these counts, DCPS began two new preparatory counts this year. Each school took daily enrollment counts and communicated them by telephone to the Central Office every morning for the first 11 days of the school year. In addition, in September, each school completed a preliminary count using forms established for the official October 3 count. DCPS’ new student enrollment card was intended to document that students had met the 1-day attendance requirement for inclusion in the official enrollment count. Although the card may have met this requirement in some respects, it appears to have burdened both school and DCPS staff and may not offer much advantage over more traditional methods of documenting attendance, such as teachers’ attendance tracking. Perhaps even more importantly, the card alone did not ensure that enrollment records were correct before the count. The card did not address a critical problem—one revealed by prior audits—a lack of internal controls of the student database. 
This problem allowed multiple records to be created for a single student. Furthermore, DCPS continued to include in its enrollment some categories of students often excluded in official enrollment counts used for funding purposes in other states. In contrast to DCPS procedures, officials in other school districts reported using various strategies for ensuring accuracy and minimizing duplicate records. Teachers and school staff reported that DCPS’ new enrollment card was burdensome and difficult to implement. Each child, on the first day of attendance, had to complete and sign two separate copies of the card. However, many students—primarily the very young, disabled, or non-English speaking—could not complete the card themselves because they could not read or write at all or do so in English. In these cases, teachers had to complete the enrollment cards, although the students were asked to sign the cards when possible. Teachers, particularly in the primary grades, reported that completing the cards was troublesome for them, adding to their paperwork burden. Furthermore, the legitimacy of a child’s signature as a method of validation—particularly when the child cannot read or write—is questionable. In addition, the enrollment card did not contain vital enrollment information needed by the schools, such as emergency contact numbers. Consequently, it could not substitute for other enrollment forms that schools had been using. Several of the schools we visited augmented the enrollment card with other forms to obtain needed information. As a result, the busy school staff had to complete and manage multiple forms to collect and maintain basic enrollment data. Moreover, the procedures that DCPS established for completing the enrollment card were difficult to implement after the first days of the school year. 
The procedures, which required the teacher to certify the student’s signature, were designed for the initial few days of school when an entire class enrolled together and could complete the form in the teacher’s presence. No provision had been established for students arriving later, who normally enroll at the school office. School staff in the schools we visited reported that they could not sign the card for the teacher, and obtaining the teacher’s signature and certification for these late enrollments was sometimes difficult. As a result, the process sometimes failed when enrollment cards for late enrollees were not completed or signed and certified by teachers. Finally, DCPS officials reported that Central Office monitoring for implementation of the new enrollment card was labor intensive. Enrollment card monitoring efforts did not use statistical sampling. Instead, we were told, monitors visited all the schools on three separate occasions, often reviewing 100 percent of the enrollment records. To perform this task, monitoring teams were formed, without regard to their normal responsibilities, from available staff within the former OEA, according to DCPS officials. During our review, we could not confirm the extent of these enrollment card monitoring visits because DCPS could not provide us with any of the monitoring reports prepared on the basis of these visits. The procedures that DCPS used for enrolling students in school year 1996-97 allowed multiple records to be entered into SIS for a single student. When school staff entered a new record, the SIS processing procedure automatically queried the database for any matching names and dates of birth. If a match occurred—as would be the case if the student had previously enrolled in a DCPS school—SIS informed the person entering the data that a record already existed for an individual with that name and date of birth. 
SIS, however, also provided the option of overriding the system and creating a new record for the student. DCPS officials reported that some data entry personnel were choosing this override capability and creating the new record. With safeguards overridden and additional records created, two schools could have each had access to a separate record for the same individual, allowing both schools to count the student. DCPS’ mechanisms for resolving this error were limited. Although Central Office MIS personnel maintained SIS, they had no authority to correct the errors once detected. Only the local school had such authority. MIS personnel had limited influence over the schools to ensure that corrections were made quickly and accurately, according to DCPS officials. Furthermore, while duplicate record checks were done, officials told us, the checks were not done on a regular, routine schedule. In addition, individuals who had helped with data quality control in the past as well as those who had monitored attendance were moved in early 1997 to facilities without office telephones or data lines. DCPS’ practice of allowing schools to enroll, without restriction, students who live outside school attendance boundaries increased the possibility of a student’s having multiple enrollment records for school year 1996-97. Students did not have to enroll in the school serving the geographic area where they lived but could enroll in any DCPS school if the principal allowed. For example, a student could have gone first to the school serving his or her area, filled out an enrollment card, and been entered into SIS. Subsequently, the student may have gone to another school, filled out another enrollment card, and—if the person entering this record in SIS chose to override the safeguard—been entered into SIS a second time. In addition, some principals reported that schools actively sought to attract out-of-boundary students to increase their enrollment. 
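The override weakness described above can be illustrated with a brief sketch; the data structures and function names here are hypothetical and are not drawn from SIS's actual design:

```python
# Minimal sketch of the duplicate-record weakness described above.
# All names and structures are hypothetical, not SIS's actual design.

records = []  # the shared student database

def add_student(name, dob, school, override=False):
    """Create a record; warn on a (name, dob) match, but allow an override."""
    matches = [r for r in records if r["name"] == name and r["dob"] == dob]
    if matches and not override:
        # The safeguard: refuse to create a second record.
        return None
    # With override=True, a second record is created anyway, so two
    # schools can each hold, and count, a record for the same student.
    record = {"name": name, "dob": dob, "school": school}
    records.append(record)
    return record

add_student("J. Doe", "1988-05-01", "School A")
blocked = add_student("J. Doe", "1988-05-01", "School B")             # safeguard engages
dup = add_student("J. Doe", "1988-05-01", "School B", override=True)  # safeguard bypassed

print(blocked)       # None
print(len(records))  # 2 -- the same student is now countable twice
```

A safeguard that data entry staff can bypass at will, as this sketch shows, leaves the duplicate in the database for each school to count until someone detects and corrects it.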
DCPS’ official enrollment count of 78,648 included not only regular elementary and secondary students, but also other categories of students excluded from enrollment counts in other districts when the counts are used for funding purposes. For example, DCPS included in its enrollment count students identified as tuition-paying nonresidents of the District of Columbia and students above and below the mandatory age for public education in the District, including Head Start participants, prekindergarten students (age 4), preschool students (age 0 to 3), and some senior high and special education students aged 20 and older. In contrast, the three states we visited reported that they exclude from enrollment counts used for funding purposes any student who is above or below mandatory school age or who is fully funded from other sources. Furthermore, even though the District of Columbia Auditor has suggested that students unable to document their residency be excluded from the official enrollment count, whether they pay tuition or not, DCPS included these students in its enrollment count for school year 1996-97. In contrast with the DCPS process, students in the Boston and Chelsea, Massachusetts, school districts enroll at central Parent Information Centers (PIC), which are separate and independent from the schools, officials told us. Individual schools in these two districts cannot enroll new students, we were told. All enrollment activities, including assignment of all students to schools, take place at PICs. Boston’s PICs were established as a key part of the U.S. District Court’s desegregation plan to alleviate the Court’s concerns about the accuracy of Boston’s reported enrollment numbers and to satisfy the Court’s requirements for credibility and accountability in pupil enrollment, assignment, and accounting. Centralizing student enrollment at PICs has helped reduce errors, according to officials in both districts. 
For example, staff in Boston have specialized in and become knowledgeable about the process. Limiting access to the student database has also helped to reduce errors. For example, in Boston, only six people may enter data into the database. Furthermore, PICs prevent students from being enrolled at two or more schools simultaneously, reducing duplicate counting and preventing schools from inflating their enrollment. In the other four districts we visited, schools—rather than a central site—usually handle student enrollment, but they use other safeguards. To enroll, a student goes to the school serving the geographic area in which he or she lives. Out-of-boundary enrollment is not usually allowed. In addition, officials in all four of these districts reported having student database safeguards to aid enrollment accuracy. For example, all four districts have procedures and edits in their student databases that automatically block the creation of duplicate enrollment records. If an enrolling student has attended another school in the district, these procedures will not allow a new record to be created once the old record has been located. School staff, officials told us, cannot override this blocking mechanism. In addition, Prince George’s County has a procedure in its student database that automatically checks student addresses with school attendance boundaries as enrollment information is entered. If the address falls outside the enrolling school’s boundaries, the database blocks enrollment. During school year 1996-97, District of Columbia schools had features that attracted nonresidents. Elementary schools in the District had free all-day prekindergarten and kindergarten, and some elementary schools had before- and after-school programs at low cost. One school we visited had before- and after-school care for $25 per week. This program extended the school day’s hours to accommodate working parents—the program began at 7 a.m. and ended at 6 p.m. 
In addition, several high schools had highly regarded academic and artistic programs; and some high schools had athletic programs that reportedly attracted scouts from highly rated colleges. Furthermore, students could participate in competitive athletic programs until age 19 in the District, compared with age 18 in some nearby jurisdictions. DCPS established new procedures for school year 1996-97 to detect nonresidents and collect tuition from those who attended DCPS schools, but both school and Central Office staff failed to implement the new procedures completely. In addition, DCPS failed to monitor and enforce its new procedures effectively. Most of the schools we visited failed to comply with the new residency verification process. As discussed previously, all students’ parents or legal guardians had to complete a Student Residency and Data Verification Form (residency form) and provide at least two proofs of residency. Students were told that failure to provide either the completed residency form or proofs would result in an investigation of their residency, and, if appropriate, either tuition payments or exclusion from DCPS. Most of the schools we visited, however, did not obtain completed residency forms for all their students. In fact, only 2 of the 15 schools had—or reported having—residency forms for 100 percent of the student files we reviewed. In addition, schools did not collect all required proofs of residency. Students and their families presented two proofs of residency in only isolated cases, and many students submitted no proofs. In many other cases, the proofs that the schools collected did not meet the standards established by DCPS and printed on the residency form. 
Although the residency form specified acceptable proofs of residency, such as copies of deeds, rental leases, utility bills, or vehicle registrations, schools sometimes accepted proofs such as newspaper or magazine subscriptions, copies of envelopes mailed to the student’s family, stubs from paid utility bills with no name attached, and informal personal notes (rather than leases or rental agreements) from individuals from whom the family reportedly rented housing. We also found some instances in which the names or addresses on the proof did not match those on the form. School staff often complained to us about the difficulty they had trying to get students to return completed residency forms and proofs. Some acknowledged that they placed little emphasis on this effort.

Schools we visited also varied in their compliance with the requirements to report residency issues to OEA. Schools were supposed to forward copies of all students’ completed residency forms to OEA. These copies were to be attached to a list of students whose residency was considered questionable. Some schools sent copies of their student residency forms along with the list as required. Others sent the proofs with the forms. At least six schools sent no verifications of residency to the Central Office.

Some of these implementation issues may have resulted from poorly specified requirements and procedures. For example, though DCPS officials reported to us that the requirements were for at least two proofs of residency, we found no written documentation communicating to the school staff or to the students a requirement for more than one proof. DCPS officials also gave us conflicting information about the number of proofs required. At one meeting, we were told that three proofs were required; at a later meeting, that two to three were required. 
Similarly, DCPS’ guidance to the schools did not specify how the schools were to maintain their students’ completed residency documentation—or even exactly what documentation was to be maintained. Consequently, schools’ maintenance of residency documents varied considerably. For example, about one-third of the schools we visited maintained the residency forms alphabetically; the remaining schools grouped them by classroom. The schools’ disposition of the proofs of residency varied even more. Eight schools filed proofs of residency with the students’ completed residency forms; one filed the proofs in the students’ permanent (cumulative) record folder; one filed them either with the completed form or in the folder; one placed all proofs in a file drawer without annotating them to permit subsequent identification of the student to whom they belonged; two forwarded all proofs to OEA, along with copies of the completed form; and two schools had no proofs at all for the student records we reviewed. And, because procedures did not provide for the schools to document the proofs on the residency forms, schools not retaining the proofs with the forms could not demonstrate that they had adequately verified residency. Other audits of schools’ compliance with residency verification would face similar obstacles because of the schools’ inability to link student records with proof of residency.

Monitors for student residency, in general, did not report the level of school and student noncompliance that we observed in our review. For the nine schools for which we could directly assess compliance, with few exceptions, proofs of residency were missing for large portions of the student population. But most DCPS Daily Activity Reports (monitoring reports) failed to cite the missing proofs, focusing instead on students who lived with someone other than a parent or whose forms indicated a nonresident address or phone number. 
For example, in one school we visited, we determined that about one-fourth of the students (or 108) did not return a proof to the school. The DCPS monitoring report, however, identified only one student living with a grandmother and two students with nonresident addresses. In another school, we found no proofs, and staff reported that they could not get students to provide proofs. But the monitoring report showed that only two students had nonresident addresses or phone numbers. Moreover, DCPS officials did not provide monitoring reports for 3 of the 15 schools we visited, telling us that monitoring reports were prepared only for schools where issues of nonresidency had been identified on enrollment cards or residency verification forms. At one of the three schools without monitoring reports, we found no proofs of residency on file for any student.

Some of the monitors’ failure to detect and report residency problems may have resulted from poorly specified guidance. Instructions to monitors were not specific enough to guide implementation, for example, asking monitors to identify students for whom parents had not “sufficiently documented” residence. Monitoring instructions did not specify what to examine to determine whether residency was documented or what documentation was considered sufficient. Furthermore, despite recommendations of previous audits, monitors had no instructions to review the files to determine whether students had submitted a residency form. Consequently, when monitors failed to compare names on the student roster with those on completed residency forms, DCPS missed a key element in determining school and student compliance. We found forms missing for at least some of the students at 13 of the 15 schools we visited. At one school, the staff estimated that about 25 to 30 percent of the students did not return the residency forms, and, at another school, the staff could not find about one-third of the forms. 
Despite monitoring efforts and threats of sanctions, DCPS administration did not ensure that the schools completed the residency verification procedures. DCPS conducted no follow-up of schools failing to submit the office copy of the residency form. In addition, on the basis of the reports from the schools we visited, it conducted only minimal follow-up of schools failing to collect adequate proofs. Furthermore, as noted earlier, DCPS conducted no follow-up of those schools failing to collect residency forms for all students because no one in the Central Office checked to see if all forms had been received. In addition, the Central Office did not consistently apply the established sanctions to the students or their families for failing to submit forms or proofs. As noted earlier, parents and guardians were told that failure to provide proof could result in an investigation, a tuition bill, or exclusion from DCPS. On the basis of our visits to 15 schools, we assessed the degree of student noncompliance as very high. In one school alone, staff estimated that about 80 percent of the students—or about 700 students—did not comply. Yet, for all 158 schools, the Nonresident Tuition Enforcement Branch reported that, as of May 1, 1997, it had issued only 469 letters to students requesting them to submit proofs of residency, collected tuition from only 35, and excluded only 156 students from DCPS schools. Action was pending for another 136. DCPS officials in the Nonresident Tuition Enforcement Branch told us that, at the request of one of the assistant superintendents, they were focusing their enforcement action mainly on high school athletes largely because the athletic program may have been attracting nonresidents.

Like DCPS, all the other districts reported that all new students must verify residency upon enrolling. Residency verification occurs either at the individual schools or at central service centers. 
Officials in Boston and Chelsea reported that the PICs verify residency. Officials in the other four districts told us that all or most new students enroll and verify residency at the school they will attend. School staff verify residency and check to see that the student’s address falls within the attendance boundary of the school. If the parent fails to provide satisfactory proof of residency, the child is not allowed to enroll. Other districts reported relying upon the schools to verify residency for continuing students. For example, officials in Arlington, Fairfax, and Prince George’s counties told us that teachers and principals are expected to monitor continually for students’ possible relocation, and students must provide information on address changes. Schools also often make use of returned mail as a reliable data source for address changes. None of the other districts we visited requires annual residency verification for all students as DCPS does.

The foundation of the pupil accounting system—SIS—lacked adequate safeguards to ensure that students were accurately tracked when they transferred from one school to another. Furthermore, some schools did not follow attendance rules, affecting later counts and projections. These rules, if implemented, may have allowed some students who no longer attended to be included in the school’s count. The student transfer process may have allowed a single student to be enrolled in at least two schools simultaneously. During most of the school year, a student’s record could be accessed and modified only by the school in which the student was enrolled. When a student transferred, however, the losing school was to submit the student’s record to a computer procedure that allowed both the losing and gaining school to have identical copies of the student’s record. 
During this process, both schools could enter the student’s status as “active” or “inactive.” The computer procedure provided no safeguards to ensure that the student was only active at one school at a time. Until the losing school completed the computer procedure with a withdrawal code, both schools could have claimed the student as active or enrolled. The impact of this vulnerability upon the count may have been sizable. DCPS officials reported that the number of transfers between schools in the District during school year 1996-97 was well in excess of 20,000. DCPS officials in the MIS Branch, concerned with this problem, performed periodic data runs to detect cases in which students were shown as enrolled in two schools. Resolving these issues and completing the transfers, however, sometimes involved a lengthy delay. We found cases that took as long as 1 to 2 months to resolve. Local schools made all changes—the MIS Branch did not have authority to change the data—and some school staff did not use the electronic transfer procedures. Furthermore, DCPS did not specify a time limit for completing the transfer.

In addition, students could be counted at more than one school when the massive transfers took place at year end during “roll-over”—when students transferred as a group to either middle or high school. During school year 1996-97, well over 6,800 roll-overs took place; officials said the process was multistaged and generally occurred while students were still enrolled in the elementary or middle schools. SIS has a programming anomaly allowing students to have active status in both schools’ databases, according to DCPS officials. Sometimes students were legitimately enrolled in two schools simultaneously, for example, when attending a regular high school program in addition to one of the School-to-Aid-Youth (STAY) programs. 
In these cases, the database of the school with the secondary program—STAY—should have shown the student with the special status of “enrolled” and the student’s regular school should have shown his or her status as “active.” The student should have been counted only at the school where active. School clerks did not use the “enrolled” code properly, however, and, because the status code had no safeguards, the student could be counted at both schools, according to DCPS officials.

During school year 1996-97, two attendance rules directly affected student status and therefore the number of students eligible to be counted. First, schools were to reclassify as inactive, or in this case as a “no-show,” any student expected to enroll but not actually attending school at least once during the first 10 days of school. Students classified as inactive would not be included in the official enrollment count. No-shows, however, were sometimes not reclassified as inactive as required by the attendance rules. While most schools we visited appeared to be following this rule, at least one apparently had difficulty changing these students’ status to inactive. At this school, the data entry staff reported that they were having trouble maintaining student status as “inactive” for the no-shows. Some of these students were appearing on their active rolls as late as February, possibly affecting DCPS’ official count. Second, schools were required to change to inactive status those students who showed up for at least 1 day but subsequently accumulated 45 consecutive days of absences. Schools reported, however, that they only rarely changed such students’ status to inactive. School officials often told us that they did not change a student’s status unless they could obtain accurate information about the student’s whereabouts, confirming that the student should be dropped from the rolls. 
School administrators expressed reluctance to “give up on a student,” which is how they viewed changing a student’s status to inactive. Unlike the no-show rule, failing to implement the 45-day rule would not have directly affected the October count. It would have affected, however, subsequent counts and the accuracy of projections from them. The 45-day attendance rule, even if implemented, may have allowed some nonattending students to be considered active and enrolled. The rule enabled any student who reported 1 day to be considered enrolled until evidence was obtained that he or she had transferred elsewhere or until 45 days had elapsed. If a student went to another school district without notifying the school, the school would not have known to drop the student from its rolls. Consequently, even if the student appeared only on the first day of school, the 45-day time period would not have expired before the official enrollment count, allowing a student to be counted who no longer attended a DCPS school. This 45-day period is also lengthy compared with those of other nearby districts. The other school districts we visited reported shorter time periods. For example, Virginia law requires that students with 15 or more consecutive days of absence be withdrawn from school, district officials told us. Therefore, neither Arlington County nor Fairfax County counts any student with 15 or more days of consecutive absence. Neither does Boston count any student in this category. SIS provided no safeguards to ensure that the schools followed either the no-show rule or the 45-day rule. It had no feature that would allow students’ status to be automatically changed to inactive on the basis of absences. Nor could SIS identify students with 45 consecutive days of absence—it does not readily permit calculating consecutive days of absence for students throughout the school year. 
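A routine of the kind SIS lacked, one that flags students whose consecutive absences reach the 45-day threshold, can be sketched as follows. The attendance layout and field names here are illustrative assumptions, not SIS's actual design:

```python
from itertools import groupby

# Hypothetical attendance log: for each student ID, an ordered list of
# daily marks for school days ("P" present, "A" absent).
attendance = {
    "1001": ["P"] + ["A"] * 50,           # attended day 1, then stopped coming
    "1002": ["P", "A", "P"] + ["A"] * 10, # absences interrupted by attendance
}

def max_consecutive_absences(marks):
    """Return the longest run of consecutive absences in a student's record."""
    runs = (len(list(group)) for mark, group in groupby(marks) if mark == "A")
    return max(runs, default=0)

def flag_inactive(attendance, threshold=45):
    """List students whose longest absence run meets the inactivation threshold."""
    return [sid for sid, marks in attendance.items()
            if max_consecutive_absences(marks) >= threshold]

print(flag_inactive(attendance))  # prints ['1001']
```

A nightly check of this kind would have let the MIS Branch flag candidates for inactive status centrally, rather than relying on each school to notice 45 days of absences on its own.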
Consequently, quality control or management assistance from the MIS Branch on this issue was not possible.

Other districts we visited reported using essentially the same approaches for controlling errors in tracking student transfers as they use for controlling enrollment and residency verification. For example, in Boston, all student transfers take place through the PICs, where a limited number of staff may process the transfers. The schools lack the authority or ability to transfer students. In most of the other districts, officials reported that the individual schools handle student transfers. These districts rely on a variety of automatic edits and procedures in their student database systems to prevent such errors and serve as ongoing checks and balances on the schools. For example, in Arlington, Fairfax, Prince George’s, and Montgomery counties, the student database systems either do not allow a transfer to proceed unless the losing school removes the student from its rolls or automatically remove the student from the losing school as part of the transfer process. The school cannot override these safeguards. In addition, Arlington, Fairfax, and Prince George’s counties reported using two centralized oversight mechanisms for further enhancing accuracy in accounting for student transfers. First, they regularly and frequently check their student databases for duplicate student entries, using students’ names and dates of birth as well as identification numbers. These checks also help to safeguard against multiple student entries arising from other sources such as enrollments. Arlington County performs this check every 15 days; Fairfax County, every 2 weeks; and Prince George’s County, daily for transfers. Second, if these districts identify duplicates, they notify the school immediately and work with the school to resolve the situation, officials reported. 
For example, Prince George’s County reports duplicates from transfers to the schools every day; when school staff log onto the computer system in the morning, the first thing that appears is an error screen showing duplicates from transfers as well as any other errors. Prince George’s County officials also review these schools’ error screens and follow up daily. If schools do not respond, according to these officials, database management staff can readily access senior district officials to quickly resolve such problems. In addition, in Arlington, Fairfax, and Prince George’s counties, Boston, and Chelsea, the database staff may make changes to the student database. As in DCPS, all six of the districts we visited reported to us that teachers are responsible for tracking daily attendance and schools for recording attendance data in the student database. Most of the other districts reported that they also use their central student databases to track all student absences as a check on the schools’ tracking. In addition, several districts withdraw students from school after substantially fewer days of consecutive absences than DCPS. For example, in Boston and Arlington and Fairfax counties, students absent 15 days in a row are withdrawn from school. They are therefore not included in school or district enrollment counts. These students must re-enroll if they return.

The District of Columbia School Reform Act of 1995 imposed enrollment count reporting and audit requirements upon DCPS, the District of Columbia Board of Education—all of the responsibilities of which have been delegated to the Board of Trustees—and the Authority. The Reform Act requires the District’s schools to report certain kinds of information. The schools did not collect all the information required to be reported, and the official enrollment count that was released did not comply with the Reform Act’s requirements. 
In addition, the Reform Act requirements to independently audit the count have not been met. The Reform Act requires an enrollment count that includes—in addition to data historically reported by DCPS—a report of special needs and nonresident students by grade level and tuition assessed and collected. The official enrollment count report released for school year 1996-97—the first year of the new reporting requirements—failed to provide information on special needs and nonresident students as well as on tuition assessed and collected. DCPS has not provided any evidence that additional documentation was released that would include the required information. Despite October 1996 correspondence from the U.S. Department of Education referring them to the law, DCPS officials repeatedly told us that they were unfamiliar with the law or the type of information it requires.

The Reform Act also stipulates that the Authority, after receiving the annual report, is to provide for the conduct of an independent audit. The Authority, however, had delegated this function to DCPS earlier this year, according to DCPS procurement officials. With that understanding, DCPS’ Procurement Office, with technical assistance provided by the U.S. Department of Education Inspector General’s Office, issued a Request for Proposals (RFP). DCPS received proposals in response, and, in early June 1997, the Procurement Office was preparing to make an award. When we queried Authority officials at that time about their role in this effort, however, they reported that they did not know of any DCPS efforts to procure the audit and were preparing to advertise an RFP for the audit. Subsequent correspondence from the Authority indicated that the inadequacies that led to the restructuring of the public school system would make auditing the count counterproductive. In addition, the Authority’s comments in response to our draft report reiterated its view that auditing the flawed count would be counterproductive. 
In short, the Reform Act’s requirements to count and report student enrollment and audit that enrollment count have not been met.

Although DCPS has tried to respond to criticisms raised by previous audits, its efforts have overlooked larger systemic issues. Consequently, fundamental weaknesses remain in the enrollment count process that make it vulnerable to inaccuracy and weaken its credibility. For example, the lack of internal controls allows multiple records and other errors that raise questions about the accuracy of the database used as a key part of the count. Furthermore, unidentified nonresident students may be included in the count when they avoid detection because DCPS’ sanctions are not enforced. An accurate and credible enrollment count demands a process with stringent accountability and strong internal controls. Moreover, the need to correct DCPS’ problems is more critical now than ever before. Current reform initiatives have heightened public awareness of the issues and increased scrutiny of the process. Meanwhile, new budget initiatives for per pupil accounting will increase this level of scrutiny. Even without the new initiatives, an accurate enrollment count is essential if DCPS is to spend its educational dollars wisely.

Because the enrollment count will become the basis for funding DCPS, the Congress may wish to direct DCPS to report separately, in its annual reporting of the enrollment count, those students
- fully funded from other sources, such as Head Start participants or tuition-paying nonresidents;
- above and below the mandatory age for compulsory public education, such as prekindergarten students or those aged 20 and above; and
- for whom District residency cannot be confirmed.

We recommend that the DCPS Chief Executive Officer/Superintendent do the following:
- Clarify, document, and enforce the responsibilities and sanctions for employees in all three areas of the enrollment count process—enrollment, residency verification, and pupil accounting.
- Clarify, document, and enforce the residency verification requirements for students and their parents.
- Institute internal controls in the student information database, including database management practices and automatic procedures and edits to control database errors.
- Comply with the reporting requirements of the District of Columbia School Reform Act of 1995.

We also recommend that the District of Columbia Financial Responsibility and Management Assistance Authority comply with the auditing requirements of the District of Columbia School Reform Act of 1995.

DCPS’ Chief Executive Officer/Superintendent stated that DCPS concurs with the major findings and recommendations of the audit and will correct the identified weaknesses. He also acknowledged that the enrollment numbers for school year 1996-97 are subject to question for the reasons we cited—especially because the enrollment count’s credibility hinges almost entirely on the written verification provided by local administrators. No substantial checks and balances, no aggressive central monitoring, and few routine reports were in place. In addition, virtually no administrative sanctions were applied, indicating that the submitted reports were hardly reviewed. DCPS’ comments appear in appendix III.

The Authority shared DCPS’ view that many findings and recommendations in this report will help to correct what it characterized as a flawed student enrollment process. Its comments did, however, express concerns about certain aspects of our report. More specifically, the Authority was concerned that our review did not discuss the effects of the Authority’s overhaul of DCPS in November 1996. It also commented that our report did not note that the flawed student count was one of the issues prompting the Authority to change the governance structure and management of DCPS as noted in its report, Children in Crisis: A Failure of the D.C. Public Schools. 
Although we did not review the Authority’s overhaul of DCPS or the events and concerns leading to that overhaul, we have revised the report to clarify the Authority’s transfer of powers and responsibilities from the District of Columbia Board of Education to the Emergency Board of Trustees. The Authority was also concerned about the clarity of our discussion of the District of Columbia School Reform Act, suggesting that we enhance this discussion to include the portion of the Reform Act that addresses the funding of the audit. We have clarified in the report that the relevant responsibilities of the Board of Education—including that of funding the audit—were transferred to the Emergency Board of Trustees.

Finally, the Authority questioned statements made in our report about its role in preparing an RFP for an audit. Specifically, it disputes our statement that the Authority was “. . . unaware of any of DCPS’ efforts to produce the audit and were preparing to advertise an RFP for the audit.” In disputing our statement, the Authority asserts that it misrepresents a conversation between our staff and a new employee of the Authority who would have known nothing about the Authority’s contracting process. We disagree that this misrepresents our conversations with Authority staff. In preparing to meet with the Authority the first time, we spoke about the audit issues with a more senior, long-time member of the Authority’s staff, who referred us to the new staff member as the expert on District education issues. When we met with the new staff member, she stated that she had reviewed the act and had spoken with other staff who were preparing to develop an RFP. Furthermore, after meeting with this new staff member, we met a second time with other Authority staff present. At both meetings, Authority staff expressed unfamiliarity with DCPS’ efforts to produce an audit. The Authority’s comments appear in appendix IV. The U.S. 
Department of Education, in commenting on our draft report, noted that its Office of Inspector General had no role in preparing DCPS’ enrollment count for school year 1996-97 but provided some clarifications about correspondence between it and DCPS regarding an audit of the count. We have revised the report where appropriate. Education’s comments appear in appendix V. We are sending copies of this report to the U.S. Department of Education; the Office of the Chief Executive Officer/Superintendent, District of Columbia Public Schools; the District of Columbia Financial Responsibility and Management Assistance Authority; appropriate congressional committees; and other interested parties. Please call Carlotta Joyner, Director, Education and Employment Issues, at (202) 512-7014 if you or your staff have any questions about this report. Major contributors to this report are listed in appendix VI.

We designed our study to gather information about DCPS’ enrollment count process for school year 1996-97 and the process used by other selected urban school districts. To do so, we visited DCPS administrative offices, interviewed administration officials, and reviewed documents. We also visited randomly selected DCPS schools unannounced, interviewing school faculty and staff and reviewing student records. In addition, we interviewed officials in other urban school districts, officials in the U.S. Department of Education and the District of Columbia, and other experts in the field. We did our work between October 1996 and June 1997 in accordance with generally accepted government auditing standards.

We visited 15 randomly sampled DCPS elementary and secondary schools to review documents and interview faculty and staff about DCPS’ enrollment count process. We selected these schools from a list of 158 elementary and secondary schools provided to us by school district officials. 
We focused our review on regular elementary and secondary schools and excluded the two School-to-Aid-Youth (STAY) programs, two educational centers, and one elementary art center. Therefore, our final population included 153 schools. Fifteen schools were randomly selected by city quadrant (Northeast, Northwest, Southeast, and Southwest) and by level of school (elementary, middle/junior high, and senior high). Table I.1 shows the population distribution, and table I.2 shows the sample distribution for schools visited. We also interviewed officials in other selected urban school districts to gather general information about their enrollment count processes. Table I.3 shows the districts we visited with their enrollment count, counting method, and number of schools for school year 1996-97. We did not visit schools or interview school faculty or staff in these other districts.

Critics have charged that DCPS’ reported enrollment numbers are overstated. Questions raised about the credibility of DCPS’ enrollment count have led to a series of reviews and audits. This appendix discusses in detail these efforts, which varied in scope and involved the efforts of several organizations. Table II.1 summarizes these efforts. In 1995, the Grier Partnership, as part of a study commissioned by DCPS, asserted that results of the 1990 U.S. census suggested that the District’s total school-age population in 1990 might have been as much as 13,000 smaller than the enrollment DCPS reported in its official count. Grier also expressed concern about the apparent relative stability of DCPS’ official enrollment count in the face of the District’s declining resident population. Limitations of the methodology the Grier Partnership used, however, may have caused the apparent differences to be overstated. For example, Grier did not include some subgroups—preschool (Head Start), prekindergarten, and kindergarten students—that DCPS routinely includes in its official count. 
Even if these groups had been included in the estimates, using census data to estimate public school enrollment can be problematic. For example, the Census Bureau reports that estimates generated from its official files undercount some groups. From the 1990 census, the largest group undercounted was “renters.” Census estimates of pre-primary students enrolled in school are also understated because parents reporting the number of students enrolled in “regular school” often fail to include their pre-primary children. Finally, declines in residency do not necessarily mean declines in school enrollment. The Census Bureau currently projects a loss of 31,000 in the District’s population over the next 5 years, while projecting an increase in the number of school-aged children.

The first of several independent audits took place following the September 29, 1994, enrollment count. At that time, DCPS organized an internal audit and validation of the count. DCPS randomly selected a sample of students and focused on validating these students’ actual attendance in schools before the enrollment count. We were asked to observe DCPS’ internal audit effort. We questioned the reliability of the student database, finding that the database used to enroll and track students—the Student Information Membership System (SIMS)—included students who had not enrolled before the official enrollment count. We also found that transfer students were never removed from SIMS when they transferred. In addition, SIMS had other errors, was not regularly updated, and had at least 340 duplicate student records. We also criticized DCPS’ inability to identify nonresident students and the absence of procedures to validate residency. DCPS estimated that at that time approximately 2 percent of its students were probably undetected nonresidents. DCPS also estimated that this equaled more than $6 million in lost tuition revenues. 
We consequently recommended that DCPS periodically check SIMS for duplicates and errors, particularly before the official enrollment count, and update it regularly to reflect the changes in the enrollment status of DCPS students. We also recommended that DCPS develop systematic procedures at the school level to verify student residency and that schools refer names of nonresident students to DCPS administration for enforcement and collection of nonresident tuition. The DCPS Superintendent, after the October 1995 enrollment count, contracted for an independent audit and validation of the count. In addition to a 100-percent validation of the count, DCPS expected that the independent auditor would assess the accuracy of DCPS’ Student Information System (SIS) and determine if school and headquarters staff had followed DCPS’ policies and procedures. The independent auditor chosen by DCPS conducted a full validation of the enrollment count and examined SIS for duplicates and errors. The auditor failed, however, to determine if DCPS school and headquarters staff consistently implemented the policies and procedures developed by the DCPS administration. The independent auditor found several weaknesses in the October 1995 count, including problems with the way the enrollment count was taken and documented by DCPS staff; lack of residency documentation and validation; the questionable accuracy of SIS; and the lack of guidance for withdrawing students and excluding them from the schools’ rolls. For example, a new form, the Student Residency and Data Verification Form, used to document residency, was piloted in some schools during school year 1995-96. The auditor found that these forms were sent home to parents but were not always returned to the schools, and the forms were not reconciled to student enrollment reports to determine the number of missing forms. The auditor also found 550 sets of students with the same name and date of birth, that is, duplicate entries in SIS. 
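The duplicate check we and the independent auditor recommended can be illustrated with a short sketch: group records by name and date of birth, and flag any key that appears more than once. All record fields and values below are invented for illustration; the actual SIMS/SIS schema is not described here.

```python
from collections import defaultdict

# Invented student records for illustration; not the actual SIS schema.
records = [
    {"id": "A100", "name": "Pat Doe", "dob": "1984-03-02"},
    {"id": "A217", "name": "Lee Roe", "dob": "1983-11-15"},
    {"id": "A905", "name": "Pat Doe", "dob": "1984-03-02"},  # same name and date of birth
]

def duplicate_sets(records):
    """Group record IDs that share the same (name, date of birth) key."""
    groups = defaultdict(list)
    for r in records:
        groups[(r["name"], r["dob"])].append(r["id"])
    # Keys appearing more than once are "sets" of duplicates,
    # counted the same way the auditor counted its 550 sets.
    return {key: ids for key, ids in groups.items() if len(ids) > 1}

print(duplicate_sets(records))  # one duplicate set: ('Pat Doe', '1984-03-02')
```

Run periodically, particularly before the official enrollment count, such a check would surface candidate duplicates for manual review rather than deleting records automatically.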
In addition, the auditor criticized the time lapse—about 4 months—from the October 5, 1995, enrollment count to the audit. This meant that the auditor could not validate the enrollment of some students—students who were no longer in school at the time of the audit and for whom the school could provide no documentation demonstrating attendance before the count. To remedy the problem with duplicate database entries, the auditor recommended that DCPS periodically search the database for duplicates and errors before the enrollment count. Because of differences found in SIS and the manually prepared enrollment count report, the auditor also recommended that these two data sources be reconciled periodically to help update SIS. Regarding timing of the audit, the auditor recommended that the audit of the official enrollment count take place closer to the date of the count. And, to facilitate future audits, the auditor suggested that documentation exist to support a student’s attendance in school before the enrollment count. The independent auditor also suggested that after an enrollment count is taken, the staff responsible for monitoring attendance problems have the opportunity to review the enrollment count so they can remove from the count those students who have not attended at least 1 day of school or who have withdrawn from DCPS. The District of Columbia Auditor, in its audit of the October 5, 1995, enrollment count, found that DCPS needed significantly improved procedures for student enrollment counts to ensure more reliable and valid counts. The Auditor’s office expressed concerns about the security and reliability of SIS, the absence of any penalty for providing false enrollment information, and the lack of oversight or controls to ensure the accuracy of the information reported on the enrollment count. 
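The periodic reconciliation of SIS against the manually prepared enrollment count report that the auditor recommended amounts to comparing two rosters and examining the differences. A minimal sketch, with invented student IDs:

```python
# Invented student IDs; a real reconciliation would key on SIS identifiers.
sis_roster = {"S001", "S002", "S003", "S004"}   # students listed in SIS
manual_report = {"S002", "S003", "S005"}        # students on the manual count report

only_in_sis = sorted(sis_roster - manual_report)     # possibly withdrawn or transferred
only_in_manual = sorted(manual_report - sis_roster)  # possibly never entered into SIS
confirmed = sorted(sis_roster & manual_report)       # counted in both sources

print(only_in_sis)     # ['S001', 'S004']
print(only_in_manual)  # ['S005']
print(confirmed)       # ['S002', 'S003']
```

Each discrepancy would then be resolved against school-level documentation before adjusting SIS.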
In addition, the Auditor found that SIS was not updated regularly to reflect changes in the enrollment status of students, particularly before the official enrollment count. The Auditor also discussed the weak controls in place to detect nonresidency and the weak procedures to collect nonresident tuition. The Auditor found that DCPS did not maintain records on the number of Student Residence and Data Verification Forms completed and returned by students’ parents, and it did not test the information on these forms or the documents provided to support the forms. As a result, the Auditor reported that according to the DCPS Nonresident Tuition Enforcement Branch estimates, about 4,000 to 6,000 DCPS students were nonresidents but did not pay nonresident tuition. Consequently, the Auditor recommended that each local school periodically reconcile SIS-generated reports with the attendance records it maintains. This would allow for adjustments to SIS to include those students who have physically presented themselves in class and to remove those who have not presented themselves or who have withdrawn or transferred. In addition, the Auditor suggested that unless students could document their residency with acceptable proof, they should be excluded from the official enrollment count. Furthermore, the Auditor suggested that those nonresidents who pay tuition be excluded from the enrollment count. In addition to those named above, the following individuals made important contributions to this report: Christine McGagh led numerous site visits, reviewed DCPS’ enrollment count process, and cowrote portions of this report; James W. Hansbury, Jr., performed numerous site visits, reviewed prior audit reports, and summarized those audits. Wayne Dow, Edward Tuchman, and Deborah Edwards assisted with the visits to the schools; Sylvia Shanks and Robert Crystal provided legal assistance; and Liz Williams and Ann McDermott assisted with report preparation. 
Pursuant to a congressional request, GAO examined the enrollment count process that the District of Columbia Public Schools (DCPS) used in school year 1996-97, focusing on: (1) whether the process appeared sufficient to produce an accurate count; (2) enrollment count processes used by some other urban school systems; and (3) the role of the Department of Education's Inspector General in preparing DCPS' official enrollment count for school year 1996-97. GAO noted that: (1) even though DCPS changed parts of its enrollment count process in school year 1996-97 to address criticisms, the process remains flawed; (2) some of these changes increased complexity and work effort but did little to improve the count's credibility; (3) errors remained in the Student Information System (SIS), but DCPS had only limited mechanisms for correcting these errors; (4) problems also persisted in the critical area of residency verification; (5) in school year 1996-97, schools did not always verify student residency as required by DCPS' own procedures; (6) proofs of residency, when actually obtained, often fell short of DCPS' standards; (7) Central Office staff did not consistently track failures to verify residency; (8) school staff and parents rarely suffered sanctions for failure to comply with the residency verification requirements; (9) the pupil accounting system failed to adequately track students; (10) SIS allowed more than one school to count a single student when the student transferred from one school to another; (11) schools did not always follow attendance rules, and SIS lacked the capability to track implementation of the rules; (12) some attendance rules, if implemented, could have allowed counting of nonattending students; (13) other school districts report that they use several approaches to control errors and to increase the accuracy of their enrollment counts; (14) these include using centralized enrollment and pupil accounting centers and a variety of automated student 
information system edits and procedures designed to prevent or disallow pupil accounting errors before they occur; (15) the recently enacted District of Columbia School Reform Act of 1995 requires the enrollment count process to produce enrollment numbers for nonresidents and students with special needs; (16) DCPS (acting on behalf of the District of Columbia Board of Education) and the District of Columbia Financial Responsibility and Management Assistance Authority are not in compliance with requirements of this new law; (17) the Department of Education helped DCPS develop its request for proposals for the independent audit of the enrollment count for school year 1996-97, but it had no role in preparing DCPS' official enrollment count for school year 1996-97; and (18) the Authority subsequently decided, however, that auditing the count for school year 1996-97 would be counterproductive.
We found that several states have divested or frozen assets primarily related to Sudan and that the value of U.S. investment companies’ Sudan-related asset holdings has declined considerably since March 2007. Our survey responses show that state fund managers have divested or frozen about $3.5 billion in assets primarily related to Sudan (see table 1). Specifically, fund managers from 23 of the states responding to our survey reported that, from 2006 to January 2010, they divested or froze about $3.5 billion in assets held in 67 operating companies they identified as related either to Sudan specifically or to a larger category of divestment targets, such as state sponsors of terrorism. All of the states that reported having divested or frozen Sudan-related assets had laws or policies regarding their Sudan-related assets, and the state fund managers who responded to our survey cited compliance with these laws and policies as their primary reason for divestment. Thirty-five U.S. states have enacted legislation, adopted policies, or both, affecting their Sudan-related investments. These 35 states did so often out of concern for the genocide in Darfur, as well as some concerns about terrorism. Their laws and policies vary in the specificity with which they address the sale and purchase of Sudan-related assets. For example, most states with laws and policies requiring divestment also prohibit or restrict future investments in Sudan-related companies. However, some laws and policies only mention prohibiting future investments but do not require divestment of Sudan-related investments held prior to enactment of the measures. In addition to divestment, many state laws and policies also mandate or encourage engagement—identifying companies and leveraging power as a shareholder or potential shareholder in an effort to change the investment or operating behavior of that company. Like the states, U.S.-based investment companies have sold Sudan-related shares. 
Specifically, our analysis shows that the value of U.S. holdings in six key foreign companies with Sudan-related business operations fell from $14.4 billion at the end of March 2007 to $5.9 billion at the end of December 2009, a decline of nearly 60 percent. This decline cannot be accounted for solely by changes in share price, indicating that U.S. investors, on net, chose to sell shares of these companies. Based on a price index weighted to the U.S. portfolio of Sudan-related equities, prices rose by roughly 7 percent from March 2007 to December 2009, while equity holdings fell by nearly 60 percent (see fig. 1). This suggests that net selling of Sudan-related equities explains the majority of the decline in U.S. holdings. It is not certain if this selling is related to conditions specific to Sudan or represents a more general reallocation of assets by U.S. investors. Nevertheless, some evidence suggests that Sudan-specific factors may have influenced investors’ decisions to sell. Specifically, from December 2007 to December 2008, U.S. holdings in Sudan-related equities declined as a percentage of foreign oil and gas equity holdings and as a percentage of all foreign equity holdings. Investors said they weighed various factors in their decisions regarding Sudan-related assets. Most commonly, investors stated that they bought and sold Sudan-related assets for normal business reasons, such as maximizing shareholder value consistent with the guidelines in each fund’s prospectus, as well as in response to specific client instructions. Each of the investment companies we interviewed issued a corporate statement regarding Sudan-related investing, and these corporate statements reflect a variety of investor perspectives. 
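The price-versus-quantity reasoning above can be checked back-of-the-envelope using the figures reported in the text (the roughly 7 percent rise in the weighted price index and the fall in holdings from $14.4 billion to $5.9 billion). This is only an illustrative decomposition, not the weighted-index calculation performed for the report:

```python
value_start, value_end = 14.4, 5.9  # U.S. holdings, $ billions (Mar 2007, Dec 2009)
price_ratio = 1.07                  # weighted price index rose ~7 percent

value_ratio = value_end / value_start       # ~0.41, i.e. a ~59 percent decline in value
quantity_ratio = value_ratio / price_ratio  # shares held at end relative to start

print(f"decline in value: {1 - value_ratio:.0%}")                   # 59%
print(f"implied decline in shares held: {1 - quantity_ratio:.0%}")  # 62%
```

Because prices rose while value fell, the implied reduction in shares held exceeds the decline in value, consistent with net selling explaining the majority of the drop.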
For example, one firm’s statement indicated that it would ensure that its funds did not invest in companies materially involved in Sudan, while another’s explained that it would remain invested in these companies in order to actively oppose their practices that it did not condone. We found that U.S. investors have often considered three factors when determining whether and how to divest from companies tied to Sudan: fiduciary responsibility, the difficulty identifying operating companies with ties to Sudan, and the possible effects of divestment on operating companies and the Sudanese people. Both state fund managers and private investment companies we contacted told us that they consider whether a decision to divest Sudan-related assets is consistent with fiduciary responsibility—generally the duty to act solely and prudently in the best interests of the client. Representatives from organizations that advocate for the interests of state fund managers told us that fiduciary duty could be a disincentive to divesting, depending on how each individual state’s law is written. For instance, they expressed concerns that if the laws place emphasis on maximizing returns first and on divesting as a second priority, then fiduciary responsibility can be a disincentive to divesting. While some states make no explicit mention of fiduciary responsibility in their divestment policies and laws, some state constitutions emphasize its priority above all other responsibilities. Many state laws allow fund managers to stop divesting or to reinvest if there is a drop in the fund’s value. In addition, while most of the 35 states’ Sudan-related measures generally require divestment of Sudan-related assets consistent with the investing authority’s fiduciary responsibilities, laws and policies in six states include clauses explicitly stating that the investing authority should only divest if doing so will not constitute a breach of fiduciary trust. 
Our survey results demonstrate that state fund managers, when expressing concerns about fiduciary responsibility, focused on the impact that divestment might have on a fund’s returns and administrative costs. Specifically, 17 of the 29 fund managers (or 59 percent) who had divested or frozen their Sudan-related assets, or planned to do so, said they were concerned to a moderate or large extent that it would be difficult to divest while ensuring that fiduciary trust requirements were not breached, and their offices or states were not made vulnerable to lawsuits. This same concern was also cited as a moderate to large concern for 25 of the 41 (or 61 percent) fund managers who did not divest. Survey results also showed concern among state fund managers, regardless of whether they divested, regarding the financial risk of divesting. Specifically, 20 of the 29 managers (or 69 percent) who divested or planned to divest and 18 of the 41 (or 44 percent) who did not divest were concerned to a large or moderate extent that divestment could cause their funds to incur high transaction costs, earn reduced returns on investment, or both. Private investment companies expressed differing perspectives on whether divesting from Sudan is consistent with their fiduciary responsibilities. According to investment companies whose primary goal is maximizing returns, ceasing to invest in companies with Sudan-related operations based on criteria other than financial merit is inconsistent with their fiduciary responsibilities, unless their clients established these restrictions. Some of these investors stated that limiting the number of investment opportunities based on nonfinancial criteria can result in lower investment returns. 
Other investment companies, particularly those identifying themselves as socially responsible, maintain that divesting from Sudan based on nonfinancial criteria is consistent with fiduciary responsibility, as long as alternative equities selected can compete on the basis of financial criteria. For these investment companies, creating financially viable investment options that respond to social concerns, such as genocide or the environment, is the primary goal. These firms expressed confidence that taking nonfinancial factors into account results in an investment product that is competitive with other investments. As of May 2010, two companies that sold their Sudan-related assets had relied upon the safe harbor provision in SADA. Most companies told us that the provision was not necessary to their decision-making regarding Sudan-related assets. Investors considering whether and how to divest from companies with ties to Sudan have faced difficulties identifying these companies. SADA requires that, before divesting from Sudan-related companies, responsible entities must use credible, publicly available information to identify which companies have prohibited business operations related to Sudan. Nongovernmental organizations and private companies have sought to create and, in some cases, sell their lists of operating companies with business ties to Sudan to the public. Our survey results indicate that state fund managers have relied heavily on these sources of information. However, our analysis of available lists indicates that they differ significantly from one another. We compared three lists of companies with business ties to Sudan and found that, of the over 250 companies identified on one or more of these lists, only 15 appeared on all three. 
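The list comparison described above is essentially a set intersection: a company "appears on all three lists" only if it is in every list, while the full universe of named companies is the union. A sketch with invented stand-in names (the real lists name actual operating companies):

```python
# Invented stand-in names, not companies from the actual published lists.
list_a = {"Alpha Oil", "Beta Mining", "Gamma Power", "Delta Telecom"}
list_b = {"Alpha Oil", "Beta Mining", "Epsilon Agro"}
list_c = {"Alpha Oil", "Beta Mining", "Gamma Power"}

on_all_three = list_a & list_b & list_c  # companies every list agrees on
on_any_list = list_a | list_b | list_c   # union of all identified companies

print(sorted(on_all_three))  # ['Alpha Oil', 'Beta Mining']
print(len(on_any_list))      # 5
```

In practice the hard part is not the set arithmetic but normalizing company names and subsidiaries so that the same firm matches across lists.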
Representatives from the organizations that created these lists told us that obtaining and evaluating information on operating companies with business ties to Sudan is difficult, and that information that comes directly from companies is particularly useful. For example, they would consider an SEC disclosure filing to be a reliable source of information. However, the federal securities laws do not require companies specifically to disclose operations in countries designated as state sponsors of terrorism. While SEC regulations require disclosure of such operations if they constitute “material information,” the meaning of “material information” is not explicitly defined by law and companies are ultimately responsible for the accuracy and adequacy of the information they disclose to investors. The SEC’s Office of Global Security Risk, created in 2004, monitors whether the documents public companies file with the SEC include disclosure of material information regarding global security risk-related issues. According to officials from this office, they focus their reviews on companies with business activities in U.S.-designated state sponsors of terrorism, including Sudan. This office has suggested to companies that any operations they have in state sponsors of terrorism might be considered material because divestment campaigns and legislation mandating divestment from Sudan indicate that investors would consider this information important in making investment decisions. However, in their correspondence with the SEC, companies have raised concerns about these instructions. For example, one energy company wrote that its business dealings in state sponsors of terrorism did not need to be further disclosed in annual reports because, while these dealings may have been of interest to certain investors, they were not material to the general investing public. 
The Office of Global Security Risk provides limited monitoring of companies that conduct business in the four sectors covered under SADA. For example, SEC officials told us that the office has corresponded with 59 of the 74 companies that file periodic reports with the SEC and that it has identified as having ties to Sudan. However, many of these companies operate in industries not covered under SADA, such as food services, telecommunications, and pharmaceuticals. In addition, our analysis shows that the office has only corresponded with 5 of the 15 companies that are identified in all three of the lists we analyzed and that file with the SEC. All 15 of these companies operate in the four economic sectors identified in SADA. Furthermore, the office has not always followed up with companies concerning their correspondence. For example, in December 2005, the Office of Global Security Risk asked an oil company that was reported to have possible ties to Sudan to describe all current, historical, and anticipated operations in, and contacts with, Sudan, including through subsidiaries, controlling shareholders, affiliates, joint ventures, and other direct and indirect arrangements. The company did not provide a response to the request. Four years later, the office reiterated its question to the company. SEC officials also told us that, in cases where the office determines that its comment process has not resulted in full disclosure of material operations by a company, it will refer the company to the SEC’s Division of Enforcement for possible investigation. According to these officials, the Office of Global Security Risk has referred one company to this division since the office was created in 2004. The SEC also has the discretionary authority to adopt a specific disclosure requirement for companies that trade on U.S. exchanges (such as requiring disclosure of any operations in state sponsors of terrorism). 
Although the SEC has not done so, it could exercise this authority by issuing an interim rule for comment and a final rule in the Federal Register. However, the agency has indicated that it is committed to the practice of relying on companies to ensure that their disclosures contain all material information about their operations in these countries. Some companies that have ceased operating in Sudan warned of a negative effect on the Sudanese people. For example, one company we spoke with told us that when it decided to leave Sudan and sell its stake in a project to another company, that company refused to sign the sales agreement until language conferring responsibility for continuing the seller’s humanitarian programs was removed from the agreement. Another company that left the Sudanese market stated that it had been involved in a nationwide anti-AIDS program in Sudan, which it could no longer participate in after leaving the country. Because of concerns about these possible negative effects, some investors have shifted their approach toward engaging with companies in order to leverage their resources as shareholders to influence companies’ behavior and promote efforts aimed at improving the lives of the Sudanese people. Some advocacy groups that were originally at the forefront of the divestment campaign also have shifted their focus toward engagement. One advocacy group we spoke with stated that it believed that divestment was too blunt an approach because it targeted a wide array of companies, some of which may not have had material operations in Sudan. Instead, this group argued for an approach that targets companies involved in the industries that are most lucrative for the Sudanese government and that provides alternatives to divestment, such as engaging companies to try to influence their behavior. Like advocacy groups, some U.S. 
investment companies have also embraced the idea of engagement, and increasingly view divestment as a last resort because engagement allows companies to continue operating and provides positive incentives for them to use their resources to help the Sudanese people. U.S. states have also endorsed engagement as a viable alternative to divestment, with a few states identifying divestment only as a last resort. Nineteen of the 25 states whose laws or policies require divestment also encourage or require engagement. The eight foreign operating companies we spoke with generally agreed that, for them, engagement is preferable to divestment because it allows them to continue operating in Sudan and to discuss possible ways to improve the situation there. These companies consistently told us that they believe their business operations positively impact the Sudanese people. For example, a mining company told us that it built seven schools and a medical clinic, brought water and power supplies to the area around the mine, and started agricultural training programs for the local population. This company said it also convinced its business partners from the Sudanese government to contribute some of their profits from the mine to support a humanitarian organization operating in Darfur. Almost all of the companies we spoke with said they donated to or became directly involved in humanitarian projects as a direct result of their engagement with various advocacy groups and shareholders. A few of the companies we spoke with decided to limit their business activities in Sudan as a result of engagement processes. For example, one company we spoke with committed to not pursue any new business in Sudan until the situation in Darfur changes and United Nations peacekeepers are allowed in the country. The company indicated that this commitment sent a strong signal to the government of Sudan, which depends on the company to explore and identify natural resource deposits. 
Our analysis indicates that the U.S. government has complied with SADA’s federal contract prohibition. Specifically, we found no evidence to suggest that the U.S. government has awarded contracts to companies identified as having prohibited business operations in Sudan or has violated the Federal Acquisition Regulation (FAR) rules implementing section 6 of SADA (Prohibition on United States Government Contracts). SADA seeks to prohibit the U.S. government from contracting with companies that conduct certain business operations in Sudan. To that end, section 6 of the act requires the heads of federal agencies to ensure that each contract for the procurement of goods or services includes a clause requiring the contractor to certify that it does not conduct prohibited business operations in Sudan in the four key economic sectors. Based on our analysis of one of the most widely used lists of companies with prohibited business ties to Sudan, we found that only 1 of 88 companies identified in the list has received federal contracts since the FAR requirements implementing SADA took effect in June 2008. However, the contract certification provision was not required for these particular contracts because they were purchase orders under simplified acquisition procedures, which generally do not require SADA certification under the FAR. In addition to the purchase orders with this company, we found that from June 12, 2008 to March 1, 2010, the U.S. government awarded 756 contracts to 29 affiliates and subsidiaries of the companies identified in the list as having prohibited business ties to Sudan. While SADA aims to prevent companies with prohibited business operations in Sudan from receiving federal contracts, it does not restrict federal contracting with these companies’ affiliates and subsidiaries, provided that the affiliates and subsidiaries certify that they do not have prohibited business operations in Sudan. 
Some advocacy groups have disagreed with the FAR councils’ decision to apply the requirement only to the entity directly contracting with the government because it allows companies that have certified to the federal government that they do not conduct prohibited business operations to continue operating in Sudan through their subsidiaries or affiliates. The FAR councils, however, stated that expanding the scope of the rule to include subsidiaries and affiliates would require the parties seeking federal contracts to attest to the business operations of parent companies, subsidiaries, and other affiliates about which they may not have information. In addition, the FAR councils noted that the company may not have any influence over the affairs of its related companies. Our review of a nonrandom selection of contracts awarded to these affiliates and subsidiaries indicates that the contractors provided the necessary certification, when required. Therefore, for these specific contracts, the U.S. government has complied with the contract prohibition section of SADA. We also found that the U.S. government has not granted any waivers pursuant to SADA, as allowed under the act, or determined that any companies submitted false certifications under SADA. As global awareness of the genocide in Darfur has grown, so too have efforts to combat this humanitarian crisis. Divestment from Sudan has been at the forefront of these efforts. However, in deciding whether and how to divest, stakeholders must consider how divestment affects foreign companies operating in Sudan, particularly those that strive to make a positive contribution to the Sudanese people. They must also ensure that divestment is consistent with their fiduciary responsibility. Additionally, they must identify and evaluate conflicting sources of information about which companies have Sudan-related business operations. 
Requiring companies to disclose their own operations in Sudan (as well as other state sponsors of terrorism) would provide more accurate and transparent information to investors carefully weighing whether and how to divest from Sudan. Furthermore, the strong demand for this information from states that require divestment, as well as from other investors, indicates that this information could be considered material—a judgment that the SEC has suggested in its correspondence with operating companies. In our report released today, we recommend that, in order to enhance the investing public’s access to information needed to make well-informed decisions when determining whether and how to divest Sudan-related assets, the SEC consider issuing a rule requiring companies that trade on U.S. exchanges to disclose their business operations related to Sudan, as well as possibly other U.S.-designated state sponsors of terrorism. Mr. Chairman, this concludes my statement. I would be pleased to respond to any questions that you or other Members of the Subcommittee may have. For questions or further information about this testimony, please contact Thomas Melito at (202) 512-9601, or melitot@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony include Cheryl Goodman, Assistant Director; Elizabeth Singer; Kay Halpern; Katy Forsyth; Michael Hoffman; R.G. Steinman; Julia Becker Vieweg; Sada Aksartova; Debbie Chung; JoAnna Berry; Noah Bleicher; Martin de Alteriis; Patrick Dynes; Justin Fisher; Cathy Hurley; Ernie Jackson; Debra Johnson; Julia Kennon; Jill Lacey; and Linda Rego. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Recognizing the humanitarian crisis in Darfur, Sudan, Congress enacted the Sudan Accountability and Divestment Act (SADA) in 2007. This law supports U.S. states' and investment companies' decisions to divest from companies with certain business ties to Sudan. It also seeks to prohibit federal contracting with these companies. This testimony (1) identifies actions that U.S. state fund managers and investment companies took regarding Sudan-related assets, (2) describes the factors that these entities considered in determining whether and how to divest, and (3) determines whether the U.S. government has contracted with companies identified as having certain Sudan-related business operations and assesses compliance with SADA's federal contract prohibition provision. This testimony is based on a GAO report (GAO-10-742), for which GAO surveyed states, analyzed investment data, assessed federal contracts, and interviewed government officials.

Since 2006, U.S. state treasurers and public pension fund managers have divested or frozen about $3.5 billion in assets primarily related to Sudan in response to their states' laws and policies; U.S. investment companies, which also sold Sudan-related assets, most commonly cited normal business reasons for changes in their holdings. State fund managers GAO surveyed indicated that their primary reason for divesting or freezing Sudan-related assets was to comply with their states' laws or policies. Thirty-five U.S. states have enacted legislation or adopted policies affecting their investments related to Sudan, primarily in response to the Darfur crisis and Sudan's designation by the U.S. government as a state sponsor of terrorism. GAO also found that the value of U.S. shares invested in six key foreign companies with Sudan-related business operations declined by almost 60 percent from March 2007 to December 2009. The decline cannot be accounted for solely by lower stock prices for these companies, indicating that U.S. 
investors, on net, decided to sell shares in these companies. Investors indicated that they bought and sold Sudan-related assets for normal business reasons, such as maximizing shareholder value.

U.S. states and investment companies have often considered three factors when determining whether and how to divest. First, they have considered whether divesting from Sudan is consistent with fiduciary responsibility--generally the duty to act solely and prudently in the interest of a beneficiary or plan participant. Second, they have considered the difficulty in identifying authoritative and consistent information about companies with Sudan-related business operations. GAO analyzed three available lists of these companies and found that they differed significantly from one another. Although information directly provided by companies through public documents, such as Securities and Exchange Commission (SEC) disclosures, is a particularly reliable source of information, federal securities laws do not require companies specifically to disclose business operations in state sponsors of terrorism. The SEC has the discretionary authority to adopt a specific disclosure requirement for this information but has not exercised this authority. Third, investors have considered the effect that divestment might have on operating companies with Sudan-related business activities, such as prompting companies interested in promoting social responsibility to leave Sudan, creating room for companies that do not share that interest to enter the Sudanese market.

GAO's analysis, including a review of a nonrandom selection of contracts, indicates that the U.S. government has complied with SADA's contract prohibition provision. Specifically, the U.S. government has contracted with only one company identified on a widely used list of companies with business ties to Sudan, and the contracts awarded to this company did not violate SADA. The U.S. 
government has contracted with subsidiaries and affiliates of companies with business ties to Sudan, as SADA permits. The related GAO report recommends that the SEC consider issuing a rule requiring companies that trade on U.S. exchanges to disclose their business operations tied to Sudan, as well as possibly other state sponsors of terrorism. The SEC's Division of Corporation Finance agreed to present GAO's recommendation to the commission.
Since the 1930s, a number of federal housing programs have provided assistance to low-income renters and homeowners, including rent subsidies, mortgage insurance, and loans and grants for the purchase or repair of homes. Housing developments can be assisted by multiple programs. For example, a loan or mortgage on a multifamily property may be insured through a HUD or USDA program, and the property may have tenants that receive rental assistance from these agencies. In our earlier report, we identified a total of 23 federal housing programs that target or have special features for the elderly. Of these programs, 2 are intended for the elderly only, 3 target the elderly and disabled, and another 18 have special features for the elderly, such as income adjustments that lower elderly households’ rental payments. Appendix I lists these housing assistance programs.

In general, both HUD and USDA programs target families at lower income levels. HUD programs target families with incomes that are extremely low (no more than 30 percent of an area’s median), very low (no more than 50 percent of an area’s median), and low (no more than 80 percent of an area’s median). USDA programs also target families with incomes that are very low and low. In addition, some USDA programs target families with moderate incomes (no more than 115 percent of an area’s median). However, these programs do not reach all needy households, and waiting lists for many types of subsidized housing, including housing for the elderly, are often long.

HUD has specific goals for increasing housing opportunities for the elderly, including one goal specifically related to supportive services. 
As outlined in its fiscal year 2004 Annual Performance Plan, these goals include (1) increasing the availability of affordable housing for the elderly, (2) increasing the number of assisted-living units, (3) increasing the number of elderly households living in privately owned, federally assisted multifamily housing served by a service coordinator, and (4) increasing elderly families’ satisfaction with their Section 202 units. USDA does not have specific goals related to the elderly in its fiscal year 2004 Annual Performance Plan.

As GAO has previously reported, virtually all the results that the federal government strives to achieve require the concerted and coordinated efforts of two or more agencies. This shared responsibility is an outgrowth of several factors, including the piecemeal evolution of federal programs and service delivery efforts. Achieving results on public problems, such as the potentially large service needs of a growing elderly population, increasingly calls for effective interagency coordination. However, our work has shown that a number of barriers inhibit coordination among agencies. For example:

In reporting on the coordination of programs for the homeless, we noted that the federal government’s system for providing assistance to low-income people is highly fragmented. Each federal assistance program usually has its own eligibility criteria, application, documentation requirements, and time frames; moreover, applicants may need to travel to many locations and interact with many caseworkers to receive assistance.

A review of federally assisted transportation services for “transportation-disadvantaged” seniors (who are more likely to have difficulty accessing transportation due to physical ailments) found that 5 federal agencies administer 15 programs. 
Service providers told GAO that certain characteristics of federal programs, such as what the providers view as burdensome reporting requirements and limited program guidance, can impede the implementation of practices that enhance senior mobility.

More generally, we have noted the range of barriers to coordination that agencies often face, including missions that are not mutually reinforcing or that may even conflict; concerns about protecting jurisdiction over missions and control over resources; and incompatible procedures, processes, data, and computer systems.

Generally, HUD and USDA’s housing assistance programs are not required to provide supportive services to the elderly. Of the 23 housing assistance programs that target or include the elderly among potential beneficiaries, only 4 require the owners of properties developed under the programs to ensure that supportive services are available. Appendix II provides summaries of the four programs, which include:

HUD’s Section 202 program, which subsidizes the development and operating costs of multifamily properties for elderly households with very low incomes. It is the only federal housing program that targets all of its rental units to very-low-income elderly households. Applicants for Section 202 funding must demonstrate that services will be available at the development or in the community where new construction is proposed.

HUD’s Assisted Living Conversion Program, which provides private nonprofit owners of eligible properties with grants to convert some or all of their units into assisted living facilities for the frail elderly. The reconfigured facilities must include enough community space to accommodate a central kitchen or dining area, lounges, and recreation and other multiple-use areas. The facilities must provide supportive services such as personal care, transportation, meals, housekeeping, and laundry. 
HUD’s Section 232 Mortgage Insurance Program, which provides mortgage insurance for the construction or substantial rehabilitation of nursing homes (facilities that provide skilled nursing care and have 20 or more beds); intermediate care facilities (those that provide minimum but continuous care and have 20 or more beds); board and care homes (facilities that provide room, board, and continuous protective oversight and have at least 5 accommodations); and assisted living facilities (those with 5 or more units designed for frail elderly persons who need assistance with at least 3 activities of daily living). All insured facilities must provide supportive services, but these services vary according to the type of facility.

USDA’s Section 515 Program, which provides loans to construct or to purchase and substantially rehabilitate multifamily rental or cooperative housing and recreational facilities in rural communities. Tenants eligible to live in program properties may also receive rental assistance through HUD or USDA programs. The Congregate Housing subprogram funds the development of assisted, group living environments that must provide meals, transportation, housekeeping, personal services, and recreational and social activities.

Generally, HUD and USDA do not provide funding for the services required under these housing programs. The property owners typically must either obtain funds from other sources, such as federal programs, local charities, and civic groups, to provide supportive services or ensure that appropriate services are available in the community. HUD administers four service-related programs that can be used in conjunction with subsidized housing programs: two programs that provide supportive services to residents of public and multifamily properties developed under HUD programs, and two that link residents to supportive services. 
None of these programs are targeted exclusively to the elderly, but they either can be used in properties designated for the elderly or offer funding specifically for services for the elderly.

The Congregate Housing Services Program provides grants for the delivery of meals and nonmedical supportive services to elderly and disabled residents of public and multifamily housing, including USDA’s Section 515 housing. While HUD provides up to 40 percent of the cost of supportive services, grantees must pay at least 50 percent of the costs, and program participants pay fees to cover at least 10 percent. Like the Elderly/Disabled Services Coordinator Program under ROSS, the Congregate Housing Services Program has provided no new grants since 1995, but Congress has provided funds to extend expiring grants on an annual basis.

The Neighborhood Networks program encourages property owners, managers, and residents of HUD-insured and -assisted housing to develop computer centers. Although computer accessibility is not a traditional supportive service for the elderly, a senior HUD official noted that having computers available enhances elderly residents’ quality of life. HUD does not fund each center’s planned costs but encourages property owners to seek cash grants, in-kind support, and donations from sources such as state and local governments, educational institutions, private foundations, and corporations.

The ROSS grant program links public housing residents with appropriate services. This program differs from the Service Coordinator Program in that it is designed specifically for public housing residents. The ROSS program has five funding categories, including the Resident Service Delivery Models for the Elderly and Persons with Disabilities (Resident Services) and the Elderly/Disabled Service Coordinator Program. 
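At their limits, the Congregate Housing Services Program cost shares described above (HUD up to 40 percent, grantees at least 50 percent, participant fees at least 10 percent) exactly cover a service budget. A minimal arithmetic sketch, using a hypothetical $100,000 annual service cost (actual shares are set per grant):

```python
# Hypothetical split of a Congregate Housing Services Program budget
# at the cost-share limits quoted in the text. The $100,000 budget is
# illustrative only.
total_cost = 100_000  # hypothetical annual service cost, in dollars

hud_share = total_cost * 40 // 100          # HUD pays up to 40 percent
grantee_share = total_cost * 50 // 100      # grantee pays at least 50 percent
participant_share = total_cost * 10 // 100  # participant fees cover at least 10 percent

# At these limits, the three shares sum to the full budget.
print(hud_share, grantee_share, participant_share)          # 40000 50000 10000
print(hud_share + grantee_share + participant_share)        # 100000
```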
Resident Services funds can be used to hire a project coordinator; assess residents’ needs for supportive services and link residents to federal, state, and local assistance programs; provide wellness programs; and coordinate and set up meal and transportation services. The Elderly/Disabled Service Coordinator Program has not provided new grants since 1995 but still services existing grants. The Service Coordinator Program provides funding for managers of multifamily properties designated for the elderly and disabled to hire coordinators to assist residents in obtaining supportive services from community agencies. These services, which may include personal assistance, transportation, counseling, meal delivery, and health care, are intended to help the elderly live independently and to prevent premature and inappropriate institutionalization. Service coordinators can be funded through competitive grant funds, residual receipts (excess income from a property), or rent increases. According to HUD’s fiscal year 2003 Performance and Accountability Report, service coordinators were serving more than 111,000 units in elderly properties. Elderly residents of public and federally subsidized multifamily housing can also receive supportive services through partnerships between property owners and local organizations and through programs provided by HHS. For example, property owners can establish relationships with local nonprofit organizations, including churches, to ensure that residents have access to the services that they need. At their discretion, property owners may establish relationships that give the elderly access to meals, transportation, and housekeeping and personal care services. 
Although GAO did not obtain data on the extent to which such services are made available at all public and federally subsidized multifamily housing, in site visits to HUD and USDA multifamily properties, we found several examples of such partnerships:

In Greensboro, North Carolina, Dolan Manor—a Section 202 housing development—has established a relationship with a volunteer group from a local church. The volunteer group provides a variety of services such as transportation for the residents.

In Plain City, Ohio, residents of a Section 515 property called Pleasant Valley Garden receive meals five times a week in the community’s senior center (a $2 donation is suggested). A local hospital donates the food and a nursing home facility prepares it. Volunteers, including residents, serve the meals. The senior center uses the funds collected from the lunch for its activities. In addition, local grocery stores donate bread products to the senior center daily. The United Way provides most of the funding for the senior center.

In Guthrie, Oklahoma, Guthrie Properties—also a Section 515 property—has established a relationship with the local Area Agency on Aging. The agency assists residents of Guthrie Properties in obtaining a variety of services, including meals and transportation to a senior center.

Some elderly residents of public and federally subsidized housing may also obtain health-related services through programs run by HHS. For example, HHS’s Public Housing Primary Care Program provides public housing residents with access to affordable comprehensive primary and preventive health care through clinics that are located either within public housing properties or in immediately accessible locations. The program awards grants to public and nonprofit private entities to establish the clinics. The organizations must work with public housing authorities to obtain the physical space for the clinics and to establish relationships with residents. 
Currently, there are 35 grantees, 3 of which are in rural areas. According to a program administrator, although clinics are not specifically geared toward public housing designated for the elderly, they can be established at such properties. Elderly residents of federally subsidized housing may also be eligible for the Medicaid Home and Community-Based Services (HCBS) Waiver Program, which is administered by HHS’s Centers for Medicare and Medicaid Services. Through this waiver program, individuals eligible for Medicaid can receive needed health care without having to live in an institutional setting. HUD has identified these waivers as an innovative model for assisting the frail elderly in public housing. In addition, eligible elderly residents of federally subsidized housing may receive health care through the Program of All-Inclusive Care for the Elderly (PACE), which is also administered by the Centers for Medicare and Medicaid Services. Like the HCBS waiver program, this program enables eligible elderly individuals to obtain needed services without having to live in an institutional setting. The program integrates Medicare and Medicaid financing to provide comprehensive, coordinated care to older adults eligible for nursing homes. Figure 1 provides information on the housing assistance programs that can use federally funded supportive services programs that assist the elderly. Mr. Chairman, this concludes my prepared statement. I would be happy to answer any questions at this time. For further information on this testimony, please contact David G. Wood at (202) 512-8678. Individuals making key contributions to this testimony included Emily Chalmers, Natasha Ewing, Alison Martin, John McGrail, Marc Molino, Lisa Moore, John Mingus, Paul Schmidt, and Julianne Stephens. 
USDA Section 502 Rural Housing Loans (Direct)
Section 502 Direct Housing Natural Disaster Loans
Section 502 Guaranteed Rural Housing Loans
Section 504 Rural Housing Repair and Rehabilitation Loans
Section 515 Rural Rental Housing Loans
Section 521 Rural Rental Assistance
Section 538 Guaranteed Rural Rental Housing Loans
Project-based Rental Assistance (Section 8 and Rent Supplement) (inactive)
Section 8 Moderate Rehabilitation (inactive)
Section 207 Mortgage Insurance for Manufactured Home Parks
Section 207/223(f) Mortgage Insurance for Existing Multifamily Properties
Section 213 Mortgage Insurance for Cooperatives
Section 221(d)(3) Below-Market Interest Rate (inactive)
Section 221(d)(3)/(d)(4) Mortgage Insurance
Section 236 Mortgage Insurance and Interest Reduction Payments (inactive)

Before fiscal year 1992, the Section 202 program also supported the development of housing for the disabled. The Section 515 program’s Congregate Housing subprogram requires properties to provide supportive services.
According to a congressionally established bipartisan commission, decreased investment in affordable housing and an elderly population that is projected to grow from about 12 percent of the population in 2002 to 20 percent by 2030 are likely to increase the number of elderly who must spend large portions of their incomes on housing. Moreover, according to this commission, more than one-third of the elderly tenants of government-subsidized housing require assistance with some type of activity of daily living, such as making a meal or getting in and out of bed. This testimony, which is based on a report issued in February 2005, discusses (1) the federal housing assistance programs requiring that supportive services be made available to elderly residents, (2) other Department of Housing and Urban Development (HUD) programs that assist the elderly in obtaining supportive services, and (3) private partnerships and federal health care programs that may provide supportive services to elderly beneficiaries of federal housing assistance.

Of the 23 housing assistance programs GAO reviewed, only 4 require the owners of participating properties to ensure that services such as meals or transportation are available to residents. Three are HUD programs: the Section 202 Supportive Housing for the Elderly Program, which subsidizes multifamily properties for elderly households with very low incomes; the Assisted Living Conversion Program, which subsidizes the conversion of HUD-subsidized multifamily properties into assisted living facilities; and the Section 232 Mortgage Insurance Program, which insures mortgages for licensed facilities that provide varying levels of skilled care and services. USDA's Section 515 Rural Rental Housing Loan program, which makes loans for the construction and rehabilitation of rural multifamily properties, has a Congregate Housing Services subprogram that requires the provision of supportive services. 
HUD administers four programs that can be used with various housing programs to help the elderly with supportive services: Congregate Housing Services Program, which provides grants for the delivery of meals and nonmedical supportive services to elderly and disabled residents of public and multifamily housing; Neighborhood Networks Program, which encourages the development of computer centers in HUD-supported housing; Resident Opportunities and Self Sufficiency (ROSS) Program, which links public housing residents with services; and Service Coordinator Program, which funds coordinators who help elderly residents access services such as transportation and health care at some multifamily properties. Supportive services may also be available to elderly residents of subsidized housing through partnerships between individual properties and local organizations and through Department of Health and Human Services (HHS) programs. For example, HHS's Public Housing Primary Care Program provides public housing residents with access to affordable primary and preventive health care through clinics that are located in or near the properties. GAO did not obtain data on the extent to which such services are made available.
DHS and DOJ have several components with law enforcement functions whose personnel are authorized to carry firearms in support of accomplishing their respective missions. Table 1 describes the various law enforcement and homeland security missions of the DHS and DOJ components within our review, as well as the number of personnel authorized to carry firearms in fiscal year 2013. In support of their law enforcement missions, both DHS and DOJ law enforcement officers and agents use a number of different types of firearms, which require a variety of ammunition. Examples of firearms include side arms, such as pistols, and long guns, such as rifles. Commonly used ammunition includes .40 caliber, .223 caliber, and 9 millimeter, according to our analysis of data provided by DHS. More examples of commonly used firearms and ammunition can be found in appendix II. In 2003, DHS began its strategic sourcing program to leverage its buying power and secure competitive prices for a variety of goods and services, resulting in cost savings through collective procurement actions (e.g., buying in bulk quantities at lower prices). DHS developed its first strategically sourced ammunition procurement in 2005. DHS’s strategic sourcing contract vehicles include contracts or agreements that have been established for use by two or more components, and these types of contracts have been used by DHS components to procure ammunition. The Office of Management and Budget’s Office of Federal Procurement Policy has cited DHS’s efforts among best practices for implementing federal strategic sourcing initiatives, and we have also reported on DHS’s strategic sourcing initiatives. DHS procures ammunition using two types of contracts—strategic sourcing and individual contracts. The decision of whether to use a strategic sourcing contract is driven by collective component needs, according to DHS officials. 
That is, if more than one component needs a specific type of ammunition, then that ammunition procurement is a candidate for strategic sourcing. Most of DHS’s ammunition contracts, whether strategically sourced or individual contracts, are indefinite delivery, indefinite quantity (IDIQ) contracts, which are typically negotiated for a base year with additional options for purchasing ammunition up to a certain maximum number of rounds, or contract ceiling. These IDIQ contracts allow components to lock in the price, specifications, delivery costs, and other requirements and then place purchase orders throughout the negotiated time frame of the contract (e.g., indefinite delivery) for varying quantities as needed (e.g., indefinite quantity), rather than placing a single order for large amounts of ammunition. DHS orders based on the contracts as needed and pays for the ammunition on delivery. According to DHS officials, DHS is required to buy only a minimum quantity, which represents about 1 month of the projected DHS requirement.

In August 2012, DHS began requiring components to use strategic sourcing contract vehicles for procurements, including ammunition, unless procurements met certain exceptions, such as specialized types of ammunition not commonly used across components or ammunition requiring certain technical specifications. Ammunition that either is not commonly used or has specifications needed by a single component can be acquired through individual component contracts, according to DHS officials. Within the DHS Office of the Chief Procurement Officer, the Strategic Sourcing Program Office helps components develop, implement, and maintain sourcing strategies to enhance acquisition efficiency. According to DHS officials, the strategic sourcing process for procuring ammunition for multiple components has saved an estimated $2 million since fiscal year 2008. See appendix III for a complete list of all active DHS ammunition contracts, as of October 2013. 
Given the number of law enforcement and security personnel across the department, DHS established the Weapons and Ammunition Commodity Council (WACC) in October 2003 to consolidate weapons, ammunition, and other enforcement equipment requirements. While DHS components are responsible for determining their own ammunition requirements and needs, WACC is to serve as the coordinating mechanism for DHS components’ ammunition and weapons procurements. Rather than have each component procure ammunition and weapons individually or draw exclusively on its own history for best practices, the members of WACC, as shown in figure 1, are to meet monthly to explore ways to maximize procurement savings, according to DHS officials. According to DHS officials, to further coordinate ammunition and weapons procurements, DHS is exploring the feasibility of broadening the scope of WACC to also track and approve ammunition purchases across the department. As of October 2013, DHS said these plans are in the development stage. DHS’s Office of the Chief Readiness Support Officer, within the Management Directorate, is responsible for department-wide asset management. Within each component, property management officers or firearm program managers are responsible for the component’s firearm program, including ammunition. In general, component firearm instructors or firearms and weapons custodians are responsible for maintaining oversight for the shipment, receipt, issuance and periodic inventory of firearms and ammunition. At the headquarters level, the DHS Assistant Deputy for Mobile Assets and Personal Property has overall responsibility for the oversight and management of DHS firearm assets. This includes ensuring that components have an approved firearm asset management system of record and documented firearm accountability policies and procedures. 
The Chief Readiness Support Officer disseminates DHS’s asset management program requirements, provides oversight of the program, and sets department policy. The DHS Office of the Chief Readiness Support Officer’s Sensitive Asset Manager’s responsibilities include evaluating, auditing, and assessing component-level firearm asset management systems of record and accountability programs to ensure compliance with laws, regulations, policies, and directives, and working with components to develop standard and uniform DHS-wide firearms accountability policies and guidelines.

In fiscal year 2013, DHS purchased 84 million rounds of ammunition for its authorized firearm-carrying workforce, which is less than the amount DHS purchased in each of the previous 5 fiscal years. DHS ammunition purchases are driven primarily by the firearms training and qualification requirements for the firearm-carrying workforce, though other factors are also considered by DHS when making ammunition purchase decisions. For selected DOJ components, for fiscal years 2011 through 2013, the average number of rounds of ammunition purchased per authorized firearm-carrying personnel per year was comparable to that for DHS law enforcement components, and DOJ components identified similar considerations in determining their annual ammunition requirements.

From fiscal year 2008 through fiscal year 2013, DHS purchased an average of 109 million rounds of ammunition per year. Yearly purchases ranged from a high of 133 million rounds in fiscal year 2009 to a low of 84 million rounds in fiscal year 2013. In comparison, the total consumer ammunition market for 2012 was approximately 9.5 billion rounds, according to the National Shooting Sports Foundation. 
DHS’s ammunition purchases over the 6-year period equate to an average of 1,200 rounds of ammunition purchased per agent or officer per year. The annual total cost for these ammunition purchases ranged from $19 million to $34 million per year, with an average of $29 million for the 6-year time period. In fiscal year 2013, DHS purchased 84 million rounds of ammunition for its authorized firearm-carrying workforce, equaling an average of 900 rounds per agent or officer that year, and a total annual cost of $19 million. According to senior DHS officials, the decline in ammunition purchases in fiscal year 2013 is the result of budget constraints, reduced training, and in one case the expiration of an ammunition contract. These officials said that although ammunition purchases declined in fiscal year 2013, DHS components relied on their ammunition inventories to maintain basic qualification and operational needs. In fiscal year 2014, DHS plans to purchase about 75 million rounds of ammunition (see fig. 2).

According to DHS contract data as of October 1, 2013, the 29 existing DHS ammunition contracts extend over the next 4 fiscal years and have a remaining contract limit of approximately 704 million rounds (for various ammunition types), if every option for purchasing ammunition were exercised into fiscal year 2018. The total contract dollar ceiling on these 29 active contracts is about $285 million. The approximately 704 million rounds of ammunition represent the limit on the combined active contracts, against which orders from the manufacturers may be placed over the next several fiscal years. See appendix III for a complete list of all active DHS ammunition contracts, as of October 1, 2013.

Although the amounts of DHS’s overall ammunition purchases have fallen since fiscal year 2009, the yearly changes in the amount of ammunition purchased by its component agencies have varied, as shown in figure 3. 
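The per-agent averages cited above can be cross-checked against the annual totals by simple division. In the sketch below, the inputs are the report’s rounded figures; the implied workforce size is a derived estimate, not a number stated in the report:

```python
# Back-of-the-envelope reconciliation of the report's rounded figures.
# The workforce size printed here is implied by the division, not a
# figure quoted in the text.

def implied_workforce(total_rounds: int, rounds_per_agent: int) -> int:
    """Workforce size implied by total purchases divided by the per-agent average."""
    return round(total_rounds / rounds_per_agent)

# Six-year average: 109 million rounds per year at ~1,200 rounds per agent.
print(implied_workforce(109_000_000, 1_200))  # 90833

# Fiscal year 2013: 84 million rounds at ~900 rounds per agent.
print(implied_workforce(84_000_000, 900))     # 93333
```

The two implied workforce sizes are of similar magnitude, which is consistent with the report's use of a single per-agent average across the period.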
CBP—which has the most firearm-carrying personnel of the DHS components—purchased the largest amount of ammunition, on average, from fiscal year 2008 through 2013, accounting for approximately 46 percent of DHS's average purchases of ammunition for this time period. The average amount of ammunition purchased by components from fiscal years 2008 through 2013 ranged from approximately 50 million rounds of ammunition for CBP to a low of approximately 1.9 million rounds of ammunition for FPS. A number of factors contribute to variation in ammunition purchases from year to year by component, such as changes in the size of the firearm-carrying workforce. For example, CBP and ICE increased their firearm-carrying workforces in fiscal years 2008 and 2009, which also increased ammunition needs for those components, as new hires needed to be trained to gain firearms proficiency prior to entering the field for duty. According to CBP data, new Border Patrol agents each use approximately 3,300 rounds during training and qualification, compared with experienced officers who might use about 600 rounds. DHS components provided data on the amount of ammunition typically used by a new law enforcement hire, which ranged from 2,000 to 5,000 rounds. From fiscal years 2008 through 2013, the DHS components in our review trained thousands of new law enforcement agents and officers, according to DHS data. Other factors that account for changes in ammunition purchases year to year include qualification and training requirements and amount of ammunition in inventory, as discussed later in this report. In addition, it is important to note that components purchase ammunition throughout the year, and orders placed in 1 fiscal year might not arrive or be used until future years, which can also contribute to variation in purchases from year to year. The amount of ammunition each component purchases for its firearm-carrying personnel also varies.
We analyzed DHS data on ammunition purchases and the size of the firearm-carrying workforce for fiscal years 2008 through 2013 and found the average number of rounds of ammunition purchased per year per firearm-carrying agent or officer by component for this time period ranged between approximately 1,000 and 2,000 rounds, as shown in table 2. This variation exists because each component independently decides, based on its operational needs, how much ammunition to allocate to its firearm-carrying personnel for training and qualification each year. For example, FPS provides each officer 250 rounds per quarter per handgun for firearm qualification, while ICE provides 100 rounds per quarter per handgun for firearm qualification. When determining annual ammunition requirements, the primary consideration for DHS components is the amount of ammunition needed to support the training and qualification of the firearm-carrying workforce, according to DHS officials. Training and qualification requirements vary for the components in our review, as do the number of rounds of ammunition typically used for training and qualification purposes. However, for all the components in our review, most firearm-carrying personnel are required to qualify four times per year on their issued firearms. We analyzed available data on DHS’s ammunition requirements for the 6-year period from fiscal years 2008 through 2013. We found that the ammunition purchased by the components was reasonable given their annual expected training and operational needs. DHS’s more than 70,000 firearm-carrying personnel have qualification requirements they must fulfill to ensure firearms proficiency. Failure to qualify on firearms may result in the denial, suspension or revocation of credentials to carry firearms. 
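The qualification-driven demand described above lends itself to a simple back-of-the-envelope estimate: each officer qualifies four times per year at a fixed per-quarter allotment, and each new hire consumes additional training rounds. A rough sketch follows; the 100-round quarterly allotment and 3,300-round new-hire figure are drawn from the text above, but the headcounts are hypothetical:

```python
# Back-of-the-envelope annual qualification ammunition estimate.
# Headcounts below are hypothetical; per-round allotments come from the report.

def annual_qualification_rounds(officers: int, rounds_per_quarter: int,
                                new_hires: int = 0, training_rounds: int = 0) -> int:
    """Four quarterly qualifications per officer, plus new-hire training rounds."""
    return officers * 4 * rounds_per_quarter + new_hires * training_rounds

# Example: 1,000 officers at 100 rounds per quarter, plus 50 new hires
# at 3,300 training rounds each
print(annual_qualification_rounds(1_000, 100, new_hires=50, training_rounds=3_300))  # 565000
```

Estimates of this kind would understate a component's needs where officers qualify on multiple firearms or receive advanced training, as the following paragraphs describe.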
Depending on how rigorous a component's qualification requirements are and how many firearms an officer is authorized to carry, each DHS component may require a larger or smaller supply of ammunition to conduct its qualifications. Officers generally carry one or more firearms and must qualify on each firearm they carry. For example, a component may allow an officer to carry a personally owned firearm in addition to his or her duty-issued firearm, and the officer must qualify on both weapons, using ammunition provided by the relevant DHS component. DHS law enforcement officers across the department are typically required to perform quarterly firearms qualifications on their issued weapons (e.g., handgun), and in some cases demonstrate familiarity and proficiency on long guns, such as rifles and shotguns, even if they are not required to carry a long gun as part of their duties. For some officers, depending on the DHS component and the requirements of their unit, more frequent qualifications or advanced firearms training is required. For example, advanced firearms training may include "down and disabled" firearms training or reduced light firearms training, among others. DHS components also may have specialized units, such as tactical teams, that may require hundreds or thousands of rounds of ammunition during training activities. For example, CBP's Border Patrol Tactical Unit and Special Response Team—which enhance field operations with specialized tactics and techniques—use specialized firearms that are not otherwise issued to other CBP firearm-carrying staff. Officers in these teams are required to qualify on these additional specialized weapons, and can use more ammunition during training activities, according to CBP officials. Like the other five DHS components, FLETC purchases ammunition to support the training of law enforcement officials, helping them fulfill their responsibilities safely and proficiently.
From fiscal year 2008 through 2013, FLETC has trained over 398,000 federal, state, local, tribal, and international law enforcement personnel, according to FLETC data—approximately 66,000 on average per year. FLETC provides the ammunition for the training courses provided to these law enforcement officers. FLETC officials said they determine their annual ammunition requirements based on the projected number of classes, students, and ammunition needs per class. Fluctuations in FLETC's ammunition needs occur throughout the year, as a result of canceled classes, added classes, and varying numbers of students. DHS components generally use the same type of ammunition for training and qualification as they use for duty (operational use). According to DHS, this is because delivering and storing different types of ammunition for training and operational use creates complex logistical challenges, and could create an officer safety issue if the wrong ammunition is used in the field. FLETC primarily uses Reduced Hazard Training Ammunition, which is free of lead and other toxic substances, in its firearms training curriculum. In fiscal year 2013, DHS components reported a combined total of 331 rounds of ammunition fired in the course of their duties, compared with their total combined ammunition purchases of over 84 million rounds for that year. In addition to citing training and qualification requirements, senior DHS officials cited delivery and quality issues with ammunition, changes in firearms, historical usage rates, and the amount of ammunition currently in inventory as other factors they consider when making ammunition purchase decisions. Delivery and ammunition quality issues: Senior DHS component officials told us that another factor they consider when determining ammunition requirements is the time lag due to the ammunition delivery and quality testing process. Officials from all DHS components within our review reported time lags between placing orders for ammunition and receiving them.
For example, the time lag can range from 3 to 18 months. Senior DHS component officials said that the Department of Defense gets first priority for ammunition orders from manufacturers, and supplying the military can delay orders for DHS and other federal law enforcement agencies, not only because of military priority, but also because production of other types of ammunition may be halted while manufacturers work to produce the ammunition needed by the Department of Defense. These officials also stated that while the ammunition they purchase comes from the same manufacturers that provide ammunition for commercial supply, the ammunition DHS purchases is manufactured according to DHS contract specifications, and is higher quality than commercial off-the-shelf ammunition. Senior component officials said they take this into account when determining how much and when to order ammunition, including when ammunition inventories need to be replenished. While awaiting delivery of ordered ammunition, DHS components reported using their existing ammunition inventories to meet mission requirements, such as for training and qualification. Ammunition quality and testing are also factors in developing annual ammunition requirements for DHS. Components, such as USSS, ICE, and FLETC, have testing facilities to help ensure the ammunition ordered meets the specifications outlined in contracts with manufacturers. For example, ICE’s National Firearms Tactical and Training Unit (NFTTU)— which purchases ammunition on behalf of ICE, CBP, and FPS, and provides technical assistance to TSA—operates a ballistics laboratory for the testing of ammunition, among other purposes. 
This unit conducts quality tests on a sample of each ammunition order before approving the acceptance and distribution of the total ammunition order. According to the Assistant Director of ICE's NFTTU, if the ammunition tested does not meet NFTTU's quality standards, the batch is returned to the manufacturer and the total order is not accepted and approved until the ammunition meets standards. According to NFTTU data, this has occurred 17 times from fiscal year 2009 through fiscal year 2013, representing 3 percent of total tested batches. Changes in firearms and ammunition types: Another factor components consider when developing their annual ammunition requirements is changes in the types of firearms and ammunition used by components, according to senior DHS officials. When these changes occur, additional ammunition is required to ensure officers' proficiency on the new firearm or ammunition. For example, in August 2011, ICE management made changes to the types of authorized weapons officers could carry as their primary weapon—expanding the list of authorized 9 millimeter (mm) firearms. Prior to this change, the standard duty firearm was a .40 caliber firearm. After authorizing more 9 mm firearms, ICE experienced a 60 percent increase in the amount of 9 mm ammunition consumed, according to ICE, which depleted ICE's inventory of 9 mm ammunition. According to ICE officials, ICE needed to amend the existing contract for 9 mm ammunition to allow for additional purchases of ammunition to ensure officers who chose to use those 9 mm firearms had sufficient ammunition for firearms qualification and operations. As this change came at the end of fiscal year 2011, purchases increased in fiscal year 2012, as previously shown in figure 3.
Historical usage rates: Historical usage rates—that is, the amount of ammunition used in previous years—are also a factor components consider when determining how much ammunition to purchase, according to senior DHS component officials. DHS component officials told us they estimate how much ammunition they have used by reviewing how much ammunition they purchased in previous years, the number of authorized firearm-carrying personnel, and related training and qualification requirements. According to our analysis of estimated ammunition usage data provided by DHS components, for fiscal years 2008 through 2013, DHS components in our review estimated using, on average, approximately 110 million rounds of ammunition per fiscal year, ranging from a high of about 141 million rounds in fiscal year 2009 to a low of about 89 million rounds in fiscal year 2013 (see figure 4). Ammunition inventory: The amount of ammunition components have in inventory is also a factor in determining how much and when to order more ammunition, according to DHS component officials. DHS components determine how much ammunition to keep in inventory, and when to replenish ammunition inventories. The six DHS components in our review stated they strive to maintain about 12 to 24 months' worth of ammunition inventory to meet the training, qualification, and operational ammunition needs of firearm-carrying personnel. As stated earlier, there can be months-long delays between placing an order for ammunition and receiving it. To help ensure components have sufficient ammunition on hand to support the training and operational needs of their officers, DHS components maintain inventories of ammunition. For example, TSA officials said they try to keep a large ammunition inventory on hand, because the FFDOs (trained and armed pilots) are allowed to provide as little as 24 hours' notice to perform their qualification, based on their fluctuating schedules.
Ammunition is stored in limited supply in the field offices of the various components, as DHS component officials said storage space for ammunition in the field is limited. Officials added that while ammunition is typically shipped directly from the manufacturers to the field offices that placed the orders, there are occasions when production delays require components to ship ammunition from their reserves stored in armories to resupply the field. It is important to note that inventory levels reflect a point in time and that inventory fluctuates throughout the year as ammunition orders arrive from manufacturers and ammunition is used for firearms qualification and other purposes in the field. Because inventory is constantly fluctuating, it is not possible to provide an average ammunition inventory for a year or past years. Table 3 shows how much ammunition DHS components had in inventory (in both estimated rounds and approximate number of months' supply) at three different points in time between November 2012 and October 2013, as reported to us by DHS components. We provide these estimates at three points in time to show, by component, how ammunition inventory fluctuates over the course of 1 year. Each component is responsible for maintaining and recording its own ammunition inventory, and, according to senior DHS component officials, ammunition balances may not always be up to date and generally are estimates. For example, a field location may show more ammunition in inventory than other field offices, but that could reflect that the officers have not yet completed a quarterly firearms qualification or the inventory tracking system has not yet been updated to reflect changes. In addition, components may also consider ammunition that has been ordered, but not yet arrived, to be part of their "current" inventory.
For example, TSA reported an inventory level of almost 30 million rounds in November 2012, but TSA officials said that number included ammunition orders that had been placed with the manufacturer, but were not yet in physical inventory. According to TSA, the quantity of ammunition in TSA's physical possession at the time of the inquiry was approximately 19 million rounds. Similar to DHS, the DOJ components in our review also purchase ammunition to ensure their personnel authorized to carry firearms are equipped to carry out their various law enforcement missions. For the three DOJ components in our review, ammunition purchases are driven primarily by the firearms training and qualification requirements for their firearm-carrying workforce. According to DOJ data, DOJ component firearm qualification requirements range from semi-annual to quarterly, depending on the component; however, most require quarterly firearms qualification on an officer's primary weapon. As shown in table 4, the approximate number of rounds of ammunition purchased per firearm-carrying agent or officer for the three DOJ law enforcement agencies in our review for fiscal years 2011 through 2013 was similar to the amounts for DHS components in our review. For DOJ components, the average number of rounds of ammunition purchased per authorized firearm-carrying agent or officer per year across fiscal years 2011 through 2013 was approximately 1,300 rounds. For DHS, the average number of rounds of ammunition purchased per authorized firearm-carrying agent or officer per year for fiscal years 2011 through 2013 was approximately 1,000 rounds. Senior DOJ officials also reported similar considerations when making ammunition purchases, such as ammunition delivery lead time, quality issues, and the amount of ammunition in inventory.
As with DHS officials, DOJ officials reported that the type of ammunition they use is manufactured to a higher standard than ammunition purchased by the civilian market and is not considered off-the-shelf. According to DOJ component officials, the ammunition purchased by DOJ is produced after an order is placed with a manufacturer, and the time between placing an order and receiving it can be several months. Quality testing of ammunition is also part of DOJ components' procurement process, to help ensure ammunition performs as expected in the field. Additionally, DOJ components also consider the amount of inventory on hand when determining ammunition requirements. According to officials from the DOJ components in our review, the amount of inventory components try to maintain ranged from a minimum of 6 months to a maximum of 24 months. Ammunition inventory data provided by two of the three DOJ components indicated that inventory ranged from about 13 months' worth to about 20 months' worth. Each of the DHS components in our review has policies, procedures, and processes that describe the requirements and guidance necessary to ensure inventory management, control, and accountability for firearms and ammunition. These policies include describing who is responsible and accountable for firearms and ammunition, tracking and accounting for firearms and ammunition throughout their life cycles, and periodically conducting physical inventories to verify the existence and location of the firearms and ammunition recorded in the accountability records. For example, FLETC requires the firearms and weapons custodian to perform a daily audit of all ammunition issues and receipts occurring at specified issue points. TSA requires supervisors to conduct unannounced employee firearm inspections and recommends that TSA offices conduct random, selective, periodic, or unannounced internal audits to ensure proper inventory controls and accountability.
In addition, ICE, CBP, and FPS use an automated system called the Firearms, Armor, and Credentials Tracking System (FACTS) to provide visibility of and oversight over their firearms inventories. We did not assess whether the components' policies, procedures, and processes for managing their firearms and ammunition inventories were working as intended. A Department of Homeland Security Inspector General report (DHS Controls over Firearms, OIG-10-41, Washington, D.C.: Jan. 25, 2010) recommended, among other things, that DHS develop department-wide policies and procedures for safeguarding and controlling firearms. In response to the findings and recommendations of the report, in 2012 DHS issued the DHS Firearm Asset Policy (firearm policy) to govern the components' management of firearms, including inventory management requirements. According to the firearm policy, components are responsible for developing their own controls and for overall management of their respective firearms programs, which includes adhering to DHS requirements for firearms management. The components are to integrate the firearm policy into their operations during fiscal year 2013, with a planned full implementation by the end of fiscal year 2014. Among other things, DHS's firearms policy requires five specific inventory control measures, noted below:

Establish policies and guidance: Components are responsible for establishing policies and guidelines to ensure that the firearm asset management system of record is updated throughout the asset life cycle to document all transactions and events.

Conduct annual physical inventory: The component's property management officer or firearm program manager is to ensure that the component's inventory plan incorporates an annual physical inventory for all firearms. For internal control purposes, there are to be at least two individuals conducting a firearm physical inventory.
Data on every firearm shall be recorded and maintained in the firearm asset management system of record.

Conduct independent third party audits: Components or DHS's Office of the Chief Readiness Support Officer are to engage independent third parties to conduct an annual audit of at least 15 percent of their firearm inventory.

Ensure inventory accuracy: Supervisors are to conduct unannounced inventory verifications to ensure accuracy of firearms inventory.

Verify issued firearms: In addition to an annual physical inventory, each component's property management officers or firearm program managers are to ensure quarterly firearm inventory verifications are conducted for all firearms issued.

For the six DHS components in our review, we found that each of the components established policies and guidelines to manage firearms assets, and some of the components were already implementing aspects of the new firearm policy. For example, all six components' policies require that they conduct a complete annual physical inventory of all firearms. Two of the components' policies require them to engage an independent third party to conduct an annual audit of at least 15 percent of their firearm inventories. In addition, three of the six components have procedures in place to ensure quarterly firearm inventory verifications are conducted for all firearms issued. DHS officials said that components are not expected to address all of the requirements in the DHS firearm policy until the end of fiscal year 2014. Therefore, it is too early to know if components will meet all of the requirements within this time frame. Component officials we spoke with said that the implementation time frame may require additional time to work through the feasibility of certain requirements.
For example, ICE officials noted that components with collective bargaining units, such as CBP and ICE, would need agreement from the collective bargaining unit representing a component's law enforcement personnel before certain aspects of the new firearm policy can be implemented. DHS issued a manual in 2013 on personal property asset management that provides a general description of controls for managing property, including ammunition, which is considered a sensitive asset requiring certain controls. Specifically, with respect to inventory management of ammunition, the DHS manual requires that ammunition be physically inventoried at least annually. In accordance with the DHS property management manual, DHS relies on the components to establish specific policies and procedures for managing, safeguarding, and controlling their ammunition inventories. We found that all six DHS components in our review have policies and procedures for managing their ammunition inventories, including requiring that a physical inventory be conducted at least annually, which is in accordance with the DHS property management manual. For example, USSS guidance requires that field office personnel conduct annual inventories to ensure the completeness and accuracy of the ammunition inventory information within their inventory system. Officials from five of the six components reported that they do conduct physical inventories of their ammunition at least annually. For example, FLETC officials reported that personnel conduct an annual inventory in which they are to reconcile ammunition with control records through a dual verification process to ensure accuracy by performing a physical and visual verification. One component—ICE—reported that it does not conduct a distinct physical inventory of ammunition (that is, at one point in time reconcile ammunition on hand with ammunition expected to be in inventory).
However, ICE officials stated that ammunition inventories are conducted by ICE field-level officials at various times during the year for determination of needs. According to ICE officials, ICE has treated ammunition as a consumable asset and has not subjected it to the same inventory process as firearms. Sensitive assets such as firearms are subject to the special requirements outlined in the DHS property management directive. ICE officials stated that although they have not conducted annual physical inventories, ammunition balances and inventory levels are continuously tracked in FACTS, and senior firearms instructors in local field offices manage and maintain their ammunition inventory in FACTS as well as on ammunition inventory control sheets that are maintained locally. ICE officials said beginning in fiscal year 2014, they plan to institute a separate and distinct physical inventory of ammunition in conjunction with the annual ICE-wide sensitive asset inventory. We provided a draft of this report to DHS and DOJ for official review and comment. DHS provided written comments, which are reproduced in full in appendix IV. DHS agreed that annual ammunition purchases have declined during the past 3 years and are comparable to those of DOJ law enforcement agencies. DOJ did not provide written comments on this report. DHS and DOJ provided technical comments, which we incorporated where appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Homeland Security, the Attorney General, selected congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-9627 or maurerd@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. 1. What are the trends in the Department of Homeland Security's (DHS) ammunition purchases since fiscal year 2008, what factors affect its purchase decisions, and how do DHS's purchases compare with those of the Department of Justice (DOJ)? 2. What policies and guidance does DHS have for managing firearms and ammunition inventories? To obtain information on DHS's acquisition practices for ammunition, we examined the Federal Acquisition Regulation (FAR); DHS's acquisition and procurement policies and guidance, including the Homeland Security Acquisition Regulation (HSAR); and the Homeland Security Acquisition Manual (HSAM). We reviewed prior GAO reports and a Congressional Research Service memorandum on DHS acquisition and procurement. We also reviewed contracts for ammunition purchases and related pre-award documentation, including acquisition plans, market research, and cost estimates, but we did not review whether components' ammunition procurements complied with the FAR, HSAR, or HSAM. We interviewed the Director for Procurement Policy and Oversight, the Director for Oversight and Strategic Sourcing, and the acquisition and program officials from six DHS components including U.S. Customs and Border Protection (CBP), U.S. Immigration and Customs Enforcement (ICE), Transportation Security Administration (TSA), U.S. Secret Service (USSS), National Protection and Programs Directorate/Federal Protective Service (FPS), and Federal Law Enforcement Training Centers (FLETC) to discuss contracting procedures and practices for procuring ammunition, including the use of the Weapons Ammunition Commodity Council and strategic sourcing to coordinate and leverage their buying power.
We reviewed relevant policies and guidance, examined department and component documentation, and interviewed officials from the DHS Office of the Chief Procurement Officer and components on practices they employ and contracts they have leveraged. Specifically, we interviewed officials from DHS's strategic sourcing program office, as well as all six components, to gain an understanding of DHS's strategic sourcing processes, the availability of strategic sourcing contracting vehicles for ammunition, and the potential of establishing component-specific strategic sourcing goals for procuring ammunition. We also interviewed Office of the Chief Procurement Officer Oversight and Pricing Branch officials regarding their tri-annual reviews of procurement operations. To determine any trends in DHS's ammunition purchases since fiscal year 2008, we obtained available data from DHS law enforcement components with firearm-carrying personnel regarding their ammunition purchases, costs, usage, and the size of the authorized firearm-carrying workforce for fiscal years 2008 through 2013. Specifically, we selected all DHS law enforcement components with firearm-carrying personnel—CBP, ICE, TSA, USSS, FPS—as well as FLETC. The number of TSA federal air marshals and the number of federal flight deck officers authorized by TSA to carry firearms is considered sensitive security information. Accordingly, we excluded that information from this report. We also obtained data from DHS components on their estimated ammunition inventory balances for three points in time—November 2012, April 2013, and October 2013. Because we determined that it is not possible to provide an average ammunition inventory for a year or past years, as ammunition inventory is constantly fluctuating, we selected these points in time based on data that was previously reported by DHS to Congress prior to our review as well as data available at the start and end of our review.
In addition, DHS components' records may not always reflect the exact quantities of the ammunition inventory. For example, depending on how often components update their inventory data, the time lag may result in differences between quantities of ammunition actually available and those reflected on the inventory records. We also obtained data on planned ammunition purchases for fiscal year 2014. In deciding which DHS components to examine in our review, we excluded components that were not law enforcement components and had small numbers of personnel authorized to carry firearms and made small purchases of ammunition, such as the Office of the Inspector General and the Federal Emergency Management Agency (FEMA). We also excluded the U.S. Coast Guard—a DHS component with firearm-carrying personnel—because the Coast Guard does not procure ammunition through DHS; rather, the U.S. Coast Guard procures its ammunition through Department of Defense contracts. Similarly, we obtained data from all DOJ law enforcement components with firearm-carrying personnel regarding their ammunition purchases, costs, usage, and the size of the workforce authorized to carry firearms for fiscal years 2011 through 2013. The specific DOJ law enforcement components included in our review are the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF), Federal Bureau of Investigation (FBI), and U.S. Marshals Service. The Federal Bureau of Prisons could not provide ammunition purchase data comparable to that of the other components based on its method of record keeping and data retention policy. We determined that data from the Drug Enforcement Administration (DEA) were not sufficiently reliable for our purposes, because DEA could only provide estimates of purchases. We, therefore, excluded both agencies from our scope.
We included the selected DOJ law enforcement agencies in our review to provide perspective and context to help understand DHS's ammunition purchases and usage relative to those of other federal law enforcement agencies. We excluded FLETC from this analysis because FLETC has few firearm-carrying officers for which it purchases ammunition. Ammunition purchased by FLETC is utilized by all students who train on firearms at its facilities, including DHS and other federal law enforcement personnel, as well as state, local, tribal, and international law enforcement personnel. We compared ammunition purchases per person authorized to carry a firearm to account for the varying sizes of the different departments and components. We selected DOJ to provide a comparison with DHS's ammunition purchases as it has the second largest number of personnel authorized to carry firearms. However, our DOJ analysis is not to suggest that the department represents a model or standard against which DHS is assessed. Differences in the amount of ammunition per firearm-carrying personnel reflect a number of factors, including unique mission requirements and training needs. To assess the reliability of the data we obtained from DHS components on ammunition purchases and costs, we reviewed the extent to which the components have procedures and controls for ensuring that the data are consistent and accurate, and interviewed knowledgeable officials responsible for collecting and reporting the data. In addition, for the data we obtained from DHS components on ammunition purchases and costs, we compared it with corroborating evidence to determine data consistency and reasonableness. Specifically, we obtained information on ammunition procurements using obligations data from fiscal years 2008 through 2012, and part of fiscal year 2013 from the Federal Procurement Data System (FPDS), which tracks all contracts using appropriated funds government-wide.
We used the obligations data from FPDS and compared ammunition obligations for DHS with the sum of ammunition costs as provided to us by each of the components. We also compared estimated ammunition used with ammunition purchased, under the assumption that in a given year, components would purchase ammunition to replace what was used. We calculated how much ammunition DHS components might be expected to use in a given year using data, provided by DHS, on the size of the firearm-carrying workforce, the number of staff qualified on various types of firearms, and the number of rounds of ammunition typically allotted for training and qualifications. We also reviewed selected DHS ammunition procurement contracts, which were readily available, for information on ammunition purchases, costs, and stated requirements. We found the data on ammunition purchases and costs for the DHS components to be sufficiently reliable for our purposes. To assess the reliability of the data we obtained from DHS components on ammunition inventory, we reviewed the extent to which the agencies have procedures and controls for ensuring that the data are consistent and accurate, and interviewed knowledgeable officials responsible for collecting and reporting the data. To determine the approximate number of months' supply of ammunition in inventory for each component, we divided fiscal year 2013 estimated ammunition usage data provided by DHS components by 12 to estimate monthly ammunition usage. We then divided the components' inventory figures by this monthly ammunition usage estimate to estimate the months' worth of ammunition in inventory at three points in time. Each component has policies in place to maintain and record its own ammunition inventory, but according to officials, ammunition balances may not always be up to date and generally are estimates.
For example, depending on how often components update their inventory data, the time lag may result in differences between the quantities of ammunition actually available and those reflected on the inventory records. On the basis of this information, we determined the data to be sufficiently reliable to be reported rounded to the millions. To assess the reliability of the data we obtained from DHS components on ammunition usage, we interviewed knowledgeable officials responsible for collecting and reporting the data. DHS components do not centrally track historical ammunition usage, in large part because their data management systems are not designed to track ammunition usage. However, components did provide us estimates of ammunition usage for fiscal years 2008 through 2013 based on how much ammunition they purchased in previous years, the number of firearm-carrying personnel, and related training and qualification requirements. We determined these data were sufficiently reliable to report general trends in usage when rounded to the hundred thousands. DHS components provided us with data on the number of personnel authorized to carry firearms for fiscal years 2008 through 2013 and information about how they compiled these data. Data sources varied by component and included personnel systems, the Firearms, Armor, and Credentials Tracking System (FACTS), and data from the National Finance Center. Workforce data provided by CBP are not precise because they include a rounded number of personnel for one of CBP's subcomponents. Therefore, we report CBP's data rounded to the thousands. On the basis of this information, we determined these data were sufficiently reliable to be used for background purposes and in our calculations of the average number of rounds purchased per agent or officer.
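The two calculations described in the methodology above (expected annual usage from workforce and training allotments, and months of supply from inventory and estimated monthly usage) can be illustrated with a minimal sketch. All figures below are hypothetical; they are not any component's actual data, and the function names are ours, not GAO's or DHS's.

```python
def expected_annual_usage(qualified_staff, rounds_per_session, sessions_per_year):
    # Rough expected yearly training/qualification usage: staff qualified
    # on a firearm, times the rounds typically allotted per session, times
    # the number of qualification sessions per year.
    return qualified_staff * rounds_per_session * sessions_per_year

def months_of_supply(inventory_rounds, annual_usage_rounds):
    # Inventory divided by estimated monthly usage (annual usage / 12),
    # written as inventory * 12 / annual usage to avoid rounding error.
    return inventory_rounds * 12 / annual_usage_rounds

# Hypothetical component: 5,000 qualified staff, 100 rounds allotted per
# session, 4 qualification sessions per year.
usage = expected_annual_usage(5_000, 100, 4)   # 2,000,000 rounds per year
supply = months_of_supply(3_000_000, usage)    # 18.0 months of inventory
```

As in the report's methodology, the expected-usage figure serves as a cross-check against reported purchases, and the months-of-supply figure expresses an inventory snapshot in terms of estimated monthly consumption.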
To assess the reliability of the inventory, workforce, and ammunition purchase data we obtained from DOJ, we interviewed knowledgeable officials responsible for collecting and reporting the data about the extent to which DOJ components have procedures and controls in place for ensuring that the data are consistent and accurate. We calculated the DOJ inventory months' worth estimates in the same way we did for DHS components. On the basis of this information, we determined these data to be sufficiently reliable to report rounded estimates. We interviewed officials from the six DHS components in our review to discuss the ammunition procurement process, how ammunition requirements are determined at the component level, and the purpose of quality testing ammunition, and to understand why proficiency among firearm-carrying personnel is important. In addition, DHS components provided examples of purchase and delivery orders in which the components experienced varying lag times between placing a purchase order and receiving a shipment of ammunition, which can result from the manufacturing lead time associated with ammunition purchases. Finally, for additional context, we conducted a site visit to the ICE National Firearms Testing Lab, which performs quality control testing on ammunition and firearms and maintains an inventory of ammunition and firearms for distribution to DHS field locations. To address our second objective, we examined the general requirements and policies DHS and its components had to manage and oversee firearms and ammunition, but we did not assess whether the directives, policies, and guidance were working as intended or whether component personnel were adhering to them, as that was outside the scope of our review. We examined agency-wide directives and guidance, and component management policies and procedures for managing ammunition and firearms.
This included reviewing the firearm policy manuals or related documentation from CBP, FLETC, FPS, ICE, TSA, and USSS to determine whether the manuals specifically addressed agency-wide firearm and ammunition inventory management requirements. We also interviewed officials from the Office of the Chief Readiness Support Officer and from DHS components, and reviewed components’ written responses regarding their policies and guidance for managing firearms and ammunition. In addition, we interviewed component officials responsible for conducting management and compliance reviews of component operations, including on-site review and self-inspection program findings and recommendations, which include, but are not limited to, ammunition and firearms management and controls. We also reviewed a 2010 DHS Inspector General report on DHS firearms controls. We conducted this performance audit from May 2013 to January 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Figure 5 shows some of the most commonly purchased types of ammunition for ICE, and figure 6 shows firearms commonly issued by DHS components. Most of DHS’s ammunition contracts, whether strategically sourced or individual contracts, are indefinite delivery, indefinite quantity (IDIQ) contracts, which are typically negotiated for a base year with additional options for purchasing ammunition up to a certain maximum number of rounds, or contract ceiling. 
As shown in the table below, the 29 existing DHS ammunition contracts extend over the next 4 fiscal years and have a remaining contract limit of approximately 704 million rounds (for various ammunition types) if every option for purchasing ammunition were exercised into fiscal year 2018. DHS’s strategic sourcing contract vehicles include contracts or agreements that have been established for use by two or more components to leverage their buying power and secure competitive prices resulting in cost savings through collective procurement actions (e.g., buying in bulk quantities at lower prices). Strategically sourced contracts are shaded. In addition to the contact named above, Adam Hoffman (Assistant Director) and Daniel Blinderman, Billy Commons, Lorraine Ettaro, Emily Gunn, Eric Hauswirth, Susan Hsu, Susanna Kuebler, Gary Malavenda, Linda Miller, and Anthony Pordes made significant contributions to this report.
DHS and its components have homeland security and law enforcement missions that require agents and officers to carry and be proficient in the use of firearms. DHS has more than 70,000 firearm-carrying personnel—the most of any department. DOJ has the next largest with approximately 69,000 firearm-carrying personnel. GAO was asked to examine DHS's ammunition purchases and management of ammunition and firearms. This report addresses trends in DHS's ammunition purchases since fiscal year 2008, how DHS's purchases compare with DOJ's, and what factors affect DHS's purchase decisions. GAO analyzed data from six DHS and three DOJ components that have law enforcement missions, require agents and officers to carry firearms, and purchase ammunition themselves or through their respective departments. Specifically, GAO analyzed data on ammunition purchases, usage, costs, and inventories, among other things, for fiscal years 2008 through 2013 for DHS, and for fiscal years 2011 through 2013 for DOJ. GAO assessed the reliability of these data and found them sufficiently reliable. Data on DOJ ammunition purchases prior to fiscal year 2011 were not readily available; therefore, GAO excluded them, as discussed in the report. The Department of Homeland Security's (DHS) annual ammunition purchases have declined since fiscal year 2009 and are comparable in number to the Department of Justice's (DOJ) ammunition purchases. In fiscal year 2013, DHS purchased 84 million rounds of ammunition, which is less than DHS's ammunition purchases over the past 5 fiscal years, as shown in the figure below. DHS component officials said the decline in ammunition purchases in fiscal year 2013 was primarily a result of budget constraints, which meant reducing the number of training classes, and drawing on their ammunition inventories. From fiscal years 2008 through 2013, DHS purchased an average of 109 million rounds of ammunition for training, qualification, and operational needs, according to DHS data. 
DHS's ammunition purchases over the 6-year period equate to an average of 1,200 rounds purchased per firearm-carrying agent or officer per year. Over the past 3 fiscal years (2011-2013), DHS purchased an average of 1,000 rounds per firearm-carrying agent or officer, and selected DOJ components purchased 1,300 rounds per firearm-carrying agent or officer. DHS ammunition purchases are driven primarily by firearm training and qualification requirements. Most DHS firearm-carrying personnel are required to qualify four times per year, though requirements vary by component, as do the number of rounds of ammunition typically used for training and qualification. DHS components also reported considering other factors when making ammunition purchase decisions, such as changes in firearms, usage rates, and ammunition inventories. DHS components maintain inventories of ammunition to help ensure they have sufficient ammunition for the training and operational needs of their officers, as there can be months-long delays between placing an order for ammunition and receiving it. As of October 2013, DHS estimated it had approximately 159 million rounds in inventory, enough to meet the training and operational needs of its firearm-carrying personnel for about 22 months. Ammunition inventory data provided by DOJ components indicated that inventory ranged from about 13 months' worth to about 20 months' worth. GAO is not making any recommendations.
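The summary arithmetic above can be checked with the report's rounded figures (about 84 million rounds purchased in fiscal year 2013, roughly 70,000 firearm-carrying personnel, and approximately 159 million rounds in inventory). Note one assumption of ours: DHS's 22-month inventory estimate is based on usage data, so treating fiscal year 2013 purchases as a proxy for annual usage yields only an approximate cross-check.

```python
def rounds_per_person(total_rounds, personnel):
    # Average rounds purchased per firearm-carrying agent or officer.
    return total_rounds / personnel

def months_of_supply(inventory_rounds, annual_usage_rounds):
    # Inventory expressed as months of estimated usage (annual usage / 12).
    return inventory_rounds * 12 / annual_usage_rounds

# Report's rounded figures for fiscal year 2013.
per_person = rounds_per_person(84_000_000, 70_000)   # 1,200 rounds per person
# Our assumption: use fiscal year 2013 purchases as a rough proxy for
# annual usage; this lands near DHS's own 22-month inventory estimate.
months = months_of_supply(159_000_000, 84_000_000)   # about 22.7 months
```

The per-person figure matches the roughly 1,200 rounds per agent or officer reported above, and the months-of-supply figure is in the same ballpark as DHS's 22-month estimate.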
DOD is a massive and complex organization. In fiscal year 2004, the department reported that its operations involved $1.2 trillion in assets, $1.7 trillion in liabilities, over 3.3 million military and civilian personnel, and over $605 billion in net cost of operations. For fiscal year 2005, the department received appropriations of about $417 billion. The department comprises a wide range of organizations, including the military services and their respective major commands and functional activities, numerous defense agencies and field activities, and various combatant and joint operational commands, which are responsible for military operations for specific geographic regions or theaters of operations. In support of its military operations, the department performs an assortment of interrelated and interdependent business functions, including logistics management, procurement, health care management, and financial management. Earlier this year, DOD reported that, in order to support these business functions, it relied on about 4,200 business systems, for which the department received approximately $13.3 billion in fiscal year 2005 for operations, maintenance, and modernization. For fiscal year 2006, DOD received approximately $15.5 billion to operate, maintain, and modernize its business systems. As we have previously reported, DOD’s systems environment is overly complex and error prone and is characterized by (1) little standardization across the department, (2) multiple systems performing the same tasks, (3) the same data stored in multiple systems, and (4) the need for manual data entry into multiple systems. In addition, our reports continue to show that the department’s nonintegrated and duplicative systems contribute to fraud, waste, and abuse. Of the 25 areas on GAO’s governmentwide high-risk list, 8 are DOD program areas, and the department shares responsibility for 6 other governmentwide high-risk areas. 
DOD’s business systems modernization is one of the high-risk areas. Effective use of an enterprise architecture, or a modernization blueprint, is a hallmark of successful public and private organizations. For more than a decade, we have promoted the use of architectures to guide and constrain systems modernization, recognizing them as a crucial means to a challenging goal: agency operational structures that are optimally defined in both the business and technological environments. Congress, the Office of Management and Budget (OMB), and the federal Chief Information Officer (CIO) Council have also recognized the importance of an architecture-centric approach to modernization, and OMB and the CIO Council, in collaboration with us, have issued enterprise architecture guidance. The Clinger-Cohen Act of 1996 mandates that an agency’s CIO develop, maintain, and facilitate the implementation of an IT architecture. Further, the E-Government Act of 2002 requires OMB to oversee the development of enterprise architectures within and across agencies. In addition, we and OMB have issued guidance that, among other things, emphasizes the need for system investments to be consistent with these architectures. A corporate approach to IT investment management is also characteristic of successful public and private organizations. Recognizing this, Congress developed and enacted the Clinger-Cohen Act in 1996, which requires OMB to establish processes to analyze, track, and evaluate the risks and results of major capital investments in information systems made by executive agencies. In response to the Clinger-Cohen Act and other statutes, OMB developed policy for planning, budgeting, acquisition, and management of federal capital assets and issued guidance. We have also issued guidance in this area, which defines institutional structures, such as investment review boards, and associated processes, such as common investment criteria. 
An enterprise architecture provides a clear and comprehensive picture of an entity, whether it is an organization (e.g., a federal department) or a functional or mission area that cuts across more than one organization (e.g., financial management). This picture consists of snapshots of both the enterprise’s current or “As Is” environment and its target or “To Be” environment. These snapshots consist of “views,” which are one or more architecture products (e.g., models, diagrams, matrixes, and text) that provide logical or technical representations of the enterprise. The architecture also includes a transition or sequencing plan, which is based on an analysis of the gaps between the “As Is” and “To Be” environments; this plan provides a temporal roadmap for moving between the two environments that incorporates such considerations as technology opportunities, marketplace trends, fiscal and budgetary constraints, institutional system development and acquisition capabilities, new and legacy system dependencies and life expectancies, and the projected value of competing investments. The suite of products produced for a given entity’s enterprise architecture, including their structure and content, are largely governed by the framework used to develop the architecture. Since the 1980s, various architecture frameworks have emerged and been applied. Appendix III provides a discussion of these various frameworks. The importance of developing, implementing, and maintaining an enterprise architecture is a basic tenet of both organizational transformation and systems modernization. Managed properly, an enterprise architecture can clarify and help to optimize the interdependencies and relationships among an organization’s business operations and the underlying IT infrastructure and applications that support these operations. 
To support effective architecture management in the federal government, we have issued architecture management guidance, as has the federal CIO Council and OMB. This guidance recognizes that when an enterprise architecture is employed in concert with other important management controls, such as portfolio-based capital planning and investment control practices, architectures can greatly increase the chances that an organization’s operational and IT environments will be configured to optimize its mission performance. Our experience with federal agencies has shown that investing in IT without defining these investments in the context of an architecture often results in systems that are duplicative, not well integrated, and unnecessarily costly to maintain and interface. IT investment management is a process for linking IT investment decisions to an organization’s strategic objectives and business plans. Generally, it includes structures (including decision-making bodies known as Investment Review Boards), processes for developing information on investments (such as costs and benefits), and practices to inform management decisions (such as whether a given investment is aligned with an enterprise architecture). The federal approach to IT investment management is based on establishing systematic processes for selecting, controlling, and evaluating investments that provides a systematic way for agencies to minimize risks while maximizing the returns of investments. During the selection phase, the organization (1) identifies and analyzes each project’s risks and returns before committing significant funds to any project and (2) selects those IT projects that will best support its mission needs. During the control phase, the organization ensures that, as projects develop and investment expenditures continue, the project is continuing to meet mission needs at the expected levels of cost and risk. 
If the project is not meeting expectations or if problems have arisen, steps are quickly taken to address the deficiencies. During the evaluation phase, actual versus expected results are compared once a project has been fully implemented. This is done to (1) assess the project’s impact on mission performance, (2) identify any changes or modifications to the project that may be needed, and (3) revise the investment management process based on lessons learned. Consistent with our architecture management framework, our investment management framework recognizes the importance of an enterprise architecture as a critical frame of reference for organizations making IT investment decisions, stating that only investments that move the organization toward its target architecture, as defined by its sequencing plan, should be approved, unless a waiver is provided or a decision is made to modify the architecture. Moreover, this framework states that an organization’s policies and procedures should describe the relationship between its architecture and its investment decision-making authority. Our experience has shown that mature and effective management of IT investments can vastly improve government performance and accountability, and can help to avoid wasteful IT spending and lost opportunities for improving delivery of services to the public. The Business Management Modernization Program was established in July 2001 in order to improve the efficiency and effectiveness of DOD’s business operations through, among other things, the development and implementation of an architecture. When the program was initially established, the Secretary assigned oversight responsibility to the Under Secretary of Defense (Comptroller), in coordination with the Under Secretary of Defense for Acquisition, Technology, and Logistics and the Assistant Secretary of Defense (Networks and Information Integration)/Chief Information Officer. 
In 2001, the Comptroller established several governance bodies and assigned them responsibilities associated with developing, maintaining, and implementing the architecture. Specifically, the Comptroller established (1) the Executive and Steering Committees—which were made up of senior leaders from across the department—to provide program guidance; (2) a program office to execute daily program activities necessary to develop, maintain, and implement the architecture; and (3) domain owners, who were responsible for achieving business transformation, implementing the architecture, developing and executing the transition plan, and performing portfolio management. In 2003, the Comptroller also established the Domain Owners Integration Team, which comprised various senior executives from each domain and the director of the program office. This team reported to the steering committee and was responsible for facilitating communication and coordination across the domains for program activities, including extending and evolving the architecture. In 2005, the department revised the program’s governance structure. Program direction and oversight is now provided by the Deputy Secretary through the dual leadership of the Under Secretary of Defense for Acquisition, Technology, and Logistics and the Under Secretary of Defense (Comptroller). In addition, DOD has reassigned responsibility for providing executive leadership for the direction, oversight, and execution of its business transformation and systems modernization efforts to several entities. 
These entities include the Defense Business Systems Management Committee (DBSMC), which serves as the highest ranking governance body for business systems modernization activities; the Principal Staff Assistants, who serve as the certification authorities for business system investments in their respective core business missions; and the Investment Review Boards, which form the review and decision-making bodies for business system investments in their respective areas of responsibility. Table 2 lists these entities and their roles and responsibilities. DOD has defined five departmentwide core business missions to be addressed through identification of corporate business needs and analysis of capability gaps. The core business missions transcend DOD’s various functional areas (e.g., planning, budgeting, information technology, procurement, and maintenance) and are intended to be the means through which end-to-end warfighter support is delivered. Responsibility for the core business missions is assigned to specific Principal Staff Assistants. Table 3 provides descriptions of the core business missions and associated responsible parties. On October 7, 2005, DOD established the Business Transformation Agency (BTA) to advance DOD-wide business transformation efforts, particularly with regard to business systems modernization. The BTA reports directly to the vice chair of the DBSMC. Among other things, the BTA includes a Defense Business Systems Acquisition Executive who is to be responsible for centrally managing 28 DOD-wide business projects, programs, systems, and initiatives. In addition, the BTA is to be responsible for integrating and supporting the work of the Office of the Secretary of Defense Principal Staff Assistants, who include the approval authorities that chair the business system investment review boards.
Until a permanent director is named, the Deputy Under Secretary of Defense for Business Transformation and the Deputy Under Secretary of Defense for Financial Management will jointly function as directors and will report to the vice chair of the DBSMC. According to a program official, the department has spent approximately $440 million on the Business Management Modernization Program since it was established in 2001. Since 2001, we have regularly reported on DOD’s efforts to develop an architecture and to establish and implement effective investment management structures and processes. Our reports have continued to identify problems and raise concerns about the department’s architecture program, the quality of the architecture and the transition plan, and the lack of an investment management structure and controls to implement the architecture. Our most recent reports, which were issued in the third and fourth quarters of fiscal year 2005, made the following points: DOD had not established effective structures and processes for managing the development of its architecture. For example, the department had yet to finalize, approve, and effectively implement its plan, procedures, and charter governing the configuration management process. In addition, DOD had yet to establish an independent quality assurance function that addressed process standards and program performance. DOD had not developed a well-defined architecture. The products that it had produced did not provide sufficient content and utility to effectively guide and constrain ongoing and planned systems investments. For example, the latest versions of the architecture did not include products describing the “As Is” business and technology environments. 
Further, although these versions included products describing the “To Be” environment, the descriptions were inadequate because they (1) did not have a clearly defined purpose that linked to the goals and objectives of the architecture; (2) were missing important content, such as the actual systems to be developed or acquired to support future business operations and the physical infrastructure needed to support the business systems; and (3) contained products that were neither consistent nor integrated. In short, the “To Be” environment lacked the detail needed to provide DOD with a common vision for defining the transition plan and informing investment decision making. DOD had not developed a plan for transitioning from the “As Is” to the “To Be” architectural environments. The transition plan is based on an analysis of the gaps between these two environments and serves as an enterprisewide IT capital investment plan and acquisition strategy. DOD did not have an effective departmentwide management structure for controlling its business investments. Although the department had established organizations to oversee its business system investments, these organizations were unable to do so because the components controlled budget authority and continued to make their own parochial investment decisions. DOD had not established common investment criteria for system reviews, and as a result different organizations were using different criteria. DOD also had not conducted a comprehensive review of its ongoing business system investments. DOD had not included all of the reported systems in its fiscal year 2005 IT budget request. It lacked accurate information on the costs and number of its business systems. The Under Secretary of Defense (Comptroller) had not certified all systems investments with reported obligations exceeding $1 million, as required by the fiscal year 2003 National Defense Authorization Act.
Obligations totaling about $243 million were made for systems modernizations in fiscal year 2004 that were not referred to the DOD Comptroller for the required review. Section 2222 of Title 10, United States Code, as added by section 332 of the defense authorization act for fiscal year 2005, cites six requirements that DOD is required to meet. Generally, these are as follows:

1. By September 30, 2005, develop a business enterprise architecture that meets certain requirements.
2. By September 30, 2005, develop a transition plan for implementing the architecture that meets certain requirements.
3. Identify each business system proposed for funding in DOD’s fiscal year 2006 and subsequent budget submissions and identify funds for current services and business systems modernization.
4. Delegate the responsibility for business systems to designated approval authorities within the Office of the Secretary of Defense.
5. By March 15, 2005, require each approval authority to establish a business system investment review process.
6. Effective October 1, 2005, obligate funds for business system modernizations with a total cost exceeding $1 million only after the system is certified by the designated approval authority and the certification is approved by the DBSMC.

DOD has partially satisfied the four legislative provisions relating to architecture development, transition plan development, budgetary disclosure, and investment review; it has satisfied the provision concerning designated approval authorities; and it is in the process of satisfying the provision for systems costing in excess of $1 million. According to DOD, the requirements of each provision will be fully implemented under its incremental approach to developing the architecture and transition plan, and its tiered accountability approach to business system investment management. Until they are, the department’s business systems modernization program will continue to be a high-risk endeavor.
The defense authorization act for fiscal year 2005 requires DOD to develop a business enterprise architecture by September 30, 2005. According to the act, the architecture must satisfy three major requirements:

1. It must include an information infrastructure that, at a minimum, would enable DOD to comply with all federal accounting, financial management, and reporting requirements; routinely produce timely, accurate, and reliable financial information for management purposes; integrate budget, accounting, and program information and systems; and provide for the systematic measurement of performance, including the ability to produce timely, relevant, and reliable cost information.
2. The architecture must include policies, procedures, data standards, and system interface requirements that are to be applied uniformly throughout the department.
3. The architecture must be consistent with OMB policies and procedures.

On September 28, 2005, the Acting Deputy Secretary of Defense approved Version 3.0 of the business enterprise architecture. According to DOD, this version is intended to provide a blueprint to help ensure near-term delivery of the right capabilities, resources, and materiel to the warfighter. To do so, this version focused on six business enterprise priorities, which DOD states are short-term objectives to achieve immediate results. These priorities are Personnel Visibility, Acquisition Visibility, Common Supplier Engagement, Materiel Visibility, Real Property Accountability, and Financial Visibility. According to DOD, these priorities will evolve and expand in future versions of the architecture. Table 4 provides a brief description of each of the six business enterprise priorities. In addition to focusing the scope of Version 3.0 of the architecture on these priorities, the extent to which each priority was to be addressed, according to DOD, was limited to answering four key questions: Who are our people, what are their skills, and where are they located?
Who are our industry partners, and what is the state of our relationship with them? What assets are we providing to support the warfighter, and where are these assets deployed? How are we investing our funds to best enable the warfighting mission? To produce a version of the architecture according to the above scope, DOD created 12 of the 26 recommended products identified in the DOD Architecture Framework (DODAF)—the structural guide that the department has established for developing an architecture—including 7 products that the DODAF designates as essential. Table 5 shows the DODAF products included in the architecture. (See app. IV for a complete list of the DODAF products.) Version 3.0 of DOD’s business enterprise architecture partially satisfies each of the three major requirements specified in the act. With respect to the first requirement, regarding an information infrastructure, the act cites four requirements, each of which Version 3.0 partially addresses, as described below. Comply with federal accounting, financial management, and reporting requirements. Partial compliance is achieved based on the architecture’s inclusion of the Standard Financial Information Structure (SFIS), which includes a Standard Accounting Classification Structure (SACS) that can allow DOD to standardize financial data elements necessary to support budgeting, accounting, cost/performance management, and external reporting. The SFIS and SACS are based upon mandated requirements defined by external regulatory entities, such as the U.S. Treasury, OMB, the Federal Accounting Standards Advisory Board, and the Joint Financial Management Improvement Program. As a result, SFIS can enable compliance with these entities’ requirements if implemented properly. SFIS, while not complete, has been used to develop and incorporate business rules in the architecture for such areas as managerial cost accounting, general ledger, and federally owned property. 
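The role SFIS and SACS play, standardizing the data elements every transaction must carry so that budgeting, accounting, and external reporting draw on the same fields, can be illustrated with a minimal sketch. The element names and the balancing rule below are hypothetical stand-ins, not drawn from the actual SFIS specification:

```python
from dataclasses import dataclass

# Hypothetical standard accounting classification fields; the real SFIS/SACS
# element list is driven by Treasury, OMB, FASAB, and JFMIP requirements.
REQUIRED_ELEMENTS = {"appropriation", "fiscal_year", "object_class", "amount"}

@dataclass
class TransactionRecord:
    elements: dict

    def missing_elements(self):
        """Return standard elements absent from this record."""
        return REQUIRED_ELEMENTS - self.elements.keys()

def rule_balanced_entry(debits, credits):
    """Illustrative business rule: a general ledger entry must balance.
    Actual SFIS business rules cover areas such as managerial cost
    accounting, general ledger, and federally owned property."""
    return sum(debits) == sum(credits)

rec = TransactionRecord({"appropriation": "0100", "fiscal_year": 2005, "amount": 100.0})
print(sorted(rec.missing_elements()))          # -> ['object_class']
print(rule_balanced_entry([100.0], [60.0, 40.0]))  # -> True
```

A record missing a standard element fails the same check regardless of which component produced it, which is what makes departmentwide external reporting feasible.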
Business rules are important because they explicitly translate business policies and procedures into specific, unambiguous rules that govern what can and cannot be done. However, the architecture does not provide for compliance with all federal accounting, financial, and reporting requirements. For example, it does not contain the information needed to achieve compliance with the Department of the Treasury’s United States Standard General Ledger. In particular, the logical data model (OV-7) does not contain all the data elements or attributes that are needed to facilitate information sharing and reconciliation with the Treasury. The architecture also does not include a strategy for achieving compliance with the Treasury’s general ledger. For example, it does not state whether DOD will adopt the Treasury data model or simply map its data model to the one for the Treasury. Program officials agreed and stated that this limitation is being reviewed and may be addressed in Version 3.1 of the architecture. Nor does the architecture address the locations where specified activities are to occur and where the systems are to be located. Program officials agreed; however, they stated that the architecture is not intended to include this level of detail because it is capabilities-based rather than solutions-based and that this information will be contained either within the department’s Global Information Grid or individual system programs’ documentation. We disagree with the department’s position that information pertaining to locations is better captured in a solutions-based architecture rather than in the business enterprise architecture. The identification of operationally significant and strategic business locations, as well as the need for a business logistics model, is a generally accepted best practice for defining the business operations.
This is because the cost and performance of implemented business operations and technology solutions are affected by where they are located, and thus need to be examined, assessed, and decided in an enterprise context, rather than in a piecemeal systems-specific fashion. Routinely produce timely, accurate, and reliable financial information for management purposes. Partial compliance is achieved in light of the financial information that is to be produced through (1) SFIS, which can support data accuracy, reliability, and integrity requirements for budgeting, financial accounting, cost and performance management, and external reporting across DOD, and (2) a “Manage Business Enterprise Reporting” system function, which is intended to support the reporting of financial management and program performance information, including agency financial statements. However, as previously discussed, SFIS is not complete and has yet to be implemented. Moreover, accurate and reliable information depends, in part, on using standard definitions of key terms in the architecture. The architecture does not include definitions for all such terms. In particular, the department has yet to define all enterprise-level terms, meaning terms relating to information that needs to be aggregated to support DOD-wide reporting. For example, in Version 3.0 of the architecture, terms such as “balance forwarded” and “receipt balances” were not defined in the integrated dictionary, even though these terms were used in process descriptions. In the absence of these definitions, component organizations (military services, defense agencies, and field activities) could continue to use local terms and definitions. Such locally meaningful terms cannot be reliably and accurately aggregated to permit DOD-wide visibility, as defined by the department’s business enterprise priorities. 
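Why undefined enterprise-level terms defeat aggregation can be sketched concretely. The terms "balance forwarded" and "receipt balances" appear in the architecture's process descriptions; the component figures and the local synonym below are invented for illustration:

```python
# Sketch of why undefined enterprise-level terms block DOD-wide roll-up.
# Only terms with a shared definition in the integrated dictionary can be
# aggregated across components; local synonyms cannot.
ENTERPRISE_DICTIONARY = {"balance forwarded"}  # "receipt balances" left undefined

component_reports = {
    "Army": {"balance forwarded": 120.0, "receipt balances": 40.0},
    "Navy": {"balance forwarded": 200.0, "receipts on hand": 55.0},  # local synonym
}

def aggregate(reports):
    totals, unaggregatable = {}, set()
    for figures in reports.values():
        for term, value in figures.items():
            if term in ENTERPRISE_DICTIONARY:
                totals[term] = totals.get(term, 0.0) + value
            else:
                unaggregatable.add(term)  # no shared definition -> cannot roll up
    return totals, unaggregatable

totals, undefined_terms = aggregate(component_reports)
print(totals)                  # only the defined term rolls up DOD-wide
print(sorted(undefined_terms))
```

In this sketch only "balance forwarded" can be summed departmentwide; the undefined terms fall back to exactly the kind of manual data calls and translations the report describes.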
This inability to aggregate information for reporting purposes has historically required the department to produce financial information through inefficient methods (e.g., data calls or data translations), which have proven neither accurate nor timely. Program officials agreed and stated that they are currently working to complete SFIS and that they would continue to incorporate and define terms as appropriate as the architecture is evolved. Integrate budget, accounting, and program information and systems. Partial compliance is accomplished through information and systems that are to be integrated using (1) an enterprise-level automated reporting system known as Business Enterprise Information Services (BEIS), which is intended to provide timely, accurate, and reliable business information across the department to support auditable financial statements and provide detailed financial information visibility for management in support of the warfighter, and to integrate budget, accounting, and program information that is widely dispersed among systems and organizations across the department; (2) a generic system entity called “Financial Management System Entity,” which is to roll up component-level systems, or potential systems, that support current or future interface requirements; (3) the “Manage Business Enterprise Reporting” system function, which is to aggregate and distribute information according to requirements; and (4) other architectural elements, such as definitions and standards of data exchanges to ensure that the data can be mutually understood, received, processed, and potentially aggregated and analyzed, as well as some terms used in the architecture. 
However, the architecture does not include certain elements. It does not include a fully defined, and as yet unimplemented, SFIS—that is, an SFIS that includes all data exchanges as well as the business rules that are to be automated by SFIS, BEIS, and user activities, and are to be supported by procedure manuals. It does not include all systems needed to achieve integration, as evidenced by instances in which the architecture provides “placeholders” or generic references for yet to be defined future systems (e.g., Financial Management System Entity). Program officials agreed and stated that these systems would be added as solutions are defined to address identified capability gaps. Systematic measurement of performance, including the ability to produce timely, relevant, and reliable cost information. Partial compliance is achieved via identification of operational activities that are to be established to monitor the performance of the DOD business mission area and to develop performance plans that include performance levels, outcomes, and expected risks. However, the architecture does not provide for the systematic measurement of performance, because it has not established operationally acceptable performance thresholds for such measures as timeliness, accuracy, and reliability of financial information. These thresholds have significant influence on how business process activities are to be organized and controlled. Program officials agreed and stated that this issue is being addressed. Nor does the architecture describe the “As Is” business and technology environments needed to conduct the gap analysis that is to show the performance shortfalls to be addressed, and thus it does not provide the underlying basis for the transition plan. Program officials agreed that the architecture does not contain an “As Is” architecture description.
They stated that they have nevertheless examined the “As Is” conditions in identifying the “To Be” solutions in the architecture. They also stated that they recognize that these “As Is” conditions are not in the architecture and have not been provided to us, and that they need to link this information to the “To Be” architecture. With respect to the act’s second requirement, that the architecture includes policies, procedures, data standards, and system interface requirements to be applied departmentwide, Version 3.0 partially complies. In particular, the architecture identifies federal guidance relevant to core business missions, such as the financial management and the human resources missions. In addition, the architecture identifies a specific policy entitled “Supply Chain Materiel Management Policy”—dated April 22, 2004—that is relevant to guiding and controlling the department’s core business mission and business processes for materiel and logistics. Moreover, the architecture identifies conceptual, operational, and automated business rules that can be used to govern the implementation of systems investments in accordance with policies. However, not all relevant policies are included in the architecture. For example, policies governing the development, maintenance, and implementation of the architecture are not included. Program officials agreed and stated that the decision memorandums that were used to guide the development of Version 3.0 will be formalized as a departmental policy. In addition, Version 3.0 of the architecture includes a logical data model (OV-7) that contains data entities, attributes, and their relationships and an enterprise Technical Standards Profile (TV-1) that comprises a list of data standards (e.g., the Extensible Markup Language 1.0 data exchange standard); however, the architecture does not include a systems standards profile that would ensure data sharing and interoperability among departmentwide business systems.
Version 3.0 also identifies some, but not all, system interface requirements. For example, the architecture has yet to identify interface requirements with DOD systems that provide infrastructure services, such as network routing. Program officials acknowledged that the architecture does not include a systems standards profile and all system interface requirements and stated that they will address this in future versions. With respect to the act’s third requirement, that the architecture be consistent with OMB policies and procedures, Version 3.0 partially complies. According to OMB guidance, an enterprise architecture should describe the “As Is” and “To Be” environments and a transition plan. Further, this guidance requires the architecture to include, among other things, the following:
Business processes. The work performed to support the agency’s mission, vision, and performance goals. Agencies must also document change agents, such as legislation or new technologies that will drive architecture changes.
Information flow and relationships. The information used by the agency in its business processes, including where it is used and its movement among locations. These information flows are intended to show what information is needed where and how the information is shared to support mission functions.
Technology infrastructure. The functional characteristics, capabilities, and interconnections of the hardware, software, and telecommunications.
Security architecture. The support provided to secure information, systems, and operations.
Version 3.0 of the architecture includes a “To Be” architecture and a transition plan; however, it does not include an “As Is” architecture, which is essential to performing a gap analysis to identify capability and system performance shortfalls that the transition plan is to address. As previously discussed, program officials agreed and stated that they plan to address this.
In addition:
Version 3.0 defines some of the business processes at a high level. However, it does not include all business processes. For example, the architecture does not describe key aspects of the planning, programming, budgeting, and execution processes. In particular, the architecture does not yet define a clear planning process that balances requirements with resources and provides direction for execution.
It includes information flows and relationships.
It does not include a description of the technology infrastructure.
It does not include a security architecture.
Beyond the above described areas in which Version 3.0 of the business enterprise architecture does not fully satisfy the requirements in the fiscal year 2005 defense authorization act, Version 3.0 has other limitations. Specifically:
The scope of Version 3.0 is not fully consistent with the scope of the enterprise transition plan. For example, we identified 21 systems in the architecture that are not included in the transition plan’s “Master List of Systems and Initiatives” that support the business enterprise priorities and should therefore be funded. Instead of being on this master list, 19 of these 21 systems are included in the transition plan as part of a master list of “Non-priority DOD programs.” Therefore, the systems identified as targeted solutions in the architecture are not being recognized in the transition plan as systems to be funded to provide the needed business capabilities. The remaining 2 of the 21 systems, “Industry System” and “Unstructured Data Sources,” are not identified at all in the transition plan. As a result, the transition plan does not yet explicitly recognize the need to transition to the capabilities implied by these two systems, or else these systems exceed the scope of the transition plan, the Overview and Summary Information product (AV-1), or both.
In addition, the AV-1 states that the scope of Version 3.0 is limited to the six DOD business enterprise priorities. In contrast, the list of “Non-priority DOD programs” in the transition plan is described as a listing of systems “that are not DOD Enterprise or Component Priority Programs” and thus would not be targeted solutions for the business enterprise priorities. As a result, the stated scope of the AV-1 is narrower than the implied scope of the transition plan. The transition plan treats certain entities, such as the Financial Management System Entity, as system solutions in the Master List of Systems, whereas Version 3.0 treats these entities as contextual placeholders. This difference is not explained. Finally, another system (the Expeditionary Combat Support System) is explicitly related to four business enterprise priorities (Financial Visibility, Acquisition Visibility, Materiel Visibility, and Common Supplier Engagement) in the Master List of Systems in the transition plan, but it is not included in the architecture. Version 3.0 refers to “Recruit Candidate” as a needed business capability, but this capability is not reflected in the transition plan. This is important because needed capabilities in the architecture should be reflected in the transition plan to ensure that they are addressed. As another example, “Access Candidate” is referred to as a needed business capability in the transition plan, but it is defined as an existing operational activity in the architecture. If it is in fact an operational activity, this means that the department plans to invest resources to achieve a business capability to address a performance shortfall that does not exist. Program officials stated that these are errors and that they will be corrected. Version 3.0 does not explicitly state the time frame covered for the “To Be” environment. 
Rather, it describes the time frame as being “near-term To Be,” but it does not clearly define what is meant by “near-term,” nor does it link this time frame to the milestones associated with the business enterprise priorities or the capabilities and systems in the transition plan. According to relevant guidance, the “To Be” architecture should be fiscally and technologically achievable, and therefore it should generally project 3 to 5 years into the future to accommodate rapid changes in technologies, changes in mission focus and priorities, and uncertainty in future resource availability. Program officials agreed and stated that they would use “near-term” consistently in future versions of the architecture and transition plan. Version 3.0 does not represent a fully integrated set of architecture products, although we did find greater product integration than in prior versions of the architecture. Examples of instances in which product integration was not apparent follow. First, the Operational Event-Trace Description product (OV-6c)— which depicts when activities are to occur within operational processes—includes a process entitled “Send Statements of Accountability or Transactions or Trial Balance to Treasury.” However, the Operational Activity Model (OV-5)—which shows the operational activities (or tasks) that are to occur and the input and output process flows among these activities— identifies no corresponding activity. Instead, the OV-5 has an activity entitled “Perform Treasury Operations,” which has four subactivities, none of which is linked to the above process. Program officials agreed that these were not linked; however, they stated that the “Perform Treasury Operations” activity and its subactivities are not intended to link with the above mentioned process. However, intended linkages are not clear because the architecture does not include a traceability matrix that shows the connection between the two architecture products (OV-6c and OV-5). 
Program officials have acknowledged the need for greater product integration. Second, one identified event in the architecture—“triggers the supplier process that provides supplier inventory information to the DOD”—is depicted as two separate events at different levels in the process decomposition. In particular, there are different names for this event on the parent diagrams and the child diagrams, and different templates were used to prepare the diagrams. Program officials agreed that these names differed and stated that this would be addressed. Third, certain business rules are not explicitly linked to the events included in the architecture description, such as “ENT Post Concurrent Months” and “ENT_Estimate_Receivable.” Program officials stated that the guidelines being used by the department require the business rules to be linked to process steps or decision gateway objects, not events. However, because an event is something that “happens” during the course of a business process, it affects the flow of the process and usually has a cause (trigger) or an effect (result). Therefore, best practices recognize the need to integrate or link the “triggers” that are reflected in the Operational Information Exchange Matrix (OV-3) to both the business rules shown in the Operational Rules Model (OV-6a) and the business events shown in the Operational Event-Trace Description (OV-6c). Program officials stated that they will consider revising their guidelines to link business rules to events. Fourth, the interface diagram for the Financial Management System Entity (FMSE) does not include 4 of the 21 relevant interfaces identified in the AV-2 product, which is the integrated dictionary. Instead, these four interfaces are shown in other system interface diagrams, which are not linked to the FMSE diagram. Program officials stated that they will address this. 
Fifth, the timelines reflected in the transition plan are difficult to map to the “To Be” description, according to DOD’s contractor responsible for verification and validation of the architecture and transition plan. Sixth, the architecture is not adequately linked to the component architectures and transition plans, although such linkage is particularly important given the department’s newly adopted federated approach to developing and implementing the architecture. According to DOD, a federated architecture is composed of a set of coherent but distinct entity architectures. The members of the federation collaborate to develop an integrated enterprise architecture that conforms to the enterprise view and to the overarching rules of the federation. Program officials agreed and stated that greater levels of integration will be a key goal of future versions of the architecture. Moreover, while Version 3.0 of the architecture is easier to navigate through than prior versions because of improved product integration, it is still difficult to navigate and use this version, making verification and validation of completeness and correctness unnecessarily time consuming. For example, to trace business rules to their associated events (e.g., the business rule entitled “ENT Post Concurrent Months” to the event “trial balance closing is complete”), we had to first locate and review the description of the business rule, then locate the descriptions of the events by manually searching through numerous process diagrams. This was necessary because the architecture does not include a systematic function that enables the user to list all business rules that are associated with events and all events that are associated with business rules. Such a function is an accepted verification and validation method recommended by industry experts. 
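The missing capability described above, listing all business rules associated with an event and all events associated with a rule, amounts to a bidirectional index over the rule-event links. A minimal sketch, using rule and event names taken from the report but hypothetical pairings between them:

```python
from collections import defaultdict

# Hypothetical links between business rules (OV-6a) and events (OV-6c).
# The rule and event names appear in the report; these pairings are invented.
links = [
    ("ENT Post Concurrent Months", "trial balance closing is complete"),
    ("ENT_Estimate_Receivable", "trial balance closing is complete"),
]

rules_by_event = defaultdict(list)
events_by_rule = defaultdict(list)
for rule, event in links:
    rules_by_event[event].append(rule)
    events_by_rule[rule].append(event)

# With such an index, tracing a rule to its events no longer requires
# manually searching numerous process diagrams.
print(rules_by_event["trial balance closing is complete"])
print(events_by_rule["ENT Post Concurrent Months"])  # -> ['trial balance closing is complete']
```

The same index supports the verification question in both directions, which is why industry verification and validation guidance recommends it.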
DOD and its verification and validation contractor have also identified limitations in Version 3.0 of the architecture, which program officials told us would be addressed in future versions. For example, the architecture does not do the following:
It does not explicitly link to the department’s primary non-business enterprise architecture (the Global Information Grid Architecture, which covers the warfighting mission area).
It does not adequately address “net-centricity,” a DOD term that refers to having a robust, globally interconnected network environment (including infrastructure, systems, processes, and people) in which data and services (e.g., security services) are shared “timely and seamlessly” among users, applications, and platforms. According to DOD, the architecture must be improved to better designate enterprise data sources, business services, and IT infrastructure services.
It does not accurately and completely address stakeholder comments and their change requests.
Program officials, including the Director of the Transformation Support Office, the Chief Architect, and the Enterprise Transition Plan Team Lead, stated that the department has taken an incremental approach to developing the business enterprise architecture and meeting the act’s requirements. Accordingly, the Special Assistant to the Deputy Under Secretary of Defense for Business Transformation and contractor officials said that Version 3.0 was appropriately scoped to provide the content that could be produced in the time available to both lay the foundation for fully meeting the act’s requirements and provide a blueprint for delivering near-term capabilities and systems to meet near-term business enterprise priorities. Because of this, they stated that Version 3.0 fully satisfies the intent of the act. We support DOD taking an incremental approach to developing the business enterprise architecture, recognizing that adopting such an approach is a best practice that we have advocated.
In addition, we believe that Version 3.0 provides a foundation upon which to build a more complete architecture. However, we do not agree that Version 3.0 fully satisfies the requirements in the fiscal year 2005 defense authorization act. Further, the missing scope and content and related shortcomings described above mean that while this version is a reasonable baseline upon which to build, it is not yet a sufficient frame of reference for defining a common vision and the kind of comprehensive transition plan needed to effectively and efficiently guide and constrain system investment decision making. The defense authorization act for fiscal year 2005 requires that DOD develop, by September 30, 2005, a transition plan for implementing its business enterprise architecture, and that this plan meet three requirements. The requirements are that it include an acquisition strategy for new systems that are expected to be needed to complete the defense business enterprise architecture; listings of the legacy systems that will and will not be part of the target business systems environment, and a strategy for making modifications to those systems that will be included; and specific time-phased milestones, performance metrics, and a statement of financial and nonfinancial resource needs. On September 28, 2005, the Acting Deputy Secretary of Defense approved the transition plan. This plan, as described below, partially satisfies the three requirements. With respect to the first requirement, concerning an acquisition strategy, the plan does describe a high-level approach for transforming the department’s business operations and systems, and the approach is driven by a set of priorities and a targeted set of business capabilities that are to be provided through the implementation of key programs. 
In general, the plan includes information (e.g., the lead core business mission, budget information, and milestones) for the 39 transformational initiatives and the 60 business systems that are to be part of the “To Be” architectural environment, including an acquisition strategy for each system. However, the plan is largely based on a bottom-up planning process in which ongoing programs were examined and categorized in the plan around business enterprise priorities and capabilities, including a determination as to which programs would be designated and managed as DOD-wide programs versus component programs. This bottom-up approach to developing the plan does not explicitly reflect transition planning key practices cited in federal guidance, such as consideration of technology opportunities, marketplace trends, fiscal and budgetary constraints, institutional system development and acquisition capabilities, new and legacy system dependencies and life expectancies, and the projected value of competing investments. Moreover, it means that the plan is not based on a top-down capability gap analysis between the “As Is” and “To Be” architectures in which capability and performance shortfalls are described, and investments (such as transformation initiatives and systems) that are to address these shortfalls are clearly identified. For example, those programs and systems that need to be acquired, developed, or modified, and by when, to meet the department’s time frame to have a general ledger capability in fiscal year 2006 or 2007 are not clearly identified. According to DOD, this general ledger capability is to be addressed by systems and initiatives that are spread across various appendixes in the transition plan. However, the transition plan should clearly describe the collective investments, including the components and their respective systems, the specific strategies to be used, and the estimated timelines for completion, to address this capability shortfall.
This is not yet the case; for example, the transition plan states that “each component is still identifying the optimal path to achieve the capability to post to a United States Standard General Ledger compliant DOD corporate ledger.” With respect to the second requirement, about identifying legacy systems that will and will not be part of the “To Be” architectural environment, including modifications to these systems, the plan does show some of the legacy systems that are to be replaced by ongoing programs. For example, it identifies the Defense Cash Accountability System (DCAS) as a target system and lists several legacy systems that would be replaced by DCAS (e.g., the Cash Reconciliation System, the Financial Operations Support system, and the International Balance of Payments system). It also provides a list of legacy systems that will be modified to provide capabilities associated with the target architecture environment, such as the Standard Procurement System and the Navy Marine Corps Intranet. However, the transition plan is missing a number of elements. It does not include a complete listing of the legacy systems that will not be part of the target architecture. For example, the plan identified 145 legacy systems that would be migrating to the target system Expeditionary Combat Support System (ECSS). However, DOD documentation shows that this system includes over 659 legacy logistics systems and other legacy management information systems. This means that the plan does not account for 514 systems related to the integration and migration of ECSS. Program officials agreed and stated that the 145 systems included account for 90 percent of the Air Force’s Installation and Logistics portfolio. They also said that the Air Force is currently assessing the remaining 514 systems to identify interfaces and to determine duplication, and will update the transition plan to reflect this assessment.
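The arithmetic behind the unaccounted-for systems is a set difference between the legacy systems DOD documentation attributes to ECSS and those the transition plan lists as migrating. A sketch using the report's counts but placeholder identifiers, and assuming the plan's 145 systems all fall among the 659:

```python
# Placeholder identifiers; the report gives only the counts (659 legacy
# systems per DOD documentation, 145 listed in the transition plan),
# not the system names.
ecss_legacy_systems = {f"SYS-{i:03d}" for i in range(659)}  # per DOD documentation
plan_listed_systems = {f"SYS-{i:03d}" for i in range(145)}  # per the transition plan

unaccounted = ecss_legacy_systems - plan_listed_systems
print(len(unaccounted))  # -> 514 systems missing from the plan's migration listing
```

This kind of reconciliation is mechanical once both inventories exist, which is why a complete legacy-system listing is a prerequisite for a credible transition plan.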
The plan does not include system and budget information for 13 of its 15 defense agencies and for 8 of its 9 combatant commands. Exclusion of the Defense Information Systems Agency is particularly limiting, given that this agency provides IT infrastructure services that business systems will need to use. This omission makes it unclear whether the new business systems will be able to reuse existing components, thereby leveraging established capabilities, or will be allowed to introduce duplicative capabilities. According to program officials, the transition plan excluded information for 13 of the defense agencies and for 8 of its combatant commands because it was focused on the largest business-focused organizations in DOD—those meeting Tier 1 and Tier 2 investment review board certification criteria. They noted that the majority of these organizations do not meet these threshold criteria and therefore were not included in the transition plan. The plan does not include a complete listing of the legacy systems that will be part of the target architecture, nor explicit strategies for modifying those legacy systems identified in the plan’s system migration diagrams. For example, other DOD documentation shows that ECSS, the Defense Enterprise Accounting Management System, and the Defense Integrated Military Human Resources System (DIMHRS) must interface to provide needed business capabilities. However, the transition plan does not reflect this needed integration or the specific capabilities that will be provided by ECSS. According to the transition plan, these strategies are incorporated in the components’ architectures. However, as we stated in the previous section of this report, the components’ architectures have yet to be linked to the business enterprise architecture. Program officials stated that this issue will be addressed through the department’s tiered accountability approach. 
With respect to the third requirement, concerning milestones, performance metrics, and resource needs, the plan includes key milestone dates for the 60 systems identified. For example, September 2006 was given as the milestone date for the Defense Travel System to achieve full operational capability, and performance metrics were cited for some systems; for example, for DIMHRS, the plan cites a metric of reducing manual workarounds for military pay by 90 percent. However, the plan does not show specific dates for terminating or migrating many legacy systems, such as the Cash Reconciliation System and the Financial Operations Support system, and it does not include milestone dates for some ongoing programs, such as the Navy Tactical Command Support System. Further, the plan does not include benefits or measures and metrics focused on mission outcomes for each system that can be linked to the plan’s strategic goals. In addition, although the plan does identify resource needs in terms of funding, these needs are a reflection of the funding needs contained in the fiscal year 2006 budget submission; this submission was approved before the programs included in the transition plan were reevaluated by the DBSMC as to their fit within the “To Be” architectural environment and the reasonableness of their respective plans. According to program officials, this means that the resource needs in the transition plan for some programs are not current. Beyond the transition plan’s partial compliance with the three requirements in the act, as described above, the plan is also missing relevant context and is not consistent with the architecture in various ways. For example: The plan identifies 60 systems as target systems (e.g., DCAS), but the “To Be” architecture includes only 23 of these systems. Program officials agreed and stated that the other 37 systems are contained within component architectures and transition plans. 
However, as we previously stated, the component architectures have not been linked to Version 3.0. The plan identifies 21 enterprise initiatives (e.g., SFIS, Defense Acquisition Management Information Retrieval, and Customer Relationship Management), but only 1 of these—Defense Acquisition Management Information Retrieval—is included in the architecture, and it is shown in the architecture as a system, not an initiative. It is important for the architecture to include these initiatives and their relationships to systems. Program officials agreed and stated that Defense Acquisition Management Information Retrieval will be appropriately reflected as a system in the next version of the plan. The plan includes a list of 66 systems that are characterized as nonpriority DOD enterprise or component programs that will be part of the target architecture, but the target architecture does not identify all these systems. Further, some systems on the list, such as the Mechanization of Contract Administration Services (MOCAS), are systems that in the past were considered eligible for elimination. Program officials agreed and stated that some of these systems are component-level systems and therefore are reflected within the yet-to-be-linked component architectures and transition plans. With regard to systems that, like MOCAS, are slated for termination, these officials stated that replacement systems for such legacy systems have not yet been identified. Until they are, the legacy systems will continue to be shown as target solutions. The specific business capabilities to be provided by the system solutions for the six business enterprise priorities have not been completely defined in the plan. For example, the Materiel Visibility business enterprise priority requires additional capabilities related to the supply chain planning process, according to DOD, but these capabilities have yet to be defined in the plan. 
Program officials stated that these will be addressed in future versions of the architecture and transition plan. According to program officials, including the Director of the Transformation Support Office, the Chief Architect, and the Enterprise Transition Plan Team Lead, the transition plan is evolving, and any limitations will be addressed in future iterations of the plan. The Special Assistant to the Deputy Under Secretary of Defense for Business Transformation and contractor officials stated that the department has taken an incremental approach to developing a transition plan and that the plan, as constrained by the scope of Version 3.0 of the architecture, satisfies the intent of the act’s requirements. We support an incremental approach to developing the transition plan, which is a best practice that we have advocated. However, the plan does not fully comply with the act’s requirements. Moreover, it was not derived on the basis of a gap analysis between “As Is” and “To Be” architectures, and it is not of sufficient scope, content, and alignment to effectively and efficiently manage the disposition of the department’s existing inventory of systems or to sequence the introduction of modernized business operations and supporting systems. The fiscal year 2005 defense authorization act specifies information that the department is to incorporate in its budget request for fiscal year 2006 and each fiscal year thereafter. Specifically, the act states that each budget request must include information on each defense business system for which funding is being requested; all funds, by appropriation, for each such business system, including funds by appropriation specifically for current services (Operation and Maintenance) and systems modernization; and the designated approval authority for each business system. DOD’s fiscal year 2006 IT budget submission partially satisfies these three requirements. 
With regard to the first requirement, to identify each business system for which funding is requested, the fiscal year 2006 budget does not reflect all business systems. Specifically, when DOD submitted its fiscal year 2006 budget submission in February 2005, it did not yet have a comprehensive single inventory of its business systems. As we reported in May 2004, DOD was relying at that time on several separate, inconsistent, and unreconciled databases to establish an inventory of its business and national security systems. Accordingly, we recommended that the department establish a single database for its inventory of business systems. On July 13, 2004, the Assistant Secretary of Defense (Networks and Information Integration)/Chief Information Officer (ASD(NII)/CIO) directed establishment of the DOD Information Technology Portfolio Data Repository (DITPR), and on September 28, 2005, the Deputy Assistant Secretary of Defense (Deputy CIO) issued guidance to begin merging the DOD IT registry into DITPR. According to DOD, all business systems will be entered into DITPR by December 31, 2005, and all systems by September 30, 2006. However, the establishment and merger of these repositories had not been completed before the development and submission of the fiscal year 2006 IT budget. With respect to the fiscal year 2007 and future IT budget submissions, DOD plans to use a separate database, entitled the Select and Native Programming Data Collection System–Information Technology, to develop the department’s IT budget submissions. For these future submissions, it will be important for DOD to ensure that this system contains all business systems investments. 
The extent to which any of these repositories includes all business systems, and thus the extent to which the fiscal year 2006 and future budget submissions will as well, is also a function of whether DOD classifies a given system as a business system or a national security system. We previously reported that DOD reclassified 56 systems in its fiscal year 2005 budget request from business systems to national security systems. The net effect of the reclassification was a decrease of approximately $6 billion in the fiscal year 2005 budget request for business systems and related infrastructure. While some of the reclassifications appeared reasonable, we reported that others were questionable. According to DOD, it is currently reviewing the 56 systems, and it plans to complete these reviews by February 2006 to ensure they are properly classified in the fiscal year 2007 IT budget submission. Further reclassifications appear in the fiscal year 2006 budget submission. Specifically, 13 systems have been reclassified from business systems to national security systems in the fiscal year 2006 submission. In addition, 10 national security systems have been reclassified as business systems in the fiscal year 2006 submission. For example: The Air Force’s Aviation Resource Management System, with a fiscal year 2006 budget of $3.3 million, was reclassified from a business to a national security system. DOD included this system in the department’s original inventory of business systems in April 2003 and also reported it as a business system under the Logistics domain in the fiscal year 2005 IT budget request. The TRICARE Management Agency’s Medical Readiness Decision Support System, with a fiscal year 2006 budget of $1.3 million, was reclassified from a national security system to a business system. 
Identification of each business system is also complicated by the fact that DOD’s definition of a business system, as given in its budget submission, differs from the definition of a business system in the fiscal year 2005 defense authorization act. According to the act, a defense business system is “an information system, other than a national security system, operated by, for, or on behalf of the Department of Defense, including financial systems, mixed systems, financial data feeder systems, and information technology and information assurance infrastructure, used to support business activities.” In contrast, the definition that DOD used as the basis for its fiscal year 2006 IT budget request notes that IT infrastructure and information assurance funding supports both business systems and national security systems. As a result, DOD’s position is that shared IT infrastructure and information assurance funding cannot be classified as related to business systems or to national security systems. With regard to the second requirement, to identify the type of funding (i.e., appropriation) being requested and whether the funding was for current services or modernization, the fiscal year 2006 budget submission does so. However, a number of systems are assigned to a category designated “All Other.” It is not clear what is included in the budget submission under this category. In the fiscal year 2006 IT budget submission, this category totaled about $1.2 billion and included, for example, about $22.6 million for financial management. As we previously reported, the ASD(NII)/CIO and military services’ budget officials told us that the “All Other” category in the IT budget includes system projects that do not have to be identified by name because they fall below the $2 million reporting threshold for budgetary purposes. 
This budgetary threshold is not consistent with the $1 million threshold that the act requires for modernization review and approval, as discussed later in this report, and thus could affect DOD’s ability to identify all system investments that are subject to the requirements of the act. According to ASD(NII)/CIO officials, the fiscal year 2007 budget submission will identify all business systems for which planned spending is equal to or greater than $1 million. With respect to the third requirement, to identify the designated approval authority for each system, the fiscal year 2006 IT budget submission does so for most systems. However, the approval authority was not identified for 57 business systems. For example, the Navy’s C2 On-the-Move Network Digital Over-the-Horizon Relay system and the Defense Commissary Agency’s Enterprise Business System had a designated approval authority of “Other.” DOD officials told us that the department recognizes the need to improve the accuracy of its budget submission to provide better information to both DOD management and the Congress on the department’s business systems. Full compliance with the act’s requirements relative to budgetary disclosure is an important enabler of informed DOD budgetary decision making and congressional oversight. The lack of such disclosure, whether due to incomplete system repositories or incorrect system classification, hinders the department’s efforts to improve its control and accountability over its business systems investments and constrains the Congress’s ability to effectively monitor and oversee the billions of dollars spent annually to maintain, operate, and modernize the department’s business systems environment. The defense authorization act for fiscal year 2005 directs DOD to put in place a specifically defined structure that is responsible and accountable for controlling business system investments to ensure compliance and consistency with the business enterprise architecture. 
More specifically, the act directs the Secretary of Defense to delegate responsibility for review, approval, and oversight of the planning, design, acquisition, deployment, operation, maintenance, and modernization of defense business systems to designated approval authorities or “owners” of certain business missions. These are as follows: The Under Secretary of Defense for Acquisition, Technology, and Logistics is to be responsible and accountable for any defense business system the primary purpose of which is to support acquisition, logistics, or installations and environment activities. The Under Secretary of Defense (Comptroller) is to be responsible and accountable for any defense business system the primary purpose of which is to support financial management activities or strategic planning and budgeting. The Under Secretary of Defense for Personnel and Readiness is to be responsible and accountable for any defense business system the primary purpose of which is to support human resource management activities. The Assistant Secretary of Defense for Networks and Information Integration/Chief Information Officer of the Department of Defense is to be responsible and accountable for any defense business system the primary purpose of which is to support information technology infrastructure or information assurance activities. The Deputy Secretary of Defense or an Under Secretary of Defense, as designated by the Secretary of Defense, is to be responsible for any defense business system used to support any DOD activity not covered above. DOD has satisfied this requirement under the act. On March 19, 2005, the Deputy Secretary of Defense issued a memorandum that delegated the authority in accordance with the criteria specified in the act, as described above. 
Our research and evaluations, as reflected in the guidance that we have issued, show that clear assignment of senior executive investment management responsibilities and accountabilities is crucial to having an effective institutional approach to IT investment management. The defense authorization act for fiscal year 2005 also required DOD to establish investment review structures and processes, including a hierarchy of investment review boards, each with representation from across the department, and a standard set of investment review and decision-making criteria for these boards to use to ensure compliance and consistency with the business enterprise architecture. In this regard, the act cites three specific requirements. First, it requires the establishment of the DBSMC for overseeing DOD’s business systems modernization efforts, and it specifically identifies the DOD positions to chair and be members of this committee. Second, it requires each designated approval authority to establish by March 15, 2005, an investment review board for investments falling under that authority’s responsibility. Third, the act requires establishment of an investment review process that includes, among other things, the use of common decision criteria, threshold criteria to ensure appropriate levels of review and accountability, and at least annual reviews of every business system investment. DOD has partially satisfied this requirement in the act. Among other things, it has done the following. In February 2005, DOD chartered the DBSMC, identifying it as the highest-ranking governance body responsible for overseeing business systems modernization efforts. The DBSMC is responsible for ensuring that DOD improves its management and oversight of the department’s business systems. 
Consistent with the act, the DBSMC is chaired by the Deputy Secretary of Defense, and its members include those positions specified in the act: namely, the designated approval authorities previously discussed, the secretaries of the military services, and the heads of the defense agencies. The vice-chair of the committee is the Under Secretary of Defense for Acquisition, Technology, and Logistics. DOD established four investment review boards to improve the control and accountability over business system investments. The four are (1) Financial Management, (2) Human Resources Management, (3) Real Property and Installations Lifecycle Management, and (4) Weapon Systems Lifecycle Management and Materiel Supply and Services Management. Each is chaired by the appropriate approval and certification authority (see previous section) and has DOD-wide representation, including membership from the combatant commands, military services, defense agencies, and the Joint Chiefs of Staff. On June 2, 2005, the Under Secretary of Defense for Acquisition, Technology, and Logistics issued guidance entitled the Investment Review Process Overview and Concept of Operations for Investment Review Boards. This guidance integrates the policies, specifies responsibilities, and identifies the processes to govern the establishment and operation of investment review boards. Among other things, the guidance provides for these boards to review all business system investments, at least annually, and certify defense business system modernizations costing over $1 million, as required by the act. The guidance also specifies the certification process, including criteria to be used. 
On July 15, 2005, the Under Secretary of Defense for Acquisition, Technology, and Logistics issued supplemental guidance and criteria for the components (military services, defense agencies, and DOD field activities) to use in preparing their respective defense business system modernization submissions to the investment review boards. Overall, DOD’s investment structures and processes employ a concept that it refers to as “tiered accountability.” According to the department, tiered accountability is intended to place more responsibility for the management and oversight of business systems investments with the military services and defense agencies’ leaderships. Accordingly, DOD’s guidance describes a process in which business systems investments must be certified by multiple levels of approval and certification authorities, including the component program manager, the component-level precertification authority, the investment review board certification authority, and the DBSMC. As part of this process, a certification package for each system investment must be submitted to the approval authority, and this package is to include basic system information (e.g., system description and funding); justification as to how the system addresses enterprise-level or component-specific requirements; and analysis demonstrating compliance with the business enterprise architecture. A standard system certification template has been developed for use by all components and decision authorities. The act designates the ASD(NII)/CIO as one of five designated approval authorities for which an investment review board is to be established. According to the act and the Deputy Secretary’s March 19, 2005, memorandum, the ASD(NII)/CIO is responsible and accountable for any business system the primary purpose of which is to support IT infrastructure or information assurance activities. However, the ASD(NII)/CIO has not established an investment review board. 
According to DOD officials, a separate investment review board has not been established because the ASD(NII)/CIO does not consider the IT infrastructure, information assurance, and related activities that are under its purview to be business systems. They added that the ASD(NII)/CIO is represented on the other investment review boards and can thus oversee issues related to infrastructure and information assurance at those meetings. The absence of this investment review board is one reason that the department has, as yet, only partially satisfied this requirement in the act. In addition, a key aspect of the act and DOD’s tiered accountability approach is the effective implementation of the defined structures and processes. It is important that such implementation occur in a continuous and consistent fashion across the department, as we have previously stated. If it does not, the result could be investment decisions that perpetuate the existence of overly complex, error-prone, nonintegrated system environments and limit introduction of corporate solutions to long-standing business problems. The defense authorization act for fiscal year 2005 specifies two basic requirements, effective October 1, 2005, for obligation of funds for business system investments costing more than $1 million. First, it requires that these investments be certified by a designated “approval authority” as meeting specific criteria. Second, it requires that the DBSMC approve each certification. The act also states that failure to do so before the obligation of funds constitutes a violation of the Anti-Deficiency Act. The department has taken a number of actions to comply with these two requirements. 
As mentioned in the previous section, the department has established an investment review process, and this process requires, among other things, that any defense business system modernization costing more than $1 million obtain component precertification, investment review board approval, approval authority certification, and DBSMC approval. This process, as described in investment review board guidance (including DOD Business Systems Investment Review Proposal Submission Guideline), defines the information that programs are to submit to obtain certification for systems meeting certain thresholds, referred to as tiers. Further, the process states that the component’s precertification authority must certify that the system is not a duplicative effort and that it is compliant with the DOD business enterprise architecture before sending the system’s certification package forward to an investment review board. The department has identified 210 business system modernizations that meet this $1 million threshold and thus need to be approved by the DBSMC. Of the 210, 166 were approved by the DBSMC before September 30, 2005. The remaining 44 have yet to be approved. This means that under the law, DOD cannot obligate fiscal year 2006 funds for these 44 systems until they receive DBSMC approval. It is important to note, however, that the department can continue to invest in these systems by using funds that are still available from previous fiscal years. Just as with the identification of business systems in DOD’s IT budget submissions (discussed earlier), the extent to which DOD ultimately complies with the act with regard to obligations costing more than $1 million depends, in part, on the proper classification of systems as business versus national security. The following example illustrates this point. In its fiscal year 2006 budget, the department is requesting about $167 million for the modernization of the Army’s Global Combat Support System. 
The system, as we previously reported, was reclassified as a national security system in the fiscal year 2005 budget, even though it was included in the department’s reported inventory of about 4,200 business systems and approved by the DOD Comptroller in January 2004. Also, the DBSMC approved this Army system in September 2005, even though the system remains listed in the fiscal year 2006 IT budget request as a national security system. In contrast, the department is requesting about $31 million for the modernization of the Air Force’s version of this system (Global Combat Support System-Air Force) in its fiscal year 2006 budget. However, this system is not listed as one of the 210 systems requiring DBSMC approval, even though the system was reclassified as a business system in the fiscal year 2006 budget. Another issue that will affect the degree to which the department complies with the act is whether it relies on system certifications and approvals that preceded the act’s requirements. According to financial management investment review board officials, not all of the financial management systems were reviewed in accordance with the fiscal year 2005 act’s requirements. More specifically, four business systems that had already been reviewed in accordance with the criteria specified in the defense authorization act for fiscal year 2003 were granted DBSMC approval in August 2005 on the basis of this prior approval. Table 6 shows the specific systems, fiscal year 2006 modernization funding, and the date of the previous approval. However, the act does not provide for DBSMC approval based upon the previous review of a system. The act is specific in terms of what constitutes DBSMC review and approval, and these criteria were not followed for the above four systems. According to financial management investment review board officials, the systems listed in table 6 will go through the current investment review process no later than February 2006. 
The department’s actions to review and approve business systems investments can be viewed as work in process. According to DOD, it intends to perform the requisite reviews and approvals of all applicable systems before it obligates fiscal year 2006 funds. If it does, it will have complied with the act. The defense authorization act for fiscal year 2005 contains provisions aimed at strengthening DOD’s institutional approach to investing in IT business systems. To varying degrees, the department has satisfied six specific requirements in the act, and thus has made important progress in establishing the kind of fundamental management structures and processes that are needed to correct the long-standing and pervasive IT management weaknesses that have led to our designation of DOD business systems modernization as a high-risk program. This progress provides a foundation upon which to build. However, much more remains to be accomplished to fully satisfy the act and address the department’s IT management weaknesses, particularly with regard to sufficiently developing the enterprise architecture and transition plan and ensuring that investment review and approval processes are institutionally implemented. The road map for fully addressing these areas is embedded in our prior recommendations to the department. Therefore, we are not making additional recommendations at this time. In its written comments on a draft of this report, signed by the Deputy Under Secretary of Defense (Business Transformation) and reprinted in appendix II, the department recognized that our analysis, recommendations, guidance, and educational activities have made us a constructive player in DOD’s business transformation efforts. 
While not commenting on most of the findings in the report, the department also stated that it disagreed with us on two points—the level of development of an “As Is” architecture and consistency within and between the business enterprise architecture and the transition plan. With respect to the first point, DOD stated that the sheer size and scope of its business operations make development of a comprehensive “As Is” architecture an ineffective use of time and resources. Instead, according to DOD, while it understood that there needs to be an “easily traceable direct link” between the results of examining its “As Is” conditions and the “To Be” solutions, it maintained that the results of this “As Is” examination are not required to be in the enterprise architecture itself. According to DOD, such “As Is” related work “is more properly aligned with business process review than architecture management.” Notwithstanding these comments, DOD also stated that it was committed to documenting the “As Is” and “To Be” relationship in an appropriate manner. We agree that both the “As Is” and the “To Be” architectures need to be documented in an appropriate manner. To date, DOD has yet to document its “As Is” architecture in a manner consistent with best practices and federal guidance, and thus we stand by our previous recommendations concerning development of an “As Is” architecture, and we look forward to DOD fulfilling the commitment it made in its comments to address this void in its business enterprise architecture. In this regard, we also agree that developing what the department termed in its comments as a “comprehensive ‘As Is’ architecture” may not be an effective use of time and resources. 
Accordingly, our prior recommendations for an “As Is” architecture have neither presumed nor prescribed a specific level of comprehensiveness for this “As Is” description, beyond recognizing that it should be defined in accordance with widely accepted best practices and federal guidance. According to these practices and guidance, it should capture the current inventory of enterprise capabilities (in terms of business processes and performance measures) in sufficient scope and detail to permit meaningful analysis of capability gaps in the “To Be” architecture in those areas of the enterprise that are likely to change during the defined transition period. In addition, it should capture descriptions of the information/data, services/applications, and technology environments currently in use, so that transition planning activities can appropriately take into account and address such things as data redundancies, application duplication, shared services, and infrastructure capacity. Our prior recommendations were, however, clear that these “As Is” descriptions should be part of the enterprise architecture (as opposed to what DOD referred to as a business process review), because including such descriptions is a widely accepted best practice and a condition in federal guidance. With respect to the second point, DOD stated that great effort was made to integrate the business enterprise architecture and the transition plan and that “virtually all” of our examples demonstrating a lack of integration within and between the business enterprise architecture and the transition plan “would be more accurately described as misunderstandings regarding the scope, purpose or intent of the information presented.” It also stated that it was committed to correcting any integration issues. 
We agree that considerable effort was made to integrate architecture products and the architecture with the transition plan, and we acknowledge this in the report by stating that the integration of products in this version of the architecture was an improvement over prior versions. However, our “misunderstandings” arise directly from ambiguities and inconsistencies in the architecture products and the transition plan that blur their intended meaning. This is clear evidence that a well-defined architecture is needed and that current levels of ambiguity and inconsistency limit the utility and effectiveness of the products as reference tools for guiding and constraining system investment decisions. We agree with DOD that addressing these limitations will create better transformation tools that will benefit all stakeholders, most importantly those within the department. We are sending copies of this report to interested congressional committees; the Director, Office of Management and Budget; the Secretary of Defense; the Deputy Secretary of Defense; the Under Secretary of Defense for Acquisition, Technology, and Logistics; the Under Secretary of Defense (Comptroller); the Assistant Secretary of Defense (Networks and Information Integration)/Chief Information Officer; the Under Secretary of Defense (Personnel and Readiness); and the Director, Defense Finance and Accounting Service. This report will also be available at no charge on our Web site at http://www.gao.gov. If you or your staff have any questions on matters discussed in this report, please contact me at (202) 512-3439 or hiter@gao.gov, or McCoy Williams at (202) 512-6906 or williamsM1@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. 
Our objective was to assess the Department of Defense’s (DOD) efforts to comply with the requirements of the defense authorization act for fiscal year 2005. Consistent with the act and as agreed with congressional defense committees’ staffs, we evaluated DOD’s efforts relative to six provisions in the act: (1) development of an enterprise architecture that includes an information infrastructure enabling DOD to support specific capabilities, such as data standards and system interface requirements; (2) development of a transition plan for implementing the enterprise architecture that includes specific elements, such as the acquisition strategy for new systems; (3) inclusion of business system information in DOD’s fiscal year 2006 budget submission; (4) establishment of a business system investment approval and accountability structure; (5) establishment of a business system investment review process; and (6) approval of defense business system investments in excess of $1 million. To determine whether the architecture addressed the requirements specified in the act, we reviewed Version 3.0 of the business enterprise architecture, which was approved on September 28, 2005. This review included analyzing relevant criteria to identify the important architecture scope and content and comparing Version 3.0 architecture products to determine whether they provided this scope and content. In reviewing the products, we specifically focused on principles, business processes, business rules, and standards (e.g., process and data) because relevant criteria recognize that these are fundamental elements of a well-defined and enforceable architecture. In addition, we focused on consistency and completeness among the architecture products and their content (e.g., operational activities and functions to systems), as well as between the architecture and the transition plan. 
To do this, we traced linkages between the different architecture products to determine if these linkages had been specifically identified to ensure ease of stakeholder navigation and understanding. We also reviewed the traceability matrix prepared by DOD that documented the mapping of the architecture products to the act and interviewed program officials to obtain an understanding of the methodology used to prepare and validate the information in this matrix. In addition, we interviewed key program officials, including the Special Assistant to Business Transformation, Deputy Under Secretary of Defense (Financial Management), the Director of the Transformation Support Office, the Chief Architect, and the Enterprise Transition Plan Team Lead, to discuss the development and maintenance of the architecture products. To determine whether the transition plan addresses the requirements specified in the act, we reviewed the transition plan approved on September 28, 2005. This review included determining whether the transition plan included elements specified in the act, such as an acquisition strategy for new systems and a statement of financial and nonfinancial resource needs. We also reviewed the transition plan to ascertain the relationship between the plan and the architecture. We reviewed the traceability matrix prepared by DOD that documented the mapping of the transition plan elements to the act and interviewed program officials to obtain an understanding of the methodology used to prepare and validate the information in this matrix. In addition, we interviewed key program officials, including the Special Assistant to Business Transformation, the Deputy Under Secretary of Defense (Financial Management), the Director of the Transformation Support Office, the Enterprise Transition Plan Team Lead, and the Chief Architect, to discuss the development and maintenance of the plan. 
To determine whether DOD’s fiscal year 2006 information technology (IT) budget submission was prepared in accordance with the criteria set forth in the act, we reviewed and analyzed DOD’s approximately $30 billion fiscal year 2006 IT budget request. As part of our analysis, we determined what portion of the IT budget request related to DOD business systems. In addition, we compared the fiscal year 2005 and 2006 IT budget requests to determine the systems that were reclassified from business to national security systems, as well as from national security to business systems. We analyzed the 23 system reclassifications by using information in the IT budget requests and the department’s business system inventory. We also followed up with DOD officials to ascertain the department’s efforts to address our concerns regarding the reclassification of the 56 systems discussed in our April 2005 report. We also reviewed and analyzed the fiscal year 2006 IT budget request to ascertain whether the specific types of funds being requested were explicitly identified and whether an approval authority was designated for each business system. To determine whether DOD has put in place a specifically defined structure that is responsible and accountable for controlling business systems investments to ensure compliance and consistency with the business enterprise architecture, we reviewed applicable memorandums that had been issued by the department and interviewed cognizant departmental officials. To determine whether DOD has established investment review structures and processes and issued a standard set of investment review and decision-making criteria, we reviewed applicable policies and procedures issued by the department. In this regard, we reviewed the charter for each of the investment review boards. 
We also met with representatives from the Financial Management and the Weapon Systems Lifecycle Management and Materiel Supply and Services Management investment review boards to obtain an understanding of the specific roles and responsibilities of the investment review boards. In addition, we obtained an understanding of the tiered accountability approach being followed by the department to help improve its control over business system investments. We also reviewed the department’s May 17, 2005, document entitled “Investment Review Process Overview and Concept of Operations for Investment Review Boards.” To determine whether the department had established a process for the review of business system modernizations in excess of $1 million, we determined whether the department had identified the business systems that were subject to the $1 million threshold. For the 210 systems that the department identified as subject to the criteria set forth in the act, we reviewed the department’s July 2005 guidance entitled “DOD Business Systems Investment Review Proposal Submission Guideline.” In addition, we met with representatives from the Financial Management and Weapon Systems Lifecycle Management and Materiel Supply and Services Management investment review boards to obtain an understanding of how they used the guidance in the review of the systems for which they are accountable. We did not independently validate the reliability of the cost and budget figures provided by DOD, because the specific amounts were not relevant to our findings. We conducted our work at DOD headquarters offices in Arlington, Virginia, from August through November 2005 in accordance with U.S. generally accepted government auditing standards. Various enterprise architecture frameworks are available for organizations to follow. 
Although these frameworks differ in their nomenclatures and modeling approaches, they consistently provide for defining an enterprise’s operations in both (1) logical terms, such as interrelated business processes and business rules, information needs and flows, and work locations and users, and (2) technical terms, such as hardware, software, data, communications, and security attributes and performance standards. The frameworks also provide for defining these perspectives for both the enterprise’s current, or “As Is,” environment and its target, or “To Be,” environment, as well as a transition plan for moving from the “As Is” to the “To Be” environment. For example, John A. Zachman developed a structure or framework for defining and capturing an architecture. This framework provides for six windows from which to view the enterprise, which Zachman terms “perspectives” on how a given entity operates: those of (1) the strategic planner, (2) the system user, (3) the system designer, (4) the system developer, (5) the subcontractor, and (6) the system itself. Zachman also proposed six models that are associated with each of these perspectives; these models describe (1) how the entity operates, (2) what the entity uses to operate, (3) where the entity operates, (4) who operates the entity, (5) when entity operations occur, and (6) why the entity operates. Zachman’s framework provides a conceptual schema that can be used to identify and describe an entity’s existing and planned components and their relationships to one another before beginning the costly and time-consuming efforts associated with developing or transforming the entity. Since Zachman introduced his framework, a number of other frameworks have been proposed. In August 2003, the department released Version 1.0 of the DOD Architecture Framework (DODAF). 
The DODAF defines the type and content of the architectural products, as well as the relationships among the products that are needed to produce a useful architecture. (See app. IV for a list of the products prescribed by the DODAF.) Briefly, the framework decomposes an architecture into three primary views: operational, systems, and technical standards (see fig. 1). According to DOD, the three interdependent views are needed to ensure that IT systems support operational needs, and that they are developed and implemented in an interoperable and cost-effective manner. In September 1999, the federal Chief Information Officer (CIO) Council published the Federal Enterprise Architecture Framework (FEAF), which is intended to provide federal agencies with a common construct on which to base their respective architectures and to facilitate the coordination of common business processes, technology insertion, information flows, and system investments among federal agencies. FEAF describes an approach, including models and definitions, for developing and documenting architecture descriptions for multiorganizational functional segments of the federal government. Similar to most frameworks, FEAF’s proposed models describe an entity’s business, the data necessary to conduct the business, applications to manage the data, and technology to support the applications. In addition, the Office of Management and Budget (OMB) established the Federal Enterprise Architecture (FEA) Program Management Office to develop a federated enterprise architecture in the context of five “reference models” and a security and privacy profile that overlays the five models. The Business Reference Model is intended to describe the federal government’s businesses, independent of the agencies that perform them. This model consists of four business areas: (1) services for citizens, (2) mode of delivery, (3) support delivery of services, and (4) management of government resources. 
It serves as the foundation for the FEA. OMB expects agencies to use this model, as part of their capital planning and investment control processes, to help identify opportunities to consolidate IT investments across the federal government. Version 2.0 of this model was released in June 2003. The Performance Reference Model is intended to describe a set of performance measures for major IT initiatives and their contribution to program performance. According to OMB, this model will help agencies produce enhanced performance information; improve the alignment and better articulate the contribution of inputs, such as technology, to outputs and outcomes; and identify improvement opportunities that span traditional organizational boundaries. Version 1.0 of this model was released in September 2003. The Service Component Reference Model is intended to identify and classify IT service (i.e., application) components that support federal agencies and promote the reuse of components across agencies. This model is intended to provide the foundation for the reuse of applications, application capabilities, components (defined as “a self-contained business process or service with predetermined functionality that may be exposed through a business or technology interface”), and business services. According to OMB, this model is a business-driven, functional framework that classifies service components with respect to how they support business or performance objectives. Version 1.0 of this model was released in June 2003. The Data Reference Model is intended to describe, at an aggregate level, the types of data and information that support program and business line operations and the relationships among these types. This model is intended to help describe the types of interactions and information exchanges that occur across the federal government. Version 1.0 of this model was released in September 2004. 
The Technical Reference Model is intended to describe the standards, specifications, and technologies that collectively support the secure delivery, exchange, and construction of service components. Version 1.1 of this model was released in August 2003. The Security and Privacy Profile is intended to provide guidance on designing and deploying measures that ensure the protection of information resources. OMB has released Version 1.0 of the profile.

The products prescribed by the DODAF and their contents are as follows:

Overview and Summary Information (AV-1): Executive-level summary information on the scope, purpose, and context of the architecture.
Integrated Dictionary (AV-2): Architecture data repository with definitions of all terms used in all products.
High-Level Operational Concept Graphic (OV-1): High-level graphical/textual description of what the architecture is supposed to do, and how it is supposed to do it.
Operational Node Connectivity Description (OV-2): Graphic depiction of the operational nodes (or organizations) with needlines that indicate a need to exchange information.
Operational Information Exchange Matrix (OV-3): Information exchanged between nodes and the relevant attributes of that exchange.
Organizational Relationships Chart (OV-4): Command structure or relationships among human roles, organizations, or organization types that are the key players in an architecture.
Operational Activity Model (OV-5): Operations that are normally conducted in the course of achieving a mission or a business goal, such as capabilities, operational activities (or tasks), input and output flows between activities, and input and output flows to/from activities that are outside the scope of the architecture.
Operational Rules Model (OV-6a): One of three products used to describe operational activity; identifies business rules that constrain operations.
Operational State Transition Description (OV-6b): One of three products used to describe operational activity; identifies business process responses to events.
Operational Event-Trace Description (OV-6c): One of three products used to describe operational activity; traces actions in a scenario or sequence of events.
Logical Data Model (OV-7): Documentation of the system data requirements and structural business process rules of the operational view.
Systems Interface Description (SV-1): Identification of systems nodes, systems, and systems items and their interconnections, within and between nodes.
Systems Communications Description (SV-2): Specific communications links or communications networks and the details of their configurations through which systems interface.
Systems-Systems Matrix (SV-3): Relationships among systems in a given architecture; can be designed to show relationships of interest (e.g., system-type interfaces, planned versus existing interfaces).
Operational Activity to Systems Function Traceability Matrix (SV-5): Mapping of relationships between the set of operational activities and the set of system functions applicable to that architecture.
Systems Data Exchange Matrix (SV-6): Characteristics of the system data exchanged between systems.
Systems Performance Parameters Matrix (SV-7): Quantitative characteristics of systems and systems hardware/software items, their interfaces, and their functions.
Systems Evolution Description (SV-8): Planned incremental steps toward migrating a suite of systems to a more efficient suite, or toward evolving a current system to a future implementation.
Systems Technology Forecast (SV-9): Emerging technologies and software/hardware products that are expected to be available in a given set of time frames and that will affect future development of the architecture.
Systems Rules Model (SV-10a): One of three products used to describe system functionality; identifies constraints that are imposed on systems functionality due to some aspect of systems design or implementation.
Systems State Transition Description (SV-10b): One of three products used to describe system functionality; identifies responses of a system to events.
Systems Event-Trace Description (SV-10c): One of three products used to describe system functionality; lays out the sequence of system data exchanges that occur between systems (external and internal), system functions, or human role for a given scenario.
Physical Schema (SV-11): Physical implementation of the Logical Data Model entities (e.g., message formats, file structures, and physical schema).

In addition to the contacts named above, key contributors to this report were Cynthia Jackson and Darby Smith, Assistant Directors, and Beatrice Alff, Barbara Collier, Francine DelVecchio, Neelaxi Lakhmani, Anh Le, Mai Nguyen, Tarunkant Mithani, Freda Paintsil, Randolph Tekeley, and William Wadsworth.
For many years, the Department of Defense (DOD) has been attempting to modernize its business systems, and GAO has made numerous recommendations to help it do so. To further assist DOD, Congress included provisions in the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005 aimed at ensuring that DOD develop a well-defined business enterprise architecture and transition plan by September 30, 2005, as well as establish and implement effective structures and processes for managing information technology (IT) business system investments. In response to the act's mandate, GAO is reporting on DOD's compliance with requirements relating to DOD's architecture, transition plan, budgetary disclosure, and business system review and approval structures and processes. Given GAO's existing recommendations, it is not making additional recommendations at this time. In comments on a draft of this report, DOD recognized that GAO has been a constructive player in its business transformation efforts. While not specifically commenting on most of the report's findings and conclusions, DOD said that it disagreed with two points: the level of development for its "As Is" architecture and instances of nonintegration within the architecture and transition plan. However, it also commented that it is committed to addressing what GAO views to be the underlying basis of both points. In its efforts to comply with the act's provisions, DOD has made important progress in establishing needed modernization management capabilities. However, much more remains to be done. The latest version of the business enterprise architecture (Version 3.0), which the department approved on September 28, 2005, satisfies some but not all of the conditions of the act. For example, while Version 3.0 includes a target or "To Be" architecture, as required, it does not include a current ("As Is") architecture. 
Without this element, DOD could not analyze the gaps between the two architectures--critical input to a comprehensive transition plan. However, this version of the architecture represents significant progress and provides a foundation upon which the department can build. The transition plan associated with the current version of the architecture partially satisfies the act, but improvements are needed. Specifically, although it includes certain required information (such as milestones for major projects), it is inconsistent with the architecture in various ways. For instance, it identifies target systems (those that are to be included in the "To Be" architecture), but these are not always the same as those identified in the architecture itself. In addition, the transition plan does not include system performance metrics aligned with the plan's strategic goals and objectives. The department's fiscal year 2006 budget discloses some but not all required information. For example, it does not identify the approval authority for all business systems investments. DOD has satisfied some of the act's requirements regarding its business systems investments, but it either has not satisfied or is still in the process of satisfying others. For example, the department has fulfilled the act's requirement for delegating IT system responsibility and accountability to designated approval authorities as specified. In addition, DOD has largely satisfied the act's requirement to establish certain structures and define certain processes to review and approve IT investments. However, some of these structures are not yet in place, and some reviews and approvals to date have not followed the criteria in the act. 
DOD agrees that additional work is required and states that under its incremental approach to developing the architecture and transition plan, and under its tiered accountability structure for reviewing and approving business system investments, improvements will occur in its architecture, transition plan, budgetary disclosure, and investment management and oversight. If these improvements do not occur, DOD's business systems modernization will continue to be a high-risk program.
The military services face the challenge of dealing with a large backlog of facilities maintenance and repair and insufficient funding devoted to sustainment, restoration, and modernization. To address this issue, DOD is pursuing an installation strategy to reduce infrastructure and base operating costs and reshape military installations to meet the needs of the 21st century. After the Cold War, military force structure was reduced by 36 percent. Consequently, the Department was left with infrastructure it no longer needed for current military operations. To address this imbalance, the Department has undergone four rounds of base realignment and closure that have reduced its infrastructure holdings by about 21 percent. Even after the four rounds of base realignment and closure, the Department estimates that 20 to 25 percent of its infrastructure is not needed to meet current mission requirements. Meanwhile, service budgets frequently have been insufficient to address facility needs. In December 2001, Congress passed the National Defense Authorization Act for Fiscal Year 2002 giving the Department the authority for another round of base realignment and closure in 2005. The Department estimates it will save approximately $3 billion annually following these actions. Although the Department views the base realignment and closure process as having the greatest impact in terms of savings, it is only one initiative in a multi-part strategy to reshape the services’ installations and make them more efficient. Other important initiatives include, but are not limited to, housing and utility privatization, competitive sourcing of non-inherently governmental functions, demolition, and leasing of real property and facilities. DOD’s leasing authority can be traced back to the Act of July 28, 1892. 
The act provided general authority for the Secretary of War to enter into leases for a maximum of 5 years for property that was “not for the time required for public use.” The Navy received similar authority under a separate law in 1916. Neither statute permitted the services to retain cash proceeds or accept non-cash or “in-kind” consideration. Additionally, the Miscellaneous Receipts Act required all cash payments to be deposited in the Treasury. Congress expanded the Department’s leasing authority in 1947. The expansion permitted the service secretaries to enter into leases for longer periods, grant the lessee a first right to buy the property in case of sale, and accept in-kind consideration. The expansion also provided that in-kind consideration could be applied specifically to the leased property or to the entire installation, if a substantial part of the installation was leased. Congress also provided limited relief from the Miscellaneous Receipts Act by permitting the services to be reimbursed for the costs of utilities or services provided in connection with a lease. The basic authority remained relatively unchanged until 1990, when Congress amended 10 U.S.C. 2667 to establish special accounts for cash payments. The amendment required the services to use the accounts for environmental restoration or facilities maintenance and repair. The amendment provided that, to the extent provided in appropriation acts, half of the proceeds were to be returned to the installation where the property was located and the other half was to be available for use by the services. The services had the option of allocating some or all of a service’s half of the cash proceeds to the installation leasing the property or retaining it for any property owned by the service. Even with these amendments to 10 U.S.C. 2667, the Department believed that further revisions were needed to make the statute a better tool for utilizing its property. 
Section 2814 of the Strom Thurmond National Defense Authorization Act for Fiscal Year 1999 required the Department to provide Congress with an assessment of its authority to lease real property and proposed adjustments to 10 U.S.C. 2667. In its report, the Department proposed four changes that would have allowed the Department, in its view, to use its surplus capacity more effectively to further reduce installation support costs. The proposed changes included (1) allowing the use of cash proceeds without the additional step of congressional appropriation, (2) permitting environmental indemnification, (3) expanding the use of in-kind consideration, and (4) permitting new construction as in-kind consideration. Congress acted on these proposals but did not implement all of them. In the Floyd D. Spence National Defense Authorization Act for Fiscal Year 2001, Congress significantly expanded the services’ authority to accept in-kind consideration. Specifically, Congress expanded authorized use of in-kind consideration to include additional services, such as construction of new facilities. It also allowed service secretaries to accept in-kind consideration at any property or facility under their control, rather than at only the installation leasing the property. Congress made similar changes to the authority to use funds from the special accounts for cash payments. These accounts may now be used for acquisition of facilities and facilities operation support, as well as construction of new facilities. The Department of Veterans Affairs has had similar enhanced leasing authority since 1991, which permits it to lease property for the purpose of generating revenues to improve services to veterans. Appendix II provides examples of Veterans Affairs’ use of its enhanced leasing authority. The services have leased real property on their bases for years as a means to reduce infrastructure and base operating costs. 
The military services leased space for banks, credit unions, ATMs, storage, schools, and agricultural grazing. These projects served the needs of the community and generated modest amounts of revenues. From 1994 to 1998, the services entered into approximately 1,800 real property leases that generated $21.9 million. Agricultural and grazing leases comprised 36 percent of the total number of leases for all military Departments combined. Revenues from agricultural and grazing leases are retained to cover administrative costs of leasing and to cover financing of land-use management programs at installations. Service revenues from leasing increased to $10.7 million in fiscal year 1999, $14.4 million in fiscal year 2000, and $12.9 million in fiscal year 2001. These amounts do not include in-kind consideration. The Department estimates that, including in-kind consideration, the services collected the equivalent of $22 to $25 million annually for the 3-year period. This figure represents approximately one-third of 1 percent of the Department’s $6 billion facilities capital improvement requirement. In the Department of Defense’s 1999 leasing report to Congress, the Department estimated that the expanded leasing authority could increase its revenues to $100 to $150 million annually after the first 5 years of the expanded authority. To accomplish this, the Department expects the services to focus on larger and more complex leases, to include major development projects that involve real estate developers who lease the property, restore it, and in turn sublease the property to a variety of tenants. The services are also exploring ways to share in future revenues with developers as part of lease agreements. The services continue to use 10 U.S.C. 2667 for traditional leases, but the services have made limited efforts to use the expanded leasing authority, which was expected to result in larger and more complex projects. 
As a result, the services may not meet the Department’s expectations of generating $100 to $150 million in annual revenues from the expanded authority. To date, the Army has completed two projects based on the expanded authority and has identified several other potential projects. (See app. III for more details on the projects currently under consideration by the Army using the expanded leasing authority.) On June 21, 2001, the Army signed a lease with a developer, who will restore several buildings at Fort Sam Houston, San Antonio, Texas, and sublease them. The Army expects to receive $253 million in revenue over the next 50 years from this project. On September 26, 2001, the Army signed a 33-year lease with the University of Missouri, which will develop and sublease 62 acres on Fort Leonard Wood, Missouri, for a technology park. The University of Missouri Systems and the State of Missouri will provide an initial investment of $4 million. According to an Army official, the Army will receive $500 annually for each subleased acre and 7 percent of the net proceeds collected from the sublease. This project will enhance the installation’s mission by enabling industry and academic partners to co-locate on the installation. According to Air Force and Navy officials, they are in the process of identifying potential projects that would use the expanded leasing authority. However, as noted below, the services have cited numerous factors that were likely to limit the use of the expanded leasing authority. The services have identified a number of factors that have limited the use of the expanded leasing authority and that could adversely affect the program in the future. However, the Army’s leasing experience indicates that leasing opportunities may exist notwithstanding these factors. 
A significant factor that could hinder the use of the expanded leasing authority may be the absence of strong program emphasis, including detailed program guidance and goals and a financial system capable of tracking revenues and in-kind consideration from leases. The services have identified a number of impediments that have made them cautious about using the expanded leasing authority. Some of their concerns stem from the congressionally authorized round of base realignment and closure scheduled for 2005 and from force protection issues arising from the events of September 11. Other potential impediments include mission compatibility, budget implications, legal requirements, and resource availability. Navy and Air Force officials cite the planned base realignment and closure process authorized for 2005 as one of the main obstacles to expanding their leasing efforts in the short term. The services are hesitant to lease property on bases that might be subject to a base realignment and closure action or may be required for future mission needs. Navy officials expressed concern about having to terminate leases if an installation should subsequently be subject to a base realignment and closure action, citing costs it had incurred under similar circumstances. For example, Navy officials stated they had to maintain the utilities at a base in El Toro, California, for a year after the base was closed because the Navy could not terminate a lease without incurring substantial costs. The services also want to reserve property in the event that they have to accommodate missions from realigned or closed installations. An Air Force official stated that leased property might be needed for missions transferring from realigned or closed bases. The official added that the Air Force has significantly reduced its infrastructure by demolishing over 300,000 square feet of property and closing 31 bases in the previous base closure rounds. 
Thus, according to Air Force officials, there are not as many opportunities to lease. Also, according to a Navy official, laws and regulations, community interest, and the local congressional delegation can limit the service's ability to terminate leases, making the leases nearly irreversible commitments of assets. Consequently, the Navy and Air Force are hesitant to use the expanded leasing authority until the future base realignment and closure process identifies those installations that will be closed or realigned. All three of the services expressed concern about the impact of leasing on force protection and base security issues. For example, according to service officials, installation commanders are concerned about their ability to strengthen security and limit base access if they open their bases to private tenants. The events of September 11, 2001, have increased their concerns about these issues. Despite the need for increased emphasis on force protection and security, according to an Army official, the services may be able to mitigate the impact of force protection issues somewhat by locating leasing projects near the periphery of an installation. In addition, heightened security may be an advantage in attracting lease projects. The Army, for example, has chosen to emphasize the benefits of heightened security to potential leasing clients. It will promote additional security measures as a benefit in future lease proposals. Service officials also cited mission compatibility as an obstacle to leasing projects for some installations. These officials indicate that they do not want to create new missions on their installations and have issued memoranda stating that leases should be consistent with an installation's mission. However, according to service officials, finding projects that are mission related could be difficult. 
For example, the Navy has turned down proposals to lease and develop naval property because the leases would have conflicted with the Navy's mission. According to a Navy official, the Navy is concerned that the more involved it becomes with a community through leasing projects, the less flexibility and control it has over its installation. Furthermore, some officials have indicated that generating interest in leasing Navy properties is difficult because naval buildings and property generally have very specific uses and may not be easily modified to satisfy the needs of potential lessees. For example, naval shipyards have very specialized missions that limit the activities that can be conducted on them. Similarly, Air Force officials are concerned that joint use of an installation could compromise its mission. For example, if a private firm wanted to lease an aircraft hangar and allow private aircraft to take off and land, the Air Force would then have to coordinate those private flights with its flight schedule, which could affect its mission. The services may be able to overcome this issue by subleasing to government contractors and other service units that are currently leasing private property, and they may be able to find lease projects with private companies that reinforce their missions. For example, the Army is hoping to take advantage of San Antonio's medical industry to identify and attract leases at Fort Sam Houston, which has a large medical mission. Similarly, the Army is structuring a lease that would provide for a joint-use hot-weather test track in Yuma, Arizona. The Army would be able to test the durability of its vehicles in desert conditions in conjunction with a private vehicle manufacturer. Section 2667 of title 10, United States Code, provides that at least 50 percent of lease revenues must be returned to the installation where the lease is located. 
The Department and services view this as an incentive to installation commanders to identify and lease available property to help defray base operating support costs. However, according to the Department of Defense's leasing report to Congress, the Office of Management and Budget and Congress may view lease revenues as a substitute for direct appropriations and may reduce the Department's appropriation dollar-for-dollar by the increase in lease revenue. The Department may in turn reduce the services' budgets, thus reducing or eliminating an incentive for them to identify and lease additional properties. This disincentive may be offset to some extent by the expanded leasing authority's broadened use of in-kind consideration to include additional services and new construction. In addition, in-kind consideration can remain at the installation, which allows the installation to immediately realize all of the benefits. Department and service information has indicated that the McKinney-Vento Homeless Assistance Act, the National Historic Preservation Act, and environmental indemnification issues can discourage leasing of their facilities. However, others suggest that this is not always the case. The Department's report to Congress stated that the McKinney-Vento Homeless Assistance Act could discourage leasing. The McKinney-Vento Act mandates that providers for the homeless must be given an opportunity to use federal real property identified as not currently needed for mission requirements. However, service officials have found that while compliance with the McKinney-Vento Act is a time-consuming process, it does not necessarily impede their ability to respond to leasing opportunities. Also, service officials stated that the National Historic Preservation Act could hinder the leasing program. Many of the buildings on the three services' installations are historic properties and are protected by the National Historic Preservation Act. 
For example, the Army estimates that approximately 15,000 of its properties are listed on or eligible for the National Register of Historic Places. Service officials stated that numerous regulations on maintenance, preservation, and restoration of historic properties could limit a leasing project's success by limiting the developer's ability to attract tenants. Specifically, at Army property leased at Fort Sam Houston (where, according to Army officials, 57 percent of the buildings are historic), the state historic preservation office wanted the developer to retain walls that were blocking natural light. Through lengthy negotiations, the developer was able to convince preservation officials that, without the ability to design space with natural light, it would be unable to secure a sufficient number of tenants to make the lease profitable. While the National Historic Preservation Act can create issues for a developer, the act can also be an incentive because of the potential tax credits a developer can receive for restoring historic property. For example, even though leased property is involved, the developer at Fort Sam Houston is seeking tax credits for the property, which he stated might be used to lower the rental rate of its subleases, including leases to the federal government. If the developer at Fort Sam Houston is successful, the tax credits could potentially attract developers and lessees to installations that would otherwise not be considered desirable due to location or other issues. In addition, a DOD official stated that the services could capitalize on their historic property by marketing the property to the film industry, which could generate substantial revenue. The Department's report and service officials stated that environmental indemnification (i.e., holding the lessee harmless from liability for Department-related environmental contamination) is also a significant barrier to leasing. 
According to DOD, there is a perception in the private sector that military property has a high potential for being contaminated, even when current studies indicate otherwise. Potential lessees who are concerned about the liability for cleanup costs under the Comprehensive Environmental Response, Compensation, and Liability Act may be discouraged from leasing military property. Although the Department has stated that under any leasing arrangement it is responsible for all environmental cleanup costs, potential lessees may be reluctant to enter into agreements without indemnification. Limited resources, including well-trained personnel and funds, may also impede the services' leasing efforts. The expanded authority, to the extent used or envisioned, could involve large, complex real estate transactions that require experienced legal and real estate personnel to complete. According to service officials, the lack of a sufficient number of staff members with the necessary real estate knowledge is an impediment to expanding leasing efforts. Service officials added that installation commanders—whom the services are relying on to identify potential leasing opportunities and prepare business cases supporting the projects—have not received any formal training and lack the necessary expertise. In addition, according to service officials, the services are reluctant to assume the risks of expending their limited resources on potential projects that may not result in a lease. According to Navy officials, the Navy has a limited number of trained real estate staff, and many of them are involved with higher priority issues, such as utility privatization and its Ford Island development project. One Navy official stated that installation personnel are not trained to identify, complete, and manage leasing projects. Air Force officials expressed similar concerns, stating that installation commanders are not currently trained to manage property. 
Likewise, the Air Force has dedicated its personnel to other priority projects, including its demonstration project at Brooks Air Force Base, limiting its ability to undertake additional leasing projects. To address the shortage of personnel, the Army at Fort Sam Houston converted its Total Quality Management Office into a business practices office to handle the leasing project. As a result of these efforts, the Army has projected that it will receive approximately $253 million in revenue over the lease's 50-year term. This has led the Army to encourage its major commands to establish business practices offices at their installations to handle, among other things, leasing functions. The services lack a strong program emphasis that would encourage the use of the expanded leasing authority. They have neither identified program goals in terms of desired savings and timelines for achieving them nor developed implementation guidance. In addition, the services have not accurately accounted for existing lease revenue, and their accounting systems are not equipped to track in-kind consideration. The military services control and are responsible for the operation of their installations; therefore, DOD has essentially deferred to the military services to establish program guidance for implementing the expanded leasing authority. However, the services have not developed this guidance to include measurable goals and detailed procedures that will enable them to take full advantage of the expanded authority. Each service has issued policy memoranda outlining the goals and purpose of the expanded leasing authority, but these memoranda generally reiterate the Office of the Secretary of Defense's overall goal of expanding leasing efforts to reduce base operating costs and to improve installation efficiency. The services' memoranda do not identify measurable goals in terms of the amount of savings the services want to achieve and when they want to achieve them. 
Additionally, the services have not provided detailed guidance, such as criteria for identifying facilities and space available for leasing, or a methodology to identify those projects that have the potential to return the most lease revenue. For example, although the Army is aggressively pursuing lease projects that could potentially generate millions of dollars in savings, it has not selected these projects systematically or determined how many projects it can successfully undertake given the complex nature of the leases. Instead of a formal management framework, the services have relied upon installation commanders to identify and pursue leasing opportunities. Service officials admit that many installation commanders may not be adequately prepared to handle these duties, as they lack personnel with both real estate and leasing experience. Where leasing has occurred, historically, the services have not accurately accounted for lease revenue, and their accounting systems are not equipped to track in-kind consideration received in lieu of cash. In the case of cash revenues, the law provides that at least 50 percent of the revenue must be returned to the installation where the leased property is located. According to service officials, returning lease revenue acts as an incentive to installation commanders to identify and lease as much of their real property as is reasonable. However, we found that two of the three services were unable to accurately track cash revenues, which resulted in installations from two services receiving less revenue than anticipated or no revenue at all: In fiscal year 2000, Air Force installations reported that they should have received about $2.1 million in lease revenue. However, DOD's treasury leasing account records showed that Air Force installations deposited only about $1.4 million in the account, resulting in a $700,000 discrepancy, which the Air Force has yet to reconcile. 
Because of the $700,000 discrepancy, the Air Force pro-rated the lease revenues, giving each installation and its major command a share of the $1.4 million but not necessarily 100 percent of the revenue they had generated, which is ordinarily Air Force policy. The Air Force is unable to identify whether the $700,000 was collected or incorrectly recorded into another account. According to Department records, the treasury leasing account showed that the Navy deposited $4.7 million in lease revenue in fiscal year 2000. However, the Navy's Financial Management and Budget Office is unable to identify the source of 48 deposits totaling approximately $800,000, and, therefore, the Navy has not distributed $2.35 million (50 percent of the revenues) back to the installations, as provided by 10 U.S.C. 2667. The Navy has, however, already distributed the other 50 percent of the revenue for other service needs. Each of the services lacks a service-wide accounting system to track in-kind consideration, which can be accepted in lieu of cash payments and can include construction of new facilities or maintenance and repair services. In-kind consideration currently accounts for about 40 percent of lease revenue, according to Department of Defense officials, who encourage in-kind consideration as an alternative to cash revenue. While the expanded authority gave the services the ability to use in-kind consideration at any installation under their control, the lack of visibility over in-kind consideration at the service level limits the services' ability to accurately account for a significant portion of their leasing revenue. Consequently, the services may be unable to determine the success of their leasing efforts, which may limit their ability to use in-kind consideration for their highest priority projects. 
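The pro-rating the Air Force applied to the fiscal year 2000 shortfall can be illustrated with a minimal sketch. The installation names and individual amounts below are illustrative assumptions (the report gives only the $2.1 million reported and $1.4 million deposited totals); only the arithmetic of proportional distribution is shown.

```python
# Illustrative sketch of pro-rating deposited lease revenue across
# installations. Base names and per-base amounts are hypothetical;
# they are chosen to sum to the reported $2.1 million.

reported = {"Base A": 1_200_000, "Base B": 600_000, "Base C": 300_000}
deposited = 1_400_000  # amount actually recorded in the treasury leasing account

total_reported = sum(reported.values())  # $2.1 million
prorated = {base: deposited * amount / total_reported
            for base, amount in reported.items()}

# Each installation receives its proportional share of the $1.4 million
# rather than 100 percent of what it reported; the $700,000 gap between
# reported and deposited revenue remains unreconciled.
for base, share in prorated.items():
    print(f"{base}: ${share:,.0f}")
```

Under these assumed figures, each base would receive two-thirds of the revenue it reported (for example, $800,000 against $1.2 million reported), which mirrors the report's point that installations did not receive their full shares.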
In an era of reduced budgets for infrastructure and base operating costs, leasing can be an important tool that allows the services to help meet some of their most critical infrastructure needs. We recognize that the impediments identified by the services are likely to limit the use of the expanded leasing authority somewhat. However, recent and ongoing efforts by the Army to use the expanded authority suggest that, with sufficient emphasis, opportunities may still exist to lease under this expanded authority. At present, the program lacks needed emphasis and planning in terms of formally developed goals or detailed guidance. Consequently, the services are not systematically identifying potential lease projects and have not determined how many of these projects to undertake at one time. In addition, revenue from existing lease projects has not been accurately accounted for and distributed to installations, which may discourage installation commanders from initiating projects under the expanded leasing authority. In-kind consideration represents approximately 40 percent of the benefits from these existing leases and is expected to increase. However, the services have not accounted for these receipts, which may prevent the services from assessing the full extent of their success. To make better use of the expanded leasing authority, we recommend that the Secretary of Defense require the Under Secretary of Defense for Acquisition, Technology, and Logistics to work with the Secretaries of the Air Force, Army, and Navy to place greater emphasis on an expanded leasing program in the form of program goals and measurements to monitor progress in reducing infrastructure and base operations costs; specific program guidelines, such as criteria for project selection; and accurate accounting for all cash revenues, along with development of a new system to account for in-kind consideration to ensure that all of the benefits from leasing are captured. As you know, 31 U.S.C. 
720 requires the head of a federal agency to submit a written statement of the actions taken on our recommendations to the Senate Committee on Governmental Affairs and the House Committee on Government Reform not later than 60 days after the date of this report. A written statement must also be sent to the House and Senate Committees on Appropriations with the agency's first request for appropriations made more than 60 days after the date of this report. In commenting on a draft of this report, the Deputy Under Secretary of Defense (Installations and Environment) generally concurred with most of our recommendations but only partially concurred with the recommendation to develop program goals and measurements to monitor progress in reducing infrastructure and base operations costs. In partially concurring with this recommendation, the Department noted two policy memoranda it had already issued identifying goals and objectives, and stated that while it believes there are opportunities to increase the number and scope of leases under the expanded authority, any such increase is dependent on a number of factors affecting individual projects. It was noncommittal regarding development of additional program goals. We found that while the Department has issued general program guidance, that guidance does not contain specific goals and measurements for tracking progress in using the expanded leasing authority. We continue to believe that, despite likely limitations in the program, as outlined in the report, development of goals and measurements to monitor progress is important to fostering increased program emphasis. This is especially important because, as noted in the Department's comments, use of the expanded leasing authority is a key element of the Department's efficient facilities initiative. Therefore, we are making no change to our recommendation. 
The Department also provided observations on the challenges it faces in identifying and implementing projects under the expanded leasing authority. Among them are such challenges as identifying land and/or buildings that have sufficient market appeal to attract one or more private sector or public entities, as well as be of sufficient size and scope to permit a sufficient rate of return to the developer for the project to be accomplished. We agree that these are significant challenges along with others we have pointed out in our report. The Department’s comments are included in this report as appendix IV. We are sending copies of this report to the Secretary of the Army, the Secretary of the Navy, the Secretary of the Air Force, the services’ offices of installations and environment, and interested congressional committees and members. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-8412 if you or your staff has any questions concerning this report. Major contributors to this report are listed in appendix V. To assess the extent to which the services have used the expanded leasing authority since its enactment in fiscal year 2001, we identified current leasing projects and talked to services officials and private sector representatives. In addition, we visited an installation that has a project using the expanded leasing authority. 
Specifically, we interviewed officials at the Office of the Secretary of Defense; Office of the Assistant Secretary of the Navy for Installations and Environment, Rosslyn, Virginia; Office of the Assistant Secretary of the Army for Installations and Environment, Washington, D.C.; Office of the Assistant Chief of Staff for Installation Management, Army Headquarters, Washington, D.C.; Office of the Deputy Assistant Secretary of the Army for Resource Analysis and Business Practices, Washington, D.C.; Naval Facilities Engineering Command Headquarters, Washington, D.C.; Air Force Real Estate Agency, Bolling Air Force Base, Washington, D.C.; Naval Sea Systems Command, Washington, D.C.; and Naval Air Systems Command, Crystal City, Virginia. In addition, we visited Fort Sam Houston, San Antonio, Texas, where the Army recently completed a lease under the expanded leasing authority. To identify factors that limited the services' use of the new authority, we identified and reviewed congressional legislation; Department of Defense and service memoranda, policies, and procedures; and accounting records. In addition to the officials listed above, we interviewed officials in the Office of Management and Budget, Washington, D.C.; Department of Defense's Office of the Comptroller, Washington, D.C.; Army Financial Management and Comptroller Office, Washington, D.C.; Navy Financial Management and Budget Office, Washington, D.C.; Air Force Financial Management and Budget Office; Air Force's Civil Engineers Operation and Maintenance Division, Crystal City, Virginia; Defense Finance and Accounting Service, Denver, Colorado, and Cleveland, Ohio; U.S. Army Corps of Engineers, Washington, D.C.; and private sector representatives from Roy F. Weston, Inc., and Orion Partners, Inc., San Antonio, Texas. We conducted our review between June 2001 and April 2002 in accordance with generally accepted government auditing standards. 
Title 38 U.S.C., sections 8161-69, provides the Department of Veterans Affairs the authority to leverage its property into needed facilities, services, or resources. Veterans Affairs can lease underutilized property for up to 75 years in return for cash or in-kind consideration. Veterans Affairs has used its enhanced-use leasing authority to lease space for children's centers, offices, parking garages, health centers, residential lodging, and other purposes. For example, in Texas, Veterans Affairs leased unused land on its medical campus to a developer. The developer constructed a Veterans Affairs regional office building as well as other buildings and rented space to commercial businesses. According to Veterans Affairs, the project saved $6 million on construction and $10 million in operating costs and produced annual revenue for Veterans Affairs through revenue sharing with the developer. In Indiana, Veterans Affairs leased underutilized land and facilities to the state to use as a psychiatric care facility. Veterans Affairs estimates it obtained $15.7 million in financial benefits and $5 million per year in operational savings. The lease revenue that Veterans Affairs receives from both sites funds veterans programs. Veterans Affairs' enhanced-use leasing authority has been in effect since 1991 and has been extended four times to a current expiration of December 31, 2011. To date, Veterans Affairs has approved 16 projects, and 11 have been completed. According to Veterans Affairs officials, these projects have been successful, and the Department's experiences could provide a framework for the Department of Defense's expanded leasing efforts. 
In addition, Veterans Affairs has studied over 100 initiatives, of which more than 50 are “in development.” The Army has four projects under consideration using the expanded leasing authority that it believes will reduce base operating costs, at Picatinny Arsenal, Rock Island Arsenal, Yuma Proving Ground, and Walter Reed Army Medical Center. The Army proposed leasing four buildings at Picatinny Arsenal for joint military and commercial use as laboratories, light manufacturing, education/training, and administrative facilities. On July 2, 2001, Picatinny Arsenal signed a conditional lease with a developer. The installation and developer are currently drafting their Business and Leasing Plan for approval by the Department of the Army. At Rock Island Arsenal, the Army has identified 14 buildings to lease under a joint use agreement, which would allow a private sector developer to market the facilities. Rock Island Arsenal is currently developing its Notice of Availability to lease, which serves as the basis for selecting a developer. At Yuma Proving Ground, the Army is seeking a private-sector developer to construct a Hot Weather Test Complex. Yuma Proving Ground is currently drafting a Report of Availability. As in-kind consideration, Yuma Proving Ground would also be able to use the test track for mission requirements. At Walter Reed Army Medical Center, the Army has identified one building to be restored and utilized as an office building for a health care or biomedical research organization, which is compatible with Walter Reed's mission. The building has historical significance and needs to be preserved. Estimated renovation costs are over $40 million, which the Army envisions would be incurred by the developer. In addition to those named above, Tommy Baril, Tinh Nguyen, Robert Ackley, Susan Woodward, and Nicole Carpenter made key contributions to this report.
The military services face significant challenges in addressing facility sustainment, restoration, and modernization needs with limited funds. These challenges are magnified by the 20 to 25 percent of the Department of Defense's (DOD) real property that it views as not being needed to meet current mission requirements but that adds to costs. To reduce these costs and acquire additional resources to maintain its facilities, DOD has developed a multi-part strategy involving base realignment and closure, housing and utility privatization, competitive sourcing of non-inherently governmental functions, and demolition of facilities that are no longer needed. Although the services continue to use the leasing authority provided for traditional types of leases, they have made limited efforts to use the expanded leasing authority enacted by Congress in fiscal year 2001. The services have identified a number of impediments that have limited the use of the expanded leasing authority and that could adversely affect the program in the future.
Our preliminary analysis of NSF data indicates that for fiscal years 2000 through 2016, indirect costs on NSF awards ranged from 16 percent to 24 percent of the total annual amounts the agency awarded, though the percentage generally has increased since 2010. In fiscal year 2016, for example, NSF awards included approximately $1.3 billion budgeted for indirect costs, or about 22 percent of the total $5.8 billion that NSF awarded. Figure 1 illustrates annual funding for indirect costs over the 17-year period. NSF officials told us that variation in indirect costs from year to year can be due to a variety of factors, such as (1) differences in the types of organizations awarded, (2) the types of activities supported by the individual awards (research vs. individuals or students vs. infrastructure), (3) the type of research activity, and (4) the disciplinary field of awards. As part of our ongoing review, we plan to conduct further analysis of these factors. The indirect costs on individual awards varied more widely than the year-to-year variations for each award. Most NSF awards included indirect costs in their budgets—for example, about 90 percent of the 12,013 awards that NSF made in fiscal year 2016 included indirect costs. Our preliminary analysis of those awards indicated that the proportion of funding for indirect costs ranged from less than 1 percent to 59 percent of the total award. Our preliminary analysis also indicates that average indirect costs budgeted on awards varied across types of awardees. NSF's data categorized awardees as federal; industry; small business; university; or other, a category that includes nonprofits and individual researchers. Figure 2 illustrates our preliminary analysis on the average percentage of total awards budgeted for indirect costs in fiscal year 2016, by type of awardee. 
As shown in the figure, our preliminary analysis indicates that university awardees had the highest average indirect costs—about 27 percent of the total amount of awards—and federal awardees had the lowest average indirect costs—about 8 percent of the total amount of awards. According to NSF officials, certain types of projects, such as those carried out at universities, typically involve more indirect costs than others. The officials said that this is because, for example, of universities’ expense of maintaining scientific research facilities, which may be included as an indirect cost in awards. Because universities receive the bulk of NSF’s award funding and have relatively high indirect costs, our preliminary analysis of NSF data indicates that universities accounted for about 91 percent of the approximately $1.3 billion budgeted for indirect costs in fiscal year 2016. As previously noted, NSF does not set the indirect cost rate for the universities to which it makes awards, as those rates are set by HHS or DOD. Our analysis also showed that awards to organizations for which NSF had cognizance (e.g., nonprofits, professional societies, museums, and operators of large shared-use facilities) had lower average budgeted indirect costs than awards to organizations for which other federal agencies had cognizance. As shown in figure 3, our preliminary analysis of NSF data indicates that, on average, NSF budgeted about 23 percent of award amounts for indirect costs on awards to organizations for which NSF did not have indirect cost cognizance and about 11 percent for indirect costs on awards to organizations for which NSF had cognizance. Our preliminary observations show that in fiscal year 2016, NSF made most of its awards to organizations for which it did not have cognizance. 
Our preliminary observations show that among the approximately 110 organizations for which NSF has cognizance, negotiated indirect cost rates can vary because of the type of work being funded by awards and the ways in which different organizations account for their costs. For example, salaries for administrative or clerical staff may be included as either an indirect or direct cost, as long as they are consistently treated across an organization’s awards. Our preliminary analysis of the rate agreement case files for nine organizations in a nongeneralizable sample of files we reviewed showed the rates ranged from 5.5 percent to 59.8 percent. An organization may choose to budget indirect costs for an award at a level close to its negotiated indirect cost rate for the organization, or it may choose to budget the costs differently. For example, one of the organizations in our sample had a negotiated indirect cost rate of 51 percent in fiscal year 2016. In that year, the organization received one NSF award for $535,277 that budgeted $180,772 for indirect costs (or about 34 percent of the award)—a calculated indirect cost rate on the award of about 51 percent. Another organization in our sample had a negotiated indirect cost rate of 5.5 percent in 2016, and one of its NSF awards in fiscal year 2016, for $1,541,633, did not budget for any indirect costs. We based our preliminary analyses of indirect costs on data from the budgets of NSF awards—the only available NSF data on indirect costs. According to NSF officials, prospective awardees are required to provide direct and indirect costs in their proposed budgets using the organization’s negotiated indirect cost rate. After an award is made, NSF does not require awardees to report information about indirect costs when requesting reimbursements for work done on their awards for projects. 
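The distinction between an award's indirect share and its calculated rate can be sketched with the figures from the first award above. This is a minimal illustration assuming the rate is applied to a simple direct-cost base (total award minus indirect costs); actual rate agreements may use other bases, such as modified total direct costs.

```python
# Illustrative sketch: indirect costs as a share of the total award versus
# the calculated rate on the direct-cost base. Figures are from the award
# described in the text; the direct-cost base is a simplifying assumption.

def indirect_share_of_award(total_award: int, indirect: int) -> float:
    """Indirect costs as a fraction of the total award amount."""
    return indirect / total_award

def calculated_rate_on_direct_costs(total_award: int, indirect: int) -> float:
    """Indirect costs as a fraction of the direct-cost base (total minus indirect)."""
    return indirect / (total_award - indirect)

total, indirect = 535_277, 180_772
print(round(indirect_share_of_award(total, indirect) * 100))       # 34 (percent of total award)
print(round(calculated_rate_on_direct_costs(total, indirect) * 100))  # 51 (percent of direct costs)
```

This shows why an award can have a 51 percent calculated rate, matching the organization's negotiated rate, while indirect costs make up only about 34 percent of the total award: the rate is applied to direct costs, not to the award total.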
Specifically, NSF’s Award Cash Management $ervice—NSF’s online approach to award payments and post-award financial processes—does not collect data about indirect costs, although OMB guidance permits NSF to do so. According to NSF officials, collecting such data would unnecessarily increase the reporting burden on awardees. Our preliminary review of NSF’s guidance for setting indirect cost rates and a nongeneralizable sample of nine indirect cost rate files indicates that NSF has issued internal guidance that includes procedures for staff to conduct timely and uniform reviews of indirect cost rate proposals, collect data, set rates, and issue letters to formalize indirect cost rate agreements. The guidance also includes tools and templates to help staff set rates consistently, as well as procedures for updating the agency’s tracking system for indirect cost rate proposals. However, in our preliminary analysis, we found that (1) NSF staff did not consistently follow guidance for updating the tracking system, (2) the guidance did not include specific procedures for how supervisors are to document their review of staff workpapers, and (3) NSF had not updated the guidance to include procedures for implementing new provisions issued under the Uniform Guidance. In 2008, NSF created a database to track indirect cost rate proposals and developed guidance for updating the tracking database with proposal information. However, our preliminary analysis of reports from the tracking database indicates that NSF staff have not consistently followed the guidance for updating the tracking database with current data about the awardees for which NSF has cognizance and the status of indirect cost rate proposals. 
For example, in our preliminary analysis, we identified eight awardees for which NSF was no longer the cognizant agency but that still appeared in the tracking database on a list of organizations with overdue proposals. Cognizance for these awardees had been transferred to other agencies from 2009 through 2014. In addition, we identified 46 instances in which NSF staff had not followed the guidance to update the tracking database to reflect the current status of awardees’ proposals, including instances in which the tracking database was missing either the received date or both the received and closed dates. In addition, while NSF’s guidance describes procedures that staff are to follow for setting indirect cost rates, it includes only broad procedures for supervisory review—NSF’s primary quality control process for setting indirect cost rates. The guidance does not describe specific steps that supervisors need to take when reviewing the work performed by staff when setting indirect cost rates, nor does it describe how supervisors should annotate the results of their reviews in the workpapers. In our preliminary review of a nongeneralizable sample of nine NSF rate files, we did not find any documentation that a supervisor had reviewed the work performed by staff, such as verifying that staff had checked the accuracy of the total amount of awards over which an awardee’s indirect costs were distributed. Such reviews are meant to provide reasonable assurance that only allowable, allocable, and reasonable indirect costs have been proposed and that such costs have been appropriately allocated to federally funded awards. Moreover, our preliminary observations on NSF’s guidance indicate that it does not include procedures for implementing certain aspects of OMB’s Uniform Guidance, which became effective for grants awarded on or after December 26, 2014. 
For example, a new provision under the Uniform Guidance allows research organizations that currently have a negotiated indirect cost rate to apply for a onetime extension of that rate for a period of up to 4 years; however, NSF guidance does not specify criteria for NSF staff to determine the circumstances under which an awardee could be given an extension. In closing, I would note that we are continuing our work to examine NSF’s data on indirect costs for its awards over time and its implementation of its guidance for setting indirect cost rates for organizations over which it has cognizance. NSF awards billions of dollars to organizations each year and, given the constrained budget environment, it is essential that NSF ensure the efficient and effective use of federal science funding. We look forward to continuing our work to determine whether NSF actions may be warranted to promote this objective. We plan to issue a report in fall 2017. Chairwoman Comstock, Chairman LaHood, Ranking Members Lipinski and Beyer, and Members of the Subcommittees, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. If you or your staff members have any questions concerning this testimony, please contact me at (202) 512-3841 or neumannj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Other individuals who made key contributions to this testimony include Joseph Cook, Assistant Director; Kim McGatlin, Assistant Director; Rathi Bose; Ellen Fried; Ruben Gzirian; Terrance Horner, Jr.; David Messman; Lillian Slodkowski; Kathryn Smith; and Sara Sullivan. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
NSF awards billions of dollars to institutions of higher education (universities), K-12 school systems, industry, science associations, and other research organizations to promote scientific progress by supporting research and education. NSF reimburses awardees for direct and indirect costs incurred for most awards. Direct costs, such as salaries and equipment, can be attributed to a specific project that receives an NSF award. Indirect costs are not directly attributable to a specific project but are necessary for the general operation of an awardee organization, such as the costs of operating and maintaining facilities. For certain organizations, NSF also negotiates indirect cost rate agreements, which are then used for calculating reimbursements for indirect costs. Indirect cost rate negotiations and reimbursements are to be made in accordance with federal guidance and regulation and NSF policy. This testimony reflects GAO's preliminary observations from its ongoing review that examines (1) what is known about NSF's indirect costs for its awards over time, and (2) the extent to which NSF has implemented guidance for setting indirect cost rates for organizations. GAO reviewed relevant regulation, guidance, and agency documents; analyzed budget data and a nongeneralizable sample of nine indirect cost rate files from fiscal year 2016 selected based on award funding; and interviewed NSF officials. GAO's preliminary analysis of National Science Foundation (NSF) data indicates that for fiscal years 2000 through 2016, indirect costs on NSF awards ranged from 16 percent to 24 percent of the total annual amounts awarded, though the percentage generally has increased since 2010 (see fig.). NSF officials stated that variation in indirect costs from year to year can be due to a variety of reasons, such as the types of organizations awarded and the disciplinary field of awards. 
GAO's observations are based on data from planned budgets on individual NSF awards, rather than actual indirect cost expenditures, because NSF does not require awardees to report indirect costs separately from direct costs in their reimbursement requests. According to NSF officials, collecting such information would unnecessarily increase the reporting burden on awardees. NSF has issued guidance for negotiating indirect cost rate agreements that includes procedures for staff to conduct timely and uniform reviews of indirect cost rate proposals. GAO's preliminary review of NSF's guidance and a sample of nine indirect cost rate files found that (1) NSF staff did not consistently follow guidance for updating the agency's tracking database with current data about some awardees, (2) the guidance did not include specific procedures for how supervisors are to document their review of staff workpapers, and (3) NSF had not updated the guidance to include procedures for implementing certain aspects of Office of Management and Budget guidance that became effective for grants awarded on or after December 26, 2014, such as the circumstances in which NSF can provide an awardee with an extension of indirect cost rates. GAO is not making any recommendations in this testimony but will consider making recommendations, as appropriate, as it finalizes its work.
Two NNSA offices, NA-23 and NA-25, documented management controls for almost all of their contracts that we reviewed, but the third office, NA-24, could not provide us with the complete records necessary to document these controls. Similarly, NA-23 and NA-25 provided their technical reviewers and contract managers with procedural guidance that assists in maintaining these controls, while NA-24 did not provide this type of guidance. In addition, NA-23 and NA-25 maintain the key contract documents at headquarters and the national laboratories, respectively, in such a way that the records are quickly accessible for active monitoring by contract and program managers, as evidenced by their ability to provide us with key contract records. Two NNSA offices, NA-23 and NA-25, documented management controls for most of their contracts that we reviewed. As shown in table 1, for eight of the nine contracts we reviewed from these two offices, NA-23 staff and national laboratory officials who manage NA-25’s contracts provided us with complete records of deliverables and invoices as well as evidence that technical reviewers and contract officers reviewed and approved deliverables and invoices, respectively. (For the ninth contract, which involved comprehensive physical protection upgrades to a strategic rocket forces site in Russia, Oak Ridge National Laboratory did not provide complete documentation of approvals for deliverables.) For example, the two contracts we reviewed from NA-23—which are designed to construct or refurbish fossil-fuel plants for the Russian cities of Zheleznogorsk and Seversk so that each city can shut down the plutonium-producing nuclear reactor that it currently uses to generate heat and electricity—involve multiple contractors in the United States and Russia. 
Although these are by far the largest contracts by dollar value in our sample ($390 million for Seversk and $570 million for Zheleznogorsk; the next largest contract was valued at $29 million), NA-23 headquarters provided us with, among other things, complete documentation of all invoices; photographs of the deliverables (i.e., construction work) completed to date; and evidence of the reviews and approvals of the invoices and payments to the foreign contractors and subcontractors. NA-23 also provided us with detailed breakdowns of work (called Work Breakdown Structures), work authorizations, and cost evaluations for each project. The documentation NA-23 provided us was among the most complete and organized of all the contracts we reviewed. An NA-23 official told us that the office makes efforts to specify in precise detail the work to be done and the costs for that work because this enables the office to effectively monitor and maintain a degree of control over the work of foreign contractors and subcontractors. NA-25 officials also provided us with complete documentation of management controls for the contracts they manage. As shown in table 1, for six of the seven contracts we reviewed, the national laboratories that manage these contracts provided complete records of deliverables and invoices as well as evidence that technical reviewers at the national laboratories and/or contract officers at the national laboratories and/or NA-25 reviewed and approved the deliverables and invoices, respectively. (The seventh contract is the Oak Ridge contract, mentioned above.) For example, for the two contracts we reviewed that Brookhaven National Laboratory manages for NA-25, each invoice on the contracts received at least one approval from technical reviewers at the laboratory, and each financial transaction received two approvals from contract managers. 
In another contract involving the purchase of nuclear detection devices for deployment in Russia, the national laboratory managing the contract—Pacific Northwest—provided us with purchase orders for the contract as well as a receipt of delivery so that we could verify that the goods purchased had reached their destination prior to final delivery in Russia. Both NA-23 and NA-25 keep copies of key records, such as deliverables and invoices, readily accessible to program and contract managers, as evidenced by the ability of each office to provide these records to us. NA-23 maintains these records at headquarters, while NA-25 maintains the records at the national laboratories that provide the day-to-day management over the contracts. It is important to note, however, that the laboratories should be able to provide NNSA managers with complete and quick access to contract records: the national laboratories are contractors to DOE, and it is NNSA that is ultimately responsible for monitoring the nonproliferation projects. NA-23 and NA-25 each apply formal, procedural guidance that assists technical reviewers and contract managers in maintaining management controls. For example, because the contracts involve capital procurement or acquisitions exceeding $5 million, NA-23 must apply the rules and procedures specified in DOE Order 413.3, Project Management for the Acquisition of Capital Assets. NA-23’s contract managers receive program guidance through work authorizations signed by an authorized official at NNSA headquarters and guidance on the payment process via DOE’s Contract Specialist Guide 42.8, which specifies procedures for review and approval of vouchers and invoices so that contract managers will handle them in a timely and efficient manner. 
According to NA-23 officials, the Federal Acquisition Regulation also stipulates many of the specific steps that NA-23 must undertake in the planning, implementation, and review of the contracts that make up the Seversk and Zheleznogorsk projects. NA-25 developed its own procedural guidance, known as the Project Management Document, for technical reviewers and contract officials. This guide provides instructions on, among other things, project planning, funds management, reporting of a project’s ongoing progress and costs, contract management, and procedures for entering important contract data into NA-25’s Program Management Information System. Finally, according to NNSA’s Director of Policy and Internal Controls Management and an NNSA official in charge of acquisitions in the Office of Defense Nuclear Nonproliferation, neither NA-23 nor NA-25 performs periodic reviews of its management control processes, although NNSA’s Office of Engineering and Project Support, at the outset of NA-23’s projects, did perform a general review of NA-23’s management controls. GAO’s management control guidelines state that agencies should monitor and regularly evaluate their control activities to ensure that they are still appropriate and working as intended. NA-24 could not provide evidence of the records necessary to document its management controls. Despite our numerous inquiries from January 2005 to June 2005 and discussions with agency officials—including one with NNSA’s Principal Assistant Deputy Administrator—the documentation we received on seven of the nine contracts we examined from this office was either incomplete or did not provide a clear audit trail that we could follow. (For the two other contracts, one managed by Brookhaven National Laboratory and one managed by Los Alamos National Laboratory, laboratory officials provided complete documentation of management controls.) 
For example, for one contract managed by the Idaho National Laboratory involving the discovery of bioactive compounds in Russia that may be used in watershed protection or carbon sequestration, 10 of the 35 invoices did not include a document showing that NA-24 had authorized payment to the Russian contractors, and 14 invoices did not include evidence that Idaho National Laboratory’s technical reviewer for the contract approved the deliverable on which the invoice was based. For another contract, with the Foundation for Russian American Economic Cooperation (FRAEC) and managed through NA-24 headquarters, NA-24 provided us with documentation, but we were able to determine very little about the contract from this documentation: there appeared to be no explanation of the linkages between the work products outlined in the contract, the deliverables, and the invoices; and we received fewer than half of the invoices for the contract and fewer than one-fifth of the deliverables. Senior officials with NA-24 told us that the office does not need to keep copies of key contract documents because the documents are maintained at the national laboratories managing the contracts and are accessible to NA-24. However, the fact that NA-24 was unable to obtain complete sets of records on seven of the nine contracts we reviewed suggests otherwise. In addition, NA-24 did not provide us with formal, written guidance that gives managers procedures for processing and maintaining key contract records, and the office appears to rely on each national laboratory to provide its own procedural guidelines. Finally, NA-24, like NA-23 and NA-25, does not perform periodic reviews of its management control processes. GAO’s management control guidelines state that agencies should monitor and regularly evaluate their control activities to ensure that they are still appropriate and working as intended. 
On the basis of our review of the contracts, it appears that, if an NNSA program office provides its managers with procedural guidance on how to maintain management controls, the office does a better job at implementing and documenting these management controls. In our view, procedural guidance enables program and contract managers to implement and document management controls in a systematic way, as evidenced by the fact that NA-23 and NA-25 each use procedural guidance and were able to document their controls. In addition, maintaining managers’ quick and complete access to key contract records—regardless of whether the records are located at the national laboratory or NNSA headquarters—appears to coincide with maintaining and documenting management controls. Officials at NA-24 told us that they have access to all contract records through the laboratories that manage their contracts, yet the office was unable to provide us with these records. Finally, as required by GAO standards for management controls, periodic reviews of management controls would help the NNSA offices that we reviewed determine whether they are adhering to their management controls and whether these controls are relevant and effective. For example, if NA-24 had performed a review of its management control procedures, it might have discovered that it did not have on hand complete sets of invoices and approvals of deliverables for each of the office’s nonproliferation contracts. 
To ensure that each NNSA office that we reviewed maintains complete documentation of its management controls, we recommend that the Secretary of Energy, working with the Administrator of the National Nuclear Security Administration, require NNSA to take the following three actions: (1) each NNSA office use formal, procedural guidance that clearly states how to maintain and document management controls; (2) NNSA’s program managers maintain quick access to key contract records, such as deliverables and invoices that relate to management controls, regardless of whether the records are located at a national laboratory or headquarters; and (3) NNSA perform periodic reviews of its management control processes to be certain that each program office’s management controls can be documented and remain appropriate and effective. We provided the Department of Energy’s National Nuclear Security Administration (NNSA) with a draft of this report for its review and comment. NNSA’s written comments are presented as appendix III. In its written comments, NNSA notes that it will undertake a series of actions in response to our recommendations but also states that our report creates an incorrect perception that the Defense Nuclear Nonproliferation Program, particularly NA-24, is lacking in the application of management controls. NNSA’s major points are as follows: 1. We reviewed only contracts from a portion of NA-24, Global Initiatives for Proliferation Prevention (GIPP), and we did not receive complete documentation from NA-24 because we did not speak to the procurement officer for the GIPP program; 2. NA-24 has implemented “very stringent” management controls; 3. We mischaracterize the management controls on two contracts—one managed by the Idaho National Laboratory (INL) and the other managed by NA-24 headquarters staff; 4. 
For an NA-25 contract managed by Oak Ridge National Laboratory, we received incomplete documentation because of an initial misunderstanding by the laboratory rather than a control problem within NA-25, and managers at Oak Ridge sent us the missing documents on August 16, 2005. 5. NA-25 does conduct external program management reviews of its management controls through a Technical Survey Team (TST). First, regarding the scope of our review, at the outset of our work, we asked NA-24 for a list of all its contracts in Russia and other countries that were active from the beginning of June 2001 through the end of June 2004 and then took a nonprobability sample of those contracts. We did not intentionally focus solely on NA-24’s GIPP program. Regardless, as we state in the report, results from nonprobability samples cannot be used to make inferences about a population, and our statements about NA-24 relate to its ability to document the management controls for the contracts we examined. Regarding NNSA’s comment that we did not meet with the GIPP procurement officer, it is unclear to us why NNSA is making this point. For most of the contracts we reviewed, NA-24 provided us with documents directly. After providing NA-24 with a fact sheet stating that we received incomplete documentation for seven of the nine contracts we reviewed, we met with NA-24’s Assistant Deputy Administrator on June 27, 2005, who provided us with additional documentation that she characterized as “complete.” After a thorough review, we found much of this additional documentation to be incomplete, indecipherable, and often duplicative of the information we had already been given earlier in our review. 
On August 17, 2005, after submitting our draft report to NNSA for comment, we met again with the Assistant Deputy Administrator as well as the Associate Assistant Deputy Administrator, the Principal Assistant Deputy Administrator for Defense Nuclear Nonproliferation, and the GIPP procurement officer. At this meeting, the procurement officer provided us with no new documentation, and the NA-24 officials again asserted that the documents they provided us in June were “complete.” Furthermore, during the meeting, while discussing some of the documents that we found to be missing, we asked the officials to produce a few of these documents at random from the materials they gave us in June. In most cases, they were unable to do so. In fact, in the case of one missing document, an NA-24 official stated that it “had to be somewhere in there” (included in the materials submitted in June), but it was not. Second, we disagree with NA-24’s contention that it has implemented “very stringent” management controls. Although NNSA cites a number of actions that NA-24 has taken to strengthen its controls, the fact remains that NA-24 did not provide us with sufficient documentation for seven of the nine contracts we reviewed despite our numerous requests. For example, on one contract managed by the Y-12 National Security Complex, rather than providing a “real-time” technical reviewer’s approval for each deliverable, NA-24 provided us with a single e-mail from the technical reviewer, dated June 24, 2005, that purported to cover two years’ worth of missing approvals. This post hoc approval does not represent a satisfactory management control. On the basis of what NA-24 provided us, we believe that the office’s controls for some contracts we reviewed are weak. In our view, NA-24 needs to implement actions that address and strengthen the specific management controls we identify in the report, and we are encouraged that NNSA has agreed to implement our recommendations. 
Third, for the INL-managed contract, NNSA asserts that it provided us in June with the documentation we sought. However, the documents were indecipherable to us because most were unlabeled, presented in no particular chronological order, and relied on e-mails in which neither the sender’s nor the recipients’ positions were identified. For the headquarters-managed contract, NNSA contends that, at our meeting on August 17, 2005, it explained how the process of deliverables and invoices for this contract (providing assistance to the Foundation for Russian American Economic Cooperation) differs from the processes of other contracts we examined. Although this may be the case, the documents that NA-24 provided did not clearly explain or illustrate those processes. More importantly, the documents that NA-24 provided comprised fewer than one-half of the deliverables and one-fifth of the invoices that we identified in June as missing. Fourth, although we have fewer concerns about NA-25’s management controls, in the case of one of the contracts managed by Oak Ridge National Laboratory, managers provided acceptable documentation of technical reviewers’ approvals on only three of six deliverables. Although we agree with NNSA that officials at the laboratory did not initially provide us with complete documentation of technical approvals, as we state in the report, NNSA is ultimately responsible for the controls on its contracts, even if the contracts are managed day-to-day by someone else. In addition, the documentation that officials at the laboratory sent us on August 16, 2005, did not provide all the information that was missing. Rather, they provided documentation of one additional technical review and resubmitted materials that we had already informed Oak Ridge managers did not represent acceptable documentation. As a result, we stand by our recommendation that NNSA perform periodic reviews of management controls for each of the three offices we examined. 
Fifth, regarding NNSA’s statement that TST performs external reviews of NA-25’s management controls, it is important to note that the TST is a panel of experts established by DOE to determine whether DOE-installed security systems at Russian nuclear sites meet departmental guidelines for effectively reducing the risk of nuclear theft. Moreover, we spoke to NNSA’s Director of Policy and Internal Controls Management on August 26, 2005, and he agreed that, while the TST provides useful project oversight, it does not provide the kind of comprehensive review of program management controls examined in our review. More importantly, during the course of our work, NA-25 did not provide evidence of any reviews of its management controls. Finally, we believe it is important to note that management controls were most evident on NA-24 and NA-25 contracts managed by national laboratories from which we were able to obtain all the necessary documentation directly, without any NNSA headquarters involvement. This was especially noteworthy in the case of NA-24 because the two NA-24 contracts that we determined demonstrated effective management controls were both managed by a national laboratory (Brookhaven or Los Alamos), and in both cases we obtained all the necessary documents directly from the laboratory managers. To assess the effectiveness of NNSA’s management controls of its nonproliferation projects, we identified the three offices within NNSA that currently oversee and manage the nonproliferation projects that fell within the scope of our work: (1) the Office of Nuclear Risk Reduction (designated by NNSA as NA-23), (2) the Office of Nonproliferation and International Security (NA-24), and (3) the Office of International Material Protection and Cooperation (NA-25). To identify what constitutes management controls, we consulted two GAO documents: Standards for Internal Control in the Federal Government and Internal Control Management and Evaluation Tool. 
Using these documents, we focused on the management controls associated with NNSA’s nonproliferation contracts. More specifically, we examined the supervisory actions designed to ensure that the work performed under a contract (known as “deliverables”) meets the contract’s specifications and that payments for that work receive required approvals and reach the intended recipients. To do this, we sought from NNSA the following documents for each of the contracts we reviewed: the contract deliverables (or a summary of each deliverable, as practicable); technical approval from an NNSA or national laboratory official for each deliverable; documentation of an independent payment authorization and review for each invoice, which should include at least one signature from a national laboratory financial office official supervising the contract and/or one official at NNSA headquarters; an approval letter from NNSA or the national laboratory authorizing the final payment to the contractors for a deliverable (as applicable); and a guide to the process each national laboratory uses to approve a deliverable and authorize payment. To select a nonprobability sample of contracts, we obtained, from the three offices in NNSA, a list of all their nonproliferation contracts in Russia and other countries that were active from the beginning of June 2001 through the end of June 2004. We identified contracts whose value exceeded $1 million and arranged them in descending dollar value. We chose the 15 contracts with the largest dollar value, subject to the constraints that (1) no more than 2 contracts come from NA-23, 7 from NA-24, and 6 from NA-25 and (2) a single national laboratory manage no more than 3 contracts in our sample. 
We chose these constraints so that (1) the mix of contracts among the three offices in our sample roughly reflected the mix of contracts among the three offices in our original list and (2) the sample would reflect a diversity of laboratories. Finally, we included one contract from each of the three remaining laboratories that were not yet included in our sample, bringing our final list to 18 contracts. To ensure that NNSA’s lists of its nuclear nonproliferation contracts were sufficiently reliable for our purposes, we obtained responses to a series of questions covering issues such as data entry, data access, quality-control procedures, and the accuracy and completeness of the data for the eight databases from which these data were drawn. Follow-up questions were asked whenever necessary. On the basis of this work, we found the data to be sufficiently reliable for the purpose of using these lists to select a nonprobability sample of 18 contracts for review. In addition to reviewing contract documents, we interviewed NNSA officials in Washington, D.C., and Germantown, Maryland. To gather information about the contracts we selected for review, we traveled to Brookhaven National Laboratory in New York and Los Alamos and Sandia National Laboratories in New Mexico to meet with laboratory officials and program, project, procurement, and contract managers to explain our review; to learn about NNSA programs and projects, as well as procedures for implementing management controls; and to determine the kinds of project documents we would need. To gather information and documents on the remaining contracts, on the basis of what we learned during these trips, we sent detailed written communications and conducted teleconferences, numerous and frequent in some cases, with the requisite staff at headquarters and at other national laboratories. 
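The selection procedure described above amounts to a greedy pass over the value-sorted contract list, subject to per-office and per-laboratory caps. The sketch below illustrates that logic; the field names, record layout, and any data passed to the function are placeholders, not NNSA's actual records:

```python
def select_sample(contracts, office_caps, lab_cap=3, target=15):
    """Greedy selection of the largest-value contracts over $1 million,
    subject to per-office caps and a per-laboratory cap, mirroring the
    constraints described in the methodology. Each contract is a dict
    with hypothetical keys "office", "lab", and "value"."""
    office_counts, lab_counts, sample = {}, {}, []
    # Consider only contracts over $1 million, largest first.
    eligible = sorted((c for c in contracts if c["value"] > 1_000_000),
                      key=lambda c: c["value"], reverse=True)
    for c in eligible:
        if len(sample) == target:
            break
        if office_counts.get(c["office"], 0) >= office_caps[c["office"]]:
            continue  # office already at its cap
        if lab_counts.get(c["lab"], 0) >= lab_cap:
            continue  # laboratory already at its cap
        sample.append(c)
        office_counts[c["office"]] = office_counts.get(c["office"], 0) + 1
        lab_counts[c["lab"]] = lab_counts.get(c["lab"], 0) + 1
    # In the actual methodology, one contract from each laboratory not
    # yet represented was then added, bringing the final sample to 18.
    return sample
```

With the caps described above ({"NA-23": 2, "NA-24": 7, "NA-25": 6}), the function skips a contract whose office or laboratory is already at its limit and moves on to the next largest, which is one straightforward way to realize the stated constraints.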
Specifically, we contacted staff at the Lawrence Berkeley and Lawrence Livermore National Laboratories in California, the Oak Ridge National Laboratory and the Y-12 National Security Complex in Tennessee, the Pacific Northwest National Laboratory in Washington, and the Idaho National Laboratory. We focused on identifying the controls implemented to ensure that former Soviet Union partners meet contract terms before the invoices for the deliverables are paid. After we gathered and evaluated all the available documentation from NNSA headquarters and the various national laboratories for each contract, we assessed the contracts on the basis of the completeness of their documentation and overall evidence of the implementation of management controls. We placed each contract in one of three categories: (1) contracts for which all or almost all of the necessary documentation was provided— especially the major contract documents (statement of work and task orders), deliverables, and technical and independent contractual/financial approvals of each deliverable—providing clear evidence of the systematic implementation of management controls throughout the life-cycle of the contract; (2) contracts for which most of the documents were provided, suggesting that systematic implementation of management controls may be occurring but not clearly indicating as much; and (3) contracts for which there were significant gaps in necessary documentation, providing no basis to conclude that systematic management controls are implemented. We conducted our review between May 2004 and July 2005 in accordance with generally accepted government auditing standards. We are sending copies of this report to interested congressional committees and the Secretary of Energy. We will also make copies available to others upon request. In addition, the report will be available on the GAO Web site at http://www.gao.gov. If you have any questions regarding this report, please contact Mr. 
Aloise at (202) 512-3841 or aloisee@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO contacts and staff acknowledgements are listed in appendix IV. The following table lists all NNSA contracts that we reviewed.
Luch – Task Order 1 – Blend-down HEU to LEU
PBZ-C2 – Comprehensive Physical Protection Upgrades to Russian Navy Site
CBC-B2 – Comprehensive Physical Protection Upgrades to Russian Navy Site
COMP2BR – Comprehensive Physical Protection System Upgrades
Aquila – Purchase of equipment to enhance the monitoring of nuclear materials
FRAEC – Technical and administrative assistance in planning, establishing, and operating the international development centers
Pipe Coating Facility (#63544) – Establish a production facility within the city of Snezhinsk for the production of insulated pipes
T2-0192-RU – Development of a 3-D neutronics optimization algorithm for application to cancer treatment facilities
Nuclear Non-Proliferation Center – The Analytical Center for Nuclear Non-Proliferation will carry out research on several projects, including a Quarterly Information Bulletin and Internet Analysis and creation of an Internet page
T2-0186-RU – Development of a Tank Retrieval and Closure Demonstration Center in the Mining and Chemical Combine (MCC) to help in retrieval and processing of radioactive wastes generated during production of plutonium for nuclear weapons
T2-0194-RU – The use of new technologies to process important Ti alloys for medical applications and aerospace industries
T2-0204-UA – Welding and Reactive Diffusion Joining (RDJ) repair technologies for use in aircraft and land-based turbine engines
SAIC/P.O. # 14436 – Nuclear material detectors for border guards
T2-0244-RU – Development of an explosives detection system
T2-2002-RU – Discovery of bioactive compounds from selected environments in Russia for products such as watershed protection and carbon sequestration
Luch – Task Order 1 – Blend-down HEU to LEU
PBZ C2 – Security Upgrades to Russian Navy Site
CBC B2 – Security Upgrades to Russian Navy Site
TVZ01 – Minatom Guard Railcar Procurement
COMP2BR – Comprehensive Physical Protection Systems Upgrades
Aquila – Enhanced nuclear materials monitoring systems
FRAEC – Technical and administrative assistance in planning, establishing, and operating international development centers
In addition, Nancy Crothers, Greg Marchand, Judy Pagano, Daren Sweeney, and Kevin Tarmann made significant contributions to this report.
The National Defense Authorization Act for FY 2004 mandated that we assess the management of threat reduction and nonproliferation programs that the Departments of Defense and Energy each administer. The objective of this report is to assess how the Department of Energy's National Nuclear Security Administration (NNSA) implements management controls, which we define here to be the processes ensuring that work done under a contract meets contract specifications and that payments go to contractors as intended. Two NNSA offices, the Office of Nuclear Risk Reduction (designated by NNSA as NA-23) and the Office of International Material Protection and Cooperation (NA-25), documented management controls for almost all of the contracts that we reviewed, but the third office, the Office of Nonproliferation and International Security (NA-24), did not document controls for most of its contracts because it could not provide the required documentation. More specifically, for eight of the nine NA-23 and NA-25 contracts we reviewed, the NA-23 headquarters staff and the laboratory staff that manage the contracts for NA-25 provided us complete records of deliverables and invoices, as well as evidence that technical officials reviewed and approved the deliverables and contract officers reviewed and approved the invoices. (For the ninth contract, NA-25 provided us with incomplete documentation of its controls.) In addition, NA-23 and NA-25 each apply procedural guidance that assists managers in maintaining these controls. However, according to an NNSA official, none of the three offices currently performs periodic reviews to ensure that its existing management controls remain appropriate. In contrast, we were unable to determine whether NA-24 implements management controls because, for seven of the nine contracts we reviewed, the documentation it provided to us was in most cases either incomplete or lacked a clear audit trail that we could follow.
(Documentation was complete for the remaining two contracts.) The types of documents that were missing varied across and within some contracts. In addition, NA-24 does not provide its contract managers with procedural guidance on how to maintain its management controls, nor does it perform a periodic review of its controls to ensure that they remain effective and appropriate.
Insurance is a mechanism for spreading risk over time, across large geographical areas, and among industries and individuals. While private insurers assume some financial risk when they write policies, they employ various strategies to manage risk so that they earn profits, limit potential financial exposure, and build capital needed to pay claims. For example, insurers charge premiums for coverage and establish underwriting standards, such as refusing to insure customers who pose unacceptable levels of risk or limiting coverage in particular geographic areas. Insurance companies may also purchase reinsurance to cover specific portions of their financial risk. Reinsurers use strategies similar to those of primary insurers to limit their own risks. Under certain circumstances, the private sector may determine that a risk is uninsurable. For example, homeowner policies typically do not cover flood damage because private insurers are unwilling to accept the risk of potentially catastrophic losses associated with flooding. In other instances, the private sector may be willing to insure a risk, but at rates that are not affordable to many property owners. Without insurance, affected property owners must rely on their own resources or seek out disaster assistance from local, state, and federal sources. In situations where the private sector will not insure a particular type of risk, the public sector may create markets to ensure the availability of insurance. The federal government operates two such programs—the NFIP and the FCIC. NFIP provides insurance for flood damage to homeowners and commercial property owners in more than 20,000 communities. Homeowners with mortgages from federally regulated lenders on property in communities identified as being in high flood risk areas are required to purchase flood insurance on their dwellings. Optional, lower cost flood insurance is also available under the NFIP for properties in areas of lower flood risk.
NFIP offers coverage for both the property and its contents, which may be purchased separately. FCIC insures agricultural commodities on a crop-by-crop and county-by-county basis based on farmer demand and the level of risk associated with the crop in a given region. Major crops, such as grains, are covered in almost every county where they are grown, while specialty crops such as fruit are covered only in some areas. Participating farmers can purchase different types of crop insurance at different coverage levels. Assessments by leading scientific bodies suggest that climate change could significantly alter the frequency or severity of weather-related events, such as drought and hurricanes. Leading scientific bodies report that the Earth warmed during the twentieth century—1.3 degrees Fahrenheit (0.74 degrees Celsius) from 1906 to 2005 according to a recent IPCC report—and is projected to continue to warm for the foreseeable future. While temperatures have varied throughout history, triggered by natural factors such as volcanic eruptions or changes in the earth’s orbit, the key scientific assessments we reviewed have generally concluded that the observed increase in temperature in the past 100 years cannot be explained by natural variability alone. In recent years, major scientific bodies such as the IPCC, NAS, and the United Kingdom’s Royal Society have concluded that human activities are significantly increasing the concentrations of greenhouse gases and, in turn, global temperatures. Assuming continued growth in atmospheric concentration of greenhouse gases, the latest assessment of computer climate models projects that average global temperatures will warm by an additional 3.2 to 7.2 degrees Fahrenheit (1.8 to 4.0 degrees Celsius) during the next century.
Based on model projections and expert judgment, the IPCC reported that future increases in the earth’s temperature are likely to increase the frequency and severity of many damaging extreme weather-related events (summarized in table 1). The IPCC recently published summaries of two of the three components of its Fourth Assessment Report. The first, in which IPCC summarized the state of the physical science, reports higher confidence in projected patterns of warming and other regional-scale features, including changes in wind patterns, precipitation, and some aspects of extreme events such as drought, heavy precipitation events, and hurricanes. The second, in which IPCC addresses climate impacts and vulnerabilities, reported that the potential societal impacts from changes in temperature and extreme events vary widely across sector and region. For example, although the IPCC projects moderate climate change may increase yields for some rain-fed crops, crops that are near their warm temperature limit or depend on highly-used water resources face many challenges. Additionally, local crop production in any affected area may be negatively impacted by projected increases in the frequency of droughts or floods. Furthermore, the IPCC stated that the economic and social costs of extreme weather events will increase as these events become more intense and/or more frequent. Rapidly-growing coastal areas are particularly vulnerable, and the IPCC notes that readiness for increased exposure in these areas is low. These reports have not been publicly released in their entirety, but are expected sometime after May 2007. In addition to the IPCC’s work, CCSP is assessing potential changes in the frequency or intensity of weather-related events specific to North America in a report scheduled for release in 2008. 
According to a National Oceanic and Atmospheric Administration official and agency documents, the report will focus on weather extremes that have a significant societal impact, such as extreme cold or heat spells, tropical and extra-tropical storms, and droughts. Importantly, officials have said the report will provide an assessment of the observed changes in weather and climate extremes, as well as future projections. Based on an examination of loss data from several different sources, we found that insurers incurred about $321.2 billion in weather-related losses from 1980 through 2005. In particular, as illustrated in Figure 1, our analysis found that weather-related losses accounted for 88 percent of all property losses paid by insurers during this period. All other property losses, including those associated with earthquakes and terrorist events, accounted for the remainder. Weather-related losses varied significantly from year to year, ranging from just over $2 billion in 1987 to more than $75 billion in 2005. Private insurers paid $243.5 billion—over 75 percent of the total weather- related losses we reviewed. The two major federal insurance programs— NFIP and FCIC—paid the remaining $77.7 billion of the $321.2 billion in weather-related loss payments we reviewed. NFIP paid about $34.1 billion, or about 11 percent of the total weather-related loss payments we reviewed during this period. As illustrated in Figure 2, claims averaged about $1.3 billion per year, but ranged from $75.7 million in 1988 to $16.7 billion in 2005. Since 1980, FCIC claims totaled $43.6 billion, or about 14 percent of all weather-related claims during this period. As illustrated in Figure 3, FCIC losses averaged about $1.7 billion per year, ranging from $531.8 million in 1987 to $4.2 billion in 2002. The largest insured losses in the data we reviewed were associated with catastrophic weather events. 
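The loss figures quoted above are internally consistent, as a quick arithmetic check shows. All figures below are in billions of dollars and come directly from the report's totals for 1980 through 2005; the only addition is the division.

```python
# Arithmetic check of the insured-loss shares quoted above
# (billions of dollars, 1980-2005, from the report's own figures).
total   = 321.2   # all weather-related insured losses
private = 243.5   # private insurers' share
nfip    = 34.1    # National Flood Insurance Program
fcic    = 43.6    # Federal Crop Insurance Corporation

federal = nfip + fcic   # the two federal programs together: 77.7 billion
shares = {k: round(v / total * 100)
          for k, v in [("private", private), ("nfip", nfip), ("fcic", fcic)]}
# shares come out to roughly 76, 11, and 14 percent, matching the text's
# "over 75 percent," "about 11 percent," and "about 14 percent."
```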
Notably, crop insurers and other property insurers both face catastrophic weather-related risks, although the nature of the events for each is very different. In the case of crop insurance, drought accounted for more than 40 percent of weather-related loss payments from 1980 to 2005, and the years with the largest losses were associated with drought. Taken together, though, hurricanes were the most costly event in the data we reviewed. Although the United States experienced an average of only two hurricanes per year from 1980 through 2005, weather-related claims attributable to hurricanes totaled more than 45 percent of all weather-related losses—almost $146.8 billion. Moreover, as illustrated in Table 2, these losses appear to have increased during the past three decades. Several recent studies have commented on the apparent increases in hurricane losses during this time period, and weather-related disaster losses generally, with markedly different interpretations. Some argue that loss trends are largely explained by changes in societal and economic factors, such as population density, cost of building materials, and the structure of insurance policies. Others argue that increases in losses have been driven by changes in climate. To address the issue, Munich Re—one of the world’s largest reinsurance companies—and the University of Colorado’s Center for Science and Technology Policy Research jointly convened a workshop in Germany in May 2006 to assess factors leading to increasing weather-related losses. The workshop brought together a diverse group of international experts in the fields of climatology and disaster research. Workshop participants agreed that long-term records of disaster losses indicate that societal change and economic development are the principal factors explaining weather-related losses. 
However, participants also agreed that changing patterns of extreme events are drivers for recent increases in losses, and that additional increases in losses are likely, given IPCC’s projections. The close relationship between the value of the resource exposed to weather-related losses and the amount of damage incurred may have ominous implications for a nation experiencing rapid growth in some of its most disaster-prone areas. AIR Worldwide, a leading catastrophe modeling firm, recently reported that insured losses should be expected to double roughly every 10 years because of increases in construction costs, increases in the number of structures, and changes in their characteristics. AIR’s research estimates that, because of exposure growth, probable maximum catastrophe loss—an estimate of the largest possible loss that may occur, given the worst combination of circumstances—grew in constant 2005 dollars from $60 billion in 1995 to $110 billion in 2005, and it will likely grow to over $200 billion during the next 10 years. Major private and federal insurers are responding differently to the prospect of increasing weather-related losses associated with climate change. Many large private insurers are incorporating both near and longer-term elements of climatic change into their risk management practices. On the other hand, for a variety of reasons, the federal insurance programs have done little to develop the kind of information needed to understand the programs’ long-term exposure to climate change. Catastrophic weather events pose a unique financial threat to private insurers’ financial success because a single event can cause insolvency or a precipitous drop in earnings, liquidation of assets to meet cash needs, or a downgrade in the market ratings used to evaluate the soundness of companies in the industry. 
To prevent these disruptions, the American Academy of Actuaries (AAA)—the professional society that establishes, maintains, and enforces standards of qualification, practice, and conduct for actuaries in the United States—recommends, among other steps, that insurers measure their exposure to catastrophic weather-related risk. In particular, AAA emphasizes the shortcomings of estimating future catastrophic risk by extrapolating solely from historical losses, and endorses a more rigorous approach that incorporates underlying trends and factors in weather phenomena and current demographic, financial, and scientific data to estimate losses associated with various weather-related events. In our interviews with 11 of the largest private insurers operating in the U.S. property casualty insurance market, we sought to determine what key private insurers are doing to estimate and prepare for risks associated with potential climatic changes arising from natural or human factors. Representatives from each of the 11 major insurers we interviewed told us they incorporate near-term increases in the frequency and intensity of hurricanes into their risk estimates. Six specifically attributed the higher frequency and intensity of hurricanes to a 20- to 40-year climatic cycle of fluctuating temperatures in the north Atlantic Ocean, while the remaining five insurers did not elaborate on the elements of climatic change driving the differences in hurricane characteristics. In addition to managing their aggregate exposure on a near-term basis, some of the world’s largest insurers have also taken a longer-term strategic approach to changes in catastrophic risk. Six of the 11 private insurers we interviewed reported taking one or more additional actions when asked if their company addresses climatic change in their weather-related risk management processes.
These activities include monitoring scientific research (4 insurers), simulating the impact of a large loss event on their portfolios (3 insurers), and educating others in the industry about the risks of climatic change (3 insurers), among others. Moreover, major insurance and reinsurance companies, such as Allianz, Swiss Re, Munich Re, and Lloyds of London, have published reports that advocate increased industry awareness of the potential risks of climate change, and outline strategies to address the issue proactively. NFIP and FCIC have not developed information on the programs’ longer- term exposure to the potential risk of increased extreme weather events associated with climate change as part of their risk management practices. The goals of the key federal insurance programs are fundamentally different from those of private insurers. Whereas private insurers stress the financial success of their business operations, the statutes governing the NFIP and FCIC promote affordable coverage and broad participation by individuals at risk over the programs’ financial self-sufficiency by offering discounted or subsidized premiums. Also unlike the private sector, the NFIP and the FCIC have access to additional federal funds during high-loss years. Thus, neither program is required to assess and limit its catastrophic risk strictly within its ability to pay claims on an annual basis. Instead, to the extent possible, each program manages its risk within the context of its broader purposes in accordance with authorizing statutes and implementing regulations. Nonetheless, an improved understanding of the programs’ financial exposure is becoming increasingly important. Notably, the federal insurance programs’ liabilities have grown significantly, which leaves the federal government increasingly vulnerable to the financial impacts of catastrophic events. 
Data obtained from both the NFIP and FCIC programs indicate the federal government has grown markedly more exposed to weather-related losses. Figure 4 illustrates the growth of both programs’ exposure from 1980 to 2005. For NFIP, the program’s total coverage increased fourfold in constant dollars during this time, from about $207 billion to $875 billion in 2005, due to increasing property values and a doubling of the number of policies from 1.9 million to more than 4.6 million. The FCIC has effectively increased its exposure base 26-fold during this period. In particular, the program has significantly expanded the scope of crops covered and increased participation. The main implication of the exposure growth for both programs is that the magnitude of potential claims, in absolute terms, is much greater today than in the past. Neither program has assessed the implications of a potential increase in the frequency or severity of weather-related events on program operations, although both programs have occasionally attempted to estimate their aggregate losses from potential catastrophic events. For example, FCIC officials stated that they had modeled past events, such as the 1993 Midwest Floods, using current participation levels to inform negotiations with private crop insurers over reinsurance terms. However, NFIP and FCIC officials explained that these efforts were informal exercises, and were not performed on a regular basis. Furthermore, according to NFIP and FCIC officials, both programs’ estimates of weather-related risk rely heavily on historical weather patterns. As one NFIP official explained, the flood insurance program is designed to assess and insure against current—not future—risks. Over time, agency officials stated, this process has allowed their programs to operate as intended.
However, unlike private sector insurers, neither program has conducted an analysis of the potential impacts of an increase in the frequency or severity of weather-related events on continued program operations in the long-term. While comprehensive information on federal insurers’ long-term exposure to catastrophic risk associated with climate change may not inform the NFIP’s or FCIC’s day-to-day operations, it could nonetheless provide valuable information for the Congress and other policy-makers who need to understand and prepare for fiscal challenges that extend well beyond the two programs’ near-term operational horizons. We have highlighted the need for this kind of strategic information in recent reports that have expressed concern about the looming fiscal imbalances facing the nation. In particular, we observed that, “Our policy process will be challenged to act with more foresight to take early action on problems that may not constitute an urgent crisis but pose important long-term threats to the nation’s fiscal, economic, security, and societal future.” The prospect of increasing program liabilities, coupled with expected increases in frequency and severity of weather events associated with climate change, would appear to fit into this category. Agency officials identified several challenges that could complicate their efforts to assess these impacts at the program level. Both NFIP and FCIC officials stated there was insufficient scientific information on projected impacts at the regional and local level to accurately assess their impact on the flood and crop insurance programs. However, members of the insurance industry have analyzed and identified the potential risks climatic change poses to their business, despite similar challenges. Moreover, as previously discussed, both the IPCC and CCSP are expected to release significant assessments of the likely effect of increasing temperatures on weather events in coming months. 
The experience of many private insurers, who must proactively respond to longer-term changes in weather-related risk to remain solvent, suggests the kind of information that needs to be developed to make sound strategic decisions. Specifically, to help ensure their future viability, a growing number of private insurers are actively incorporating the potential for climate change into their strategic level analyses. In particular, some private insurers have run a variety of simulation exercises to determine the potential business impact of an increase in the frequency and severity of weather events. For example, one insurer simulated the impact of multiple large weather events occurring simultaneously. We believe a similar analysis could provide Congress with valuable information about the potential scale of losses facing the NFIP and FCIC in coming decades, particularly in light of the programs’ expansion over the past 25 years. We believe that the FCIC and NFIP are uniquely positioned to provide strategic information on the potential impacts of climate change on their programs—information that would be of value to key decision makers charged with a long-term focus on the nation’s fiscal health. Most notably, in exercising its oversight responsibilities, the Congress could use such information to examine whether the current structure and incentives of the federal insurance programs adequately address the challenges posed by potential increases in the frequency and severity of catastrophic weather events. While the precise content of these analyses can be debated, the activities of many private insurers already suggest a number of strong possibilities that may be applicable to assessing the potential implications of climate change on the federal insurance programs. 
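As a loose illustration of the simulation exercises mentioned above, the toy model below estimates how often several large weather events might land in the same year. The event frequency, loss distribution, and loss threshold are invented for illustration; nothing here reflects any insurer's or federal program's actual model.

```python
# Toy stress-test of simultaneous large weather events, in the spirit of
# the insurer simulation exercises described above. All parameters are
# illustrative assumptions, not data from the report.
import random

def simulate_years(n_years, events_per_year=2.0, mean_loss=20.0, seed=7):
    """Return simulated total losses per year (billions of dollars)."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_years):
        # crude Poisson-like draw: 100 small Bernoulli trials
        n_events = sum(1 for _ in range(100)
                       if rng.random() < events_per_year / 100)
        # each event's loss is exponentially distributed around mean_loss
        totals.append(sum(rng.expovariate(1 / mean_loss)
                          for _ in range(n_events)))
    return totals

losses = simulate_years(10_000)
# fraction of simulated years whose combined losses exceed $100 billion
share_over_100 = sum(1 for x in losses if x > 100) / len(losses)
```

A strategic analysis of the federal programs would of course need real event frequencies and exposure data, but the structure (draw a number of events, draw a loss per event, tabulate the tail) is the common skeleton of such exercises.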
Accordingly, our report being released today recommends that the Secretary of Agriculture and the Secretary of Homeland Security direct the Administrator of the Risk Management Agency and the Under Secretary of Homeland Security for Emergency Preparedness to analyze the potential long-term implications of climate change for the FCIC and the NFIP, respectively, and report their findings to the Congress. This analysis should use forthcoming assessments from the Climate Change Science Program and the Intergovernmental Panel on Climate Change to establish sound estimates of expected future conditions. Both agencies expressed agreement with this recommendation. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions that you or other Members of the Committee may have. For further information about this testimony, please contact me, John Stephenson, at 202-512-3841 or stephensonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Contributors to this testimony include Steve Elstein, Assistant Director; Chase Huntley; Alison O’Neill; and Lisa Van Arsdale. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Weather-related events in the United States have caused tens of billions of dollars in damages annually over the past decade. A major portion of these losses is borne by private insurers and by two federal insurance programs-- the Federal Emergency Management Agency's National Flood Insurance Program (NFIP), which insures properties against flooding, and the Department of Agriculture's Federal Crop Insurance Corporation (FCIC), which insures crops against drought or other weather disasters. In this testimony, GAO (1) describes how climate change may affect future weather-related losses, (2) provides information on past insured weather-related losses, and (3) determines what major private insurers and federal insurers are doing to prepare for potential increases in such losses. This testimony is based on a report entitled Climate Change: Financial Risks to Federal and Private Insurers in Coming Decades are Potentially Significant (GAO-07-285) being released today. Key scientific assessments report that the effects of climate change on weather-related events and, subsequently, insured and uninsured losses, could be significant. The global average surface temperature has increased over the past century and climate models predict even more substantial, perhaps accelerating, increases in temperature in the future. Assessments by key governmental bodies generally found that rising temperatures are expected to increase the frequency and severity of damaging weather-related events, such as flooding or drought, although the timing and magnitude are as yet undetermined. Additional research on the effect of increasing temperatures on weather events is expected in the near future. Taken together, private and federal insurers paid more than $320 billion in claims on weather-related losses from 1980 to 2005. 
Claims varied significantly from year to year--largely due to the effects of catastrophic weather events such as hurricanes and droughts--but have generally increased during this period. The growth in population in hazard-prone areas and resulting real estate development have generally increased liabilities for insurers, and have helped to explain the increase in losses. Due to these and other factors, federal insurers' exposure has grown substantially. Since 1980, NFIP's exposure nearly quadrupled to nearly $1 trillion in 2005, and program expansion increased FCIC's exposure 26-fold to $44 billion. Major private and federal insurers are both exposed to the effects of climate change over coming decades, but are responding differently. Many large private insurers are incorporating climate change into their annual risk management practices, and some are addressing it strategically by assessing its potential long-term industry-wide impacts. In contrast, federal insurers have not developed and disseminated comparable information on long-term financial impacts. GAO acknowledges that the federal insurance programs are not profit-oriented, like private insurers. Nonetheless, a strategic analysis of the potential implications of climate change for the major federal insurance programs would help the Congress manage an emerging high-risk area with significant implications for the nation's growing long-term fiscal imbalance.
When providers at VAMCs determine that a veteran needs outpatient specialty care, they request and manage consults using VHA’s clinical consult process. Clinical consults include requests by physicians or other providers for both clinical consultations and procedures. A clinical consultation is a request seeking an opinion, advice, or expertise regarding evaluation or management of a patient’s specific clinical concern, whereas a procedure is a request for a specialty procedure such as a colonoscopy. Clinical consults are typically requested by a veteran’s primary care provider using VHA’s electronic consult system. The consult process is governed by VHA’s national consult policy. The policy requires VAMCs to manage consults using a national electronic consult system, and requires VAMC staff to provide timely and appropriate care to veterans. Once a provider sends a request, VHA requires specialty care providers to review it within 7 days and determine whether to accept the consult. If the specialty care provider accepts the consult—determines the consult is needed and is appropriate—an appointment is to be made for the patient to receive the consultation or procedure. In some cases, a provider may discontinue a consult for reasons such as the care is not needed, the patient refuses care, or the patient is deceased. In other cases the specialty care provider may determine that additional information is needed, and will send the consult back to the requesting provider, who can resubmit the consult with the needed information. Once the appointment is held, VHA’s policy requires the specialty care provider to appropriately document the results of the consult, which would then close out the consult as completed in the electronic system. VHA’s current guideline is that consults should be completed within 90 days of the request. If an appointment is not held, staff are to document why they were unable to complete the consult.
According to VHA’s consult policy, VHA central office officials have oversight responsibility for the consult process, including the measurement and monitoring of ongoing performance. In 2012, VHA created a database to capture all consults system-wide and, after reviewing these data, determined that the data were inadequate for monitoring purposes. One issue identified was the lack of standard processes and uses of the electronic consult system across VHA. For example, in addition to requesting consults for clinical concerns, the system also was being used to request and manage a variety of administrative tasks, such as requesting patient travel to appointments. Additionally, VHA could not accurately determine whether patients actually received the care they needed, or if they received the care in a timely fashion. According to VHA officials, approximately 2 million consults (both clinical and administrative consults) were unresolved for more than 90 days. Subsequently, VA’s Under Secretary for Health convened a task force to address these and other issues regarding VHA’s consult system. In response to the task force recommendations, in May 2013, VHA launched the Consult Management Business Rules Initiative to standardize aspects of the consult process, with the goal of developing consistent and reliable information on consults across all VAMCs. This initiative required VAMCs to complete four specific tasks between July 1, 2013, and May 1, 2014: Review and properly assign codes to consistently record consult requests in the consult system; Assign distinct identifiers in the electronic consult system to differentiate between clinical and administrative consults; Develop and implement strategies for requesting and managing requests for consults that are not needed within 90 days—known as “future care” consults; and Conduct a clinical review as warranted, and as appropriate, close all unresolved consults—those open more than 90 days.
At the time of our December 2012 review, VHA measured outpatient medical appointment wait times as the number of days elapsed from the patient’s or provider’s desired date, as recorded in the VistA scheduling system by VAMCs’ schedulers. In fiscal year 2012, VHA had a goal of completing new and established patient specialty care appointments within 14 days of the desired date. VHA established this goal based on its performance reported in previous years. To facilitate accountability for achieving its wait time goals, VHA includes wait time measures—referred to as performance measures—in its budget submissions and performance reports to Congress and stakeholders. These measures, like wait time goals, have changed over time. Officials at VHA’s central office, VISNs, and VAMCs all have oversight responsibilities for the implementation of VHA’s scheduling policy. For example, each VAMC director, or designee, is responsible for ensuring that clinics’ scheduling of medical appointments complies with VHA’s scheduling policy and for ensuring that all staff who can schedule medical appointments in the VistA scheduling system have completed the required VHA scheduler training. In addition to the scheduling policy, VHA has a separate directive that establishes policy on the provision of telephone service related to clinical care, including facilitating telephone access for medical appointment management. Our ongoing work has identified examples of delays in veterans receiving requested outpatient specialty care at the five VAMCs we reviewed. We found consults that were not processed in accordance with VHA timeliness guidelines—for example, consults were not reviewed within 7 days, or completed within 90 days. We also found consults for which veterans did not receive the requested outpatient specialty care, and those for which the requested specialty care was provided, but were not properly closed in the consult system.
VHA requires specialty care providers to review consults within 7 days and determine whether to accept the consult. Of the 150 consults we reviewed, the consult records indicated that VAMCs did not meet the 7-day requirement for 31 consults (21 percent). For one VAMC, nearly half the consults were not reviewed and triaged within 7 days. Officials at this VAMC cited a shortage of providers needed to review and triage the consults in a timely manner. Our ongoing work also has identified that for the majority of the 150 consults we reviewed, VAMCs did not meet VHA’s timeliness guideline that care be provided and consults completed within 90 days. We found that veterans received care for 86 of the 150 consults we reviewed (57 percent), but in only 28 of the consults (19 percent) did veterans receive care within 90 days of the date the consult was requested. For the remaining 64 consults (43 percent), the patients did not receive the requested care. Specific examples of consults that were not completed in 90 days, or were closed without the patients being seen, include: For 3 of 10 gastroenterology consults we reviewed for one VAMC, we found that between 140 and 210 days elapsed from the dates the consults were requested to when the patients received care. For the consult that took 210 days, an appointment was not available within 90 days and the patient was placed on a waiting list before having a screening colonoscopy. For 4 of the 10 physical therapy consults we reviewed for one VAMC, we found that between 108 and 152 days elapsed, with no apparent actions taken to schedule an appointment for the veteran. The patients’ files indicated that due to resource constraints, the clinic was not accepting consults for non-service-connected physical therapy evaluations. In the other 3 cases, the physical therapy clinic sent the consults back to the requesting provider, and the veterans did not receive care for that consult. In 1 of these cases, several months passed before the veteran was referred to non-VA care, and he was seen 252 days after the initial consult request.
For all 10 of the cardiology consults we reviewed for one VAMC, we found that staff initially scheduled patients for appointments between 33 and 90 days after the request, but medical files indicated that patients either cancelled or did not show for their initial appointments. In several instances patients cancelled multiple times. In 4 of the cases VAMC staff closed the consults without the patients being seen; in the other 6 cases VAMC staff rescheduled the appointments for times that exceeded the 90-day timeframe. VAMC officials cited increased demand for services, patient no-shows, and cancelled appointments among the factors that hinder their ability to meet VHA’s guideline for completing consults within 90 days. Several VAMC officials also noted a growing demand for both gastroenterology procedures, such as colonoscopies, and consultations for physical therapy evaluations, combined with difficulty in hiring and retaining specialists for these two clinical areas, as causes of periodic backlogs in providing these services. Officials at these facilities indicated that they try to mitigate backlogs by referring veterans to non-VA providers for care. While officials indicated that use of non-VA care can help mitigate backlogs, several officials indicated that non-VA care requires more coordination between the VAMC, the patient, and the non-VA provider; can require additional approvals for the care; and also may delay obtaining the results of medical appointments or procedures. In addition, wait times are generally not tracked for non-VA care. As such, officials acknowledged that this strategy does not always prevent delays in veterans receiving timely care or in completing consults.
Our ongoing review also has identified one consult for which the patient experienced delays in obtaining non-VA care and died prior to obtaining needed care. In this case, the patient needed endovascular surgery to repair two aneurysms – abdominal aortic and an iliac. According to the patient’s medical record, the timeline of events surrounding this consult was as follows: September 2013 – Patient was diagnosed with two aneurysms. October 2013 – VAMC scheduled patient for surgery in November, but subsequently cancelled the scheduled surgery due to staffing issues. December 2013 – VAMC approved non-VA care and referred the patient to a local hospital for surgery. Late December 2013 – After the patient followed up with the VAMC, it was discovered that the non-VA provider lost the patient’s information. The VAMC resubmitted the patient’s information to the non-VA provider. February 2014 – The consult was closed because the patient died prior to the surgery scheduled by the non-VA provider. According to VAMC officials, they conducted an investigation of this case. They found that the non-VA provider planned to perform the surgery on February 14, 2014, but the patient died the previous day. Additionally, they stated that according to the coroner, the patient died of cardiac disease and hypertension and that the aneurysms remained intact. Furthermore, our ongoing work shows that for nearly all of the consults where care had been provided within 90 days, an extended amount of time elapsed before specialty care providers completed them in the consult system. Specifically, for 28 of the 29 consults, even though care was provided, the consult remained open in the system, making it appear as though the requested care was not provided within 90 days. For one VAMC, we found that for all 10 cardiology consults we reviewed, specialty care providers did not properly document the results of the consults in order to close them in the system.
In some cases, it took over 100 days from the time care was provided until the consults were completed in the system. Officials from several VAMCs told us that often specialty care providers do not choose the correct notes needed to document that the consults are complete. Officials attributed this ongoing issue in part to the use of residents, who rotate in and out of specialty care clinics after a few months and lack experience with completing consults. Officials from one VAMC told us that this requires VAMC leadership to continually train new residents on how to properly complete consults. To ensure that specialty care providers consistently choose the correct notes, this VAMC activated a prompt in its consult system asking each provider if the note the provider is entering is in response to a consult. Officials stated that this has resulted in providers more frequently choosing the correct note title to complete consults. Our ongoing work has identified variation in how the five VAMCs in our review have implemented key aspects of VHA’s business rules, which limits the usefulness of the data in monitoring and overseeing consults system-wide. As previously noted, VHA’s business rules were designed to standardize aspects of the consult process, thus creating consistency in VAMCs’ management of consults. However, we have found variation in how VAMCs are implementing certain tasks required by the business rules. For example, VAMCs have developed different strategies for managing future care consults—requests for specialty care appointments that are not clinically needed for more than 90 days. One task of the consult business rules required VAMCs to develop and implement strategies for requesting and managing requests for future care consults. 
Based on our ongoing work, we have identified that VAMCs are adopting various strategies when implementing this task, such as piloting an electronic system for providers to manage future care consults outside of the consult system and entering the consult regardless of whether the care was needed beyond 90 days. However, during the course of our ongoing work, several VAMCs told us they are changing their strategies for requesting and managing future care consults. For example, officials from a VAMC that was piloting an electronic system stated that, after evaluating the pilot, they decided not to use this approach, and are instead planning to implement markers to identify future care consults. These consults will appear in the consult data, but will be identified as future care consults and remain appropriately open until care is provided. Officials from two other VAMCs that were entering consults regardless of whether the care was needed beyond 90 days told us they are no longer doing this. According to officials, they are instead implementing a separate electronic system to track needed future care outside of the consult system, and these future care needs will not appear in consult data until they are entered in the consult system closer to the date the care is needed. Based on our discussions with VHA officials, it is not clear to what extent they are aware of the various strategies that VAMCs are using to comply with this task. According to VHA officials, they have not conducted a system-wide review of the future care strategies and did not have detailed information on the various strategies specific VAMCs have implemented. Overall, our ongoing work indicates that oversight of the implementation of VHA’s consult business rules has been limited and has not included independent verification of VAMC actions. VAMCs were required to self-certify completion of each of the four tasks outlined in the business rules.
VISNs were not required to independently verify that VAMCs appropriately completed the tasks. Without independent verification, however, VHA cannot be assured that VAMCs implemented the tasks correctly. Furthermore, our ongoing work shows that VHA did not require that VAMCs document how they addressed unresolved consults that were open greater than 90 days, and none of the five VAMCs in our ongoing review were able to provide us with specific documentation in this regard. VHA officials estimated that as of June 2014, about 278,000 consults (both clinical and administrative consults) remained unresolved system-wide. VAMC officials noted several reasons that consults were either completed or discontinued in this process of addressing unresolved consults, including improper recording of consult notes, patient cancellations, and patient deaths. At one of the VAMCs we reviewed, a specialty care clinic discontinued 18 consults the same day that a task for addressing unresolved consults was due. Three of these 18 consults were part of our random sample, and our ongoing review has found no indication that a clinical review was conducted prior to the consults being discontinued. Ultimately, the lack of independent verification and documentation of how VAMCs addressed these unresolved consults may have resulted in VHA consult data that inaccurately reflected whether patients received the care needed or received it in a timely manner. Although VHA’s consult business rules were intended to create consistency in VAMCs’ consult data, our preliminary work has identified variation in managing key aspects of the consult process that are not addressed by the business rules. For example, there are no detailed system-wide VHA policies on how to handle patient no-shows and cancelled appointments, particularly when patients repeatedly miss appointments, which may make VAMCs’ consult data difficult to assess.
For example, if a patient cancels multiple specialty care appointments, the associated consult would remain open and could inappropriately suggest delays in care. To manage this type of situation, one VAMC developed a local consult policy referred to as the “1-1-30” rule. The rule states that a patient must receive at least 1 letter and 1 phone call, and be granted 30 days to contact the VAMC to schedule a specialty care appointment. If the patient fails to do so within this time frame, the specialty care provider may discontinue the consult. According to VAMC officials, several of the consults we reviewed would have been discontinued before reaching the 90-day threshold if the 1-1-30 rule had been in place at the time. Furthermore, all of the VAMCs included in our ongoing review had some type of policy addressing patient no-shows and cancelled appointments, each of which varied in its requirements. VHA officials indicated that they allow each VAMC to develop its own approach to addressing patient no-shows and cancelled appointments. Without a standard policy across VHA, however, consult data may reflect numerous variations in how VAMCs handle patient no-shows and cancelled appointments. In December 2012, we reported that VHA’s reported outpatient medical appointment wait times were unreliable and that inconsistent implementation of VHA’s scheduling policy may have resulted in increased wait times or delays in scheduling timely outpatient medical appointments. Specifically, we found that VHA’s reported wait times were unreliable because of problems with recording the appointment desired date in the scheduling system.
Since, at the time of our 2012 review, VHA measured medical appointment wait times as the number of days elapsed from the desired date, the reliability of reported wait time performance was dependent on the consistency with which VAMC schedulers recorded the desired date in the VistA scheduling system. However, VHA’s scheduling policy and training documents were unclear and did not ensure consistent use of the desired date. Some schedulers at VAMCs that we visited did not record the desired date correctly. For example, the desired date was recorded based on appointment availability, which would have resulted in a reported wait time that was shorter than the patient actually experienced. At each of the four VAMCs in our 2012 review, we also found inconsistent implementation of VHA’s scheduling policy, which impeded scheduling of timely medical appointments. For example, we found the electronic wait list was not always used to track new patients that needed medical appointments as required by VHA scheduling policy, putting these patients at risk for delays in care. Furthermore, VAMCs’ oversight of compliance with VHA’s scheduling policy, such as ensuring the completion of required scheduler training, was inconsistent across facilities. At that time, VAMCs also described other problems with scheduling timely medical appointments, including VHA’s outdated and inefficient scheduling system, gaps in scheduler and provider staffing, and issues with telephone access. For example, officials at all VAMCs we visited in 2012 reported that high call volumes and a lack of staff dedicated to answering the telephones affected their ability to schedule timely medical appointments. VA concurred with the four recommendations included in our December 2012 report and has reported continuing actions to address them. 
First, we recommended that the Secretary of VA direct the Under Secretary for Health to take actions to improve the reliability of its outpatient medical appointment wait time measures. In response, VHA officials stated that they implemented more reliable measures of patient wait times for primary and specialty care. In fiscal years 2013 and 2014, primary and specialty care appointments for new patients have been measured using time stamps from the VistA scheduling system to report the time elapsed between the date the appointment was created—instead of the desired date—and the date the appointment was completed. VHA officials stated that they made the change from using desired date to creation date based on a study that showed a significant association between new patient wait times using the date the appointment was created and self-reported patient satisfaction with the timeliness of VHA appointments. VA, in its FY 2013 Performance and Accountability Report, reported that VHA completed 40 percent of new patient specialty care appointments within 14 days of the date the appointment was created in fiscal year 2013; in contrast, VHA completed 90 percent of new patient specialty care appointments within 14 days of the desired date in fiscal year 2012. VHA also modified its measurement of wait times for established patients, keeping the appointment desired date as the starting point, and using the date of the pending scheduled appointment, instead of the date of the completed appointment, as the end date for both primary and specialty care. VHA officials stated that they decided to use the pending appointment date instead of the completed appointment date because the pending appointment date does not include the time accrued by patient no-shows and cancelled appointments. In a June 5, 2014 statement from the Acting Secretary, VA indicated that it is removing measures related to the 14-day performance goal from VISN and VAMC directors’ performance contracts.
Second, we recommended that the Secretary of VA direct the Under Secretary for Health to take actions to ensure VAMCs consistently implement VHA’s scheduling policy and ensure that all staff complete required training. In response, VHA officials stated that the department was in the process of revising the VHA scheduling policy to include changes, such as the new methodology for measuring wait times, and improvements and standardization of the use of the electronic wait list. In March 2013, VHA distributed guidance, via memo, to VAMCs describing this information and also offered webinars to VHA staff on eight dates in April and May of 2013. In June 2014, VHA officials told us that they were in the process of further revising the scheduling policy, in part to reflect findings from VA’s system-wide access audit, and planned to issue a memo regarding new scheduling procedures at a future date. To assist VISNs and VAMCs in the task of verifying that all staff have completed required scheduler training, VHA has developed a database that will allow a VAMC to identify all staff that have scheduled appointments and the volume of appointments scheduled by each; VAMC staff can then compare this information to the list of staff that have completed the required training. However, as of June 2014, VHA officials have not established a target date for when this database would be made available for use by VAMCs. Third, we recommended that the Secretary of VA direct the Under Secretary for Health to take actions to require VAMCs to routinely assess scheduling needs for purposes of allocation of staffing resources. VHA officials stated that they are continuing to work on identifying the best methodology to carry out this recommendation, but stated that the database that tracks the volume of appointments scheduled by individual staff also may prove to be a viable tool to assess staffing needs and the allocation of resources. 
As of June 2014, VHA officials stated that they are continuing to address this recommendation including through internal and external discussions taking place in May and June 2014 regarding VHA scheduling policy. Finally, we recommended that the Secretary of VA direct the Under Secretary for Health to take actions to ensure that VAMCs provide oversight of telephone access, and implement best practices to improve telephone access for clinical care. In response, VHA required each VISN director to require VAMCs to assess their current telephone service against the VHA telephone improvement guide and to electronically post an improvement plan with quarterly updates. VAMCs are required to routinely update progress on the improvement plan. VHA officials cited improvement in telephone response and call abandonment rates since VAMCs were required to implement improvement plans. Additionally, VHA officials said that the department has contracted with an outside vendor to assess VHA’s telephone infrastructure and business process and was reviewing the findings from the first vendor report in June 2014. Although VA has initiated actions to address our recommendations, we believe that continued work is needed to ensure these actions are fully implemented in a timely fashion. Our findings regarding incorrect use of the desired date in the scheduling system and the electronic wait list are consistent with VHA’s recent findings from its system-wide access audit, indicating continued system-wide problems that could be addressed, in part, by implementing our recommendations. Furthermore, it is important that VA assess the extent to which these actions are achieving improvements in medical appointment wait times and scheduling oversight as intended. Ultimately, VHA’s ability to ensure and accurately monitor access to timely medical appointments is critical to ensuring quality health care to veterans, who may have medical conditions that worsen if access is delayed. 
Chairman Miller, Ranking Member Michaud, and Members of the Committee, this concludes my statement. I would be pleased to respond to any questions you may have. For further information about this statement, please contact Debra A. Draper at (202) 512-7114 or draperd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Key contributors to this statement were Bonnie Anderson, Assistant Director; Janina Austin, Assistant Director; Rebecca Abela; Jennie Apter; Jacquelyn Hamilton; David Lichtenfeld; Brienne Tierney; and Ann Tynan. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Access to timely medical appointments is critical to ensuring that veterans obtain needed medical care. Over the past few years, there have been numerous reports of VAMCs failing to provide timely care to veterans, and in some cases, these delays have reportedly resulted in harm to patients. As the number of these reports has grown, investigations have been launched by VA's Office of Inspector General and VA to examine VAMCs' medical appointment scheduling and other practices. In December 2012, GAO reported that improvements were needed in the reliability of VHA's reported medical appointment wait times, as well as oversight of the scheduling process. In May 2013, VHA launched the Consult Management Business Rules Initiative to standardize aspects of the consult process and develop system-wide consult data for monitoring. This testimony is based on GAO's ongoing work to update information previously provided to the Committee on April 9, 2014, including information on VHA's (1) process for managing consults; (2) oversight of consults; and (3) progress made implementing GAO's December 2012 recommendations. To conduct this work, GAO has reviewed documents and interviewed VHA officials. Additionally, GAO has interviewed officials from five VAMCs for the consults work and four VAMCs for the scheduling work that varied based on size, complexity, and location. GAO shared the information it used to prepare this statement with VA and incorporated its comments as appropriate. GAO's ongoing work examining the Department of Veterans Affairs' (VA) Veterans Health Administration's (VHA) process for managing outpatient specialty care consults has identified examples of delays in veterans receiving outpatient specialty care. GAO has found consults—requests for evaluation or management of a patient for a specific clinical concern—that were not processed in accordance with VHA timeliness guidelines. For example, consults were not reviewed within 7 days, or completed within 90 days.
For 31 of the 150 consults GAO reviewed (21 percent), the consult records indicated that VA medical centers (VAMC) did not meet the 7-day review requirement. In addition, GAO found that veterans received care for 86 of the 150 consults (57 percent), but in only 28 of the consults (19 percent) was the care provided within 90 days. For the remaining 64 consults (43 percent), the patients did not receive the requested care. For 4 of the 10 physical therapy consults GAO reviewed for one VAMC, between 108 and 152 days elapsed with no apparent actions taken to schedule an appointment for the veteran. For 1 of these consults, several months passed before the veteran was referred for care to a non-VA health care facility. VAMC officials cited increased demand for services, patient no-shows, and cancelled appointments among the factors that lead to delays and hinder their ability to meet VHA's guideline of completing consults within 90 days of being requested. VA officials indicated that they may refer veterans to non-VA providers to help mitigate delays in care. GAO's ongoing work also has identified limitations in VHA's implementation and oversight of its new consult business rules designed to standardize aspects of the clinical consult process. Specifically, GAO has identified variation in how the five VAMCs reviewed have implemented key aspects of the business rules, such as strategies for managing future care consults—requests for specialty care appointments that are not clinically needed for more than 90 days. However, it is not clear to what extent VHA is aware of the various strategies that VAMCs are using to comply with this task. Furthermore, oversight of the implementation of the business rules has been limited and does not include independent verification of VAMC actions. Because this work is ongoing, GAO is not making recommendations on VHA's consult process at this time.
In December 2012, GAO reported that VHA's outpatient medical appointment wait times were unreliable and recommended that VA take actions to: (1) improve the reliability of its outpatient medical appointment wait time measures; (2) ensure VAMCs consistently implement VHA's scheduling policy, including the staff training requirements; (3) require VAMCs to routinely assess scheduling needs and allocate staffing resources accordingly; and (4) ensure that VAMCs provide oversight of telephone access, and implement best practices. As of June 2014, VA has reported ongoing actions to address these recommendations, but GAO found that continued work is needed to ensure these actions are fully implemented in a timely fashion. Ultimately, VHA's ability to ensure and accurately monitor access to timely medical appointments is critical to ensuring quality health care is provided to veterans, who may have medical conditions that worsen if care is delayed.
Intellectual property covers a broad range of creations, from inventions and technological enhancements to methods of doing business, computer programs, literary and musical works, and architectural drawings. Government-sponsored research has an equally broad range—from research in mathematical and physical sciences, computer and information sciences, biological and environmental sciences, and medical sciences, to research supporting military programs of the Department of Defense (DOD) and the atomic energy defense activity of the Department of Energy. The objective of some of this research, for example, cancer research, is to gain more comprehensive knowledge or understanding of the subject under study, without specific application. According to the National Science Foundation, about 3 percent of DOD’s R&D funding and 41 percent of R&D funding by other agencies go toward this type of study. Other research is directed at either gaining knowledge to meet a specific need or developing specific materials, devices, or systems—such as a weapon system or the International Space Station. About 97 percent of DOD’s R&D dollars and 55 percent of R&D dollars from other agencies support applied research. The primary vehicles for funding research efforts are grants, cooperative agreements, and contracts. Today, our focus is largely on intellectual property rights that the government acquires through research done under contracts, which primarily fund applied research. As illustrated in the figure below, the R&D landscape has changed considerably over the past several decades. While the federal government had once been the main provider of the nation’s R&D funds, accounting for 54 percent in 1953 and as much as 67 percent in 1964, as of 2000, its share amounted to 26 percent, or about $70 billion, according to the National Science Foundation. Patents, trademarks, copyrights, and trade secrets protect intellectual property. 
Only the federal government issues patents and registers copyrights, while trademarks may also be registered by states that have their own registration laws. State law governs trade secrets. Anyone who uses the intellectual property of another without proper authorization is said to have ‘infringed’ the property. Traditionally, an intellectual property owner’s remedy for such unauthorized use would be a lawsuit for injunctive or monetary relief. Prior to 1980, the government generally retained title to any inventions created under federal research grants and contracts, although the specific policies varied among agencies. Over time, this policy increasingly became a source of dissatisfaction. First, there was a general belief that the results of government-owned research were not being made available to those who could use them. Second, advances attributable to university-based research funded by the government were not pursued because the universities had little incentive to seek use for inventions to which the government held title. Finally, the maze of rules and regulations and the lack of a uniform policy for government-owned inventions often frustrated those who did seek to use the research. The Bayh-Dole Act was passed in 1980 to address these concerns by creating a uniform patent policy for inventions resulting from federally sponsored research and development agreements. The act applied to small businesses, universities, and other nonprofit organizations and generally gave them the right to retain title to and profit from their inventions, provided they adhered to certain requirements. The government retained nonexclusive, nontransferable, irrevocable, paid-up (royalty-free) licenses to use the inventions. A presidential memorandum issued to the executive branch agencies on February 18, 1983, extended the Bayh-Dole Act to large businesses. 
It extended the patent policy of Bayh-Dole to any invention made in the performance of federally funded research and development contracts, grants, and cooperative agreements to the extent permitted by law. On April 10, 1987, the president issued Executive Order 12591, which, among other things, required executive agencies to promote commercialization in accordance with the 1983 presidential memorandum. Below are highlights of requirements related to the Bayh-Dole Act and Executive Order 12591. In addition to the traditional categories of intellectual property protections, government procurement regulations provide a layer of rights and obligations known as “data rights.” These regulations describe the rights that the government may obtain to two types of data, computer software and technical data, delivered or produced under a government contract. These rights may include permission to use, reproduce, disclose, modify, adapt, or disseminate the technical data. A key feature of the DOD framework for data rights, and one implicit in the civilian agency framework, is that the extent of the government’s rights is related to the degree of funding the government is providing. In some cases, the government may decide that it is in its best interest to forgo rights to technical data. For example, if the government wants to minimize its costs of having supercomputers developed exclusively for government use, it could waive its rights in order to spur commercial development. At the same time, situations arise where the government has a strong interest in obtaining and retaining data rights—either unlimited rights or government-purpose rights. These include long-term projects, such as cleanup at nuclear weapon sites, where the government may want to avoid disrupting the program if a change in contractors occurs. These also include projects that affect safety and security. 
For example, the Transportation Security Administration recently purchased the data rights for an explosives detection system manufactured by one company. The agency believed data rights were necessary in order to expand production of these machines and meet the congressionally mandated deadline for creating an explosives detection capability at airports. We contacted multiple agencies responsible for $191 billion, or 88 percent, of federal procurements in fiscal year 2001. At these agencies, we met with those officials responsible for procurement, management, and oversight of contractor-derived intellectual property. We also analyzed agency and industry studies as well as agency guidance and requirements. In addition, we met with representatives from (1) commercial enterprises that either contract with the government or develop technologies of interest to the government as well as (2) associations representing commercial firms doing business with the government. Both industry and agency officials covered by our review had concerns about the effectiveness and efficiency of negotiating contracts involving intellectual property issues. These concerns include a lack of good planning and expertise within the government and industry’s apprehensions over certain government rights to data and inventions as well as the government’s ability to protect proprietary data. Industry officials were particularly concerned about the span of rights the government wants over technical data. Industry officials asserted that rather than making a careful assessment of its needs, some contracting officers wanted to operate in a “comfort zone” by asking for unlimited rights to data, even when the research built on existing company technology. This was disconcerting to potential contractors because it meant that the government could give data to anyone it chose, including potential competitors. 
Some companies mentioned specific instances in which they delayed or declined participation in government contracts. These situations occurred when companies believed their core technologies would be at risk and the benefits from working with the government did not outweigh the risk of losing their rights to these technologies. Most agency officials said that intellectual property issues were at times hotly contested and could become the subject of intense negotiations. While agency officials indicated that problems related to intellectual property rights may have limited access to particular companies, they did not raise or cite specific instances where the agency was unable to acquire needed technology. In some situations, agencies exercised flexibility to overcome particular concerns and keep industry engaged in research efforts. DOD officials viewed intellectual property requirements and the manner in which these requirements are implemented as significantly affecting their ability to attract leading technology firms to DOD research and development activities. This concerns DOD, which believes it needs to engage leading firms in joint research efforts in order to promote development of commercial technologies that meet military needs. Lastly, agency officials, particularly DOD officials, voiced concerns about having access to technical data necessary to support and maintain systems over their useful life as well as the ability to procure some systems competitively, especially smaller systems. These officials stated that if they did not obtain sufficient data rights, they could not use competitive approaches to acquire support functions or additional units. We have reported on the difficulties that occurred when appropriate data rights were not obtained. 
In one instance, when the Army tried to procure data rights later in the system’s life cycle, the manufacturer’s price for the data was $100 million—almost as much as the entire program cost ($120 million) from 1996 through 2001. We have recommended, among other things, that DOD place greater emphasis on obtaining priced options for the purchase of technical data at the time proposals for new weapon systems are being considered—when the government’s negotiating leverage is the greatest. Agency officials we spoke with generally agreed that some actions could be taken to address concerns about limited awareness of flexibilities and expertise without any legislative changes. Specifically, agencies could promote greater use of the flexibilities already available to them. DOD, for example, is advocating greater use of its “other transaction authority.” This authority enables DOD to enter into agreements that are generally not subject to the federal laws and regulations governing standard contracts, grants, and cooperative agreements. By using this authority, where appropriate, DOD can increase its flexibility in negotiating intellectual property provisions and attract commercial firms that traditionally did not perform research for the government. A second example of agency flexibility to address industry concerns over the allocation of rights under the Bayh-Dole Act is a form of waiver, known as a determination of exceptional circumstances. This waiver has been used, for example, to work out intellectual property rights between pharmaceutical companies and universities or other firms. In these cases, pharmaceutical companies provide compounds that NIH tests to identify whether these compounds are effective in treating additional diseases or ailments. Universities and other commercial firms perform these tests. 
The exceptional circumstances determination allows the pharmaceutical companies to retain the intellectual property rights to any discoveries coming out of these tests, rather than the performer of the tests. An NIH official explained that a determination of exceptional circumstances could be made in these cases because the program would not exist in the absence of such a determination. Agencies could also strengthen advance planning on data requirements. For example, attention needs to be paid to what types of maintenance or support strategies will be pursued and what data rights are needed to support alternative strategies. Also, consideration could be given to obtaining priced options for the purchase of data rights that may be needed later.
Improperly defined intellectual property rights in a government contract can result in the loss of an entity's critical assets or limit the development of applications critical to public health or safety. Conversely, successful contracts can spur economic development, innovation, and growth, and dramatically improve the quality of delivered goods and services. Contracting for intellectual property rights is difficult. The stakes are high, and negotiating positions are frequently ill-defined. Moreover, the concerns raised must be tempered with the understanding that government contracting can be challenging even without the complexities of intellectual property rights. Further, contractors often have reasons for not wanting to contract with the government, including concerns over profitability, capacity, accounting and administrative requirements, and opportunity costs. Within the commercial sector, companies identified a number of specific intellectual property concerns that affected their willingness to contract with the government. These included perceived poor definitions of what technical data is needed by the government, issues with the government's ability to protect proprietary data adequately, and unwillingness on the part of government officials to exercise the flexibilities available concerning intellectual property rights. Some of these concerns were based on perception rather than experience, but, according to company officials, they nevertheless influence decisions not to seek contracts or collaborate with federal government entities. Agency officials shared many of these concerns. Poor upfront planning and limited experience/expertise among the federal contracting workforce were cited as impediments. Although agency officials indicated that intellectual property rights problems may have limited access to particular companies, they did not cite specific instances where the agency was unable to acquire needed technology. 
Agency officials said that improved training and awareness of the flexibility already in place as well as a better definition of data needs on individual contracts would improve the situation.
Nationwide, more than 16,000 POTWs serve more than 200 million people, or about 70 percent of the nation’s total population. The remaining population is served by privately-owned utilities or by on-site systems, such as septic tanks. A relative handful of large wastewater systems serve the great majority of people, as about 500 large public wastewater systems provide service to 62 percent of the population connected to a sewer system. In addition to serving residential populations, approximately 27,000 commercial and industrial facilities rely on wastewater treatment facilities to treat their wastewater. POTWs discharge treated effluent into receiving waters and are regulated under the Clean Water Act. Wastewater systems vary by size and other factors, but all include a collection system and a treatment facility. The collection system is the underground network of sewers including both sanitary and storm water collection lines. Collection systems tend to be dispersed geographically and have multiple access points, including drains, catch basins, and manholes. Lines may range from 4 inches to greater than 20 feet in diameter, and access is usually gained through manholes that are typically 300 feet apart. Many collection systems rely on gravity to maintain the flow of sewage through the pipes toward the treatment plant. However, collection systems may also depend on pumping stations to propel the flow when gravity alone is insufficient. Nationwide, there are approximately 800,000 miles of sewer lines and 100,000 major pumping stations. The wastewater treatment facility receives wastewater from the collection system and begins the treatment process, which typically involves several stages before treated effluent is released into receiving waters. Primary treatment includes removal of larger objects through a screening device or a grit removal system, and the removal of solids through sedimentation. 
Secondary treatment includes a biological process that consumes pollutants, as well as final sedimentation. Some facilities also use tertiary treatment to further remove nutrients and other matter. Following these treatments, the wastewater is disinfected to destroy harmful bacteria and viruses. Disinfection is often accomplished with chlorine, which is stored in gaseous or liquid form on-site at the wastewater treatment plant. The collection system and treatment process are typically monitored and controlled by a Supervisory Control and Data Acquisition (SCADA) system, which allows utilities to control such things as the amount of chlorine needed for disinfection. Wastewater treatment facilities may possess certain characteristics that terrorists could exploit either to impair the wastewater treatment process or to damage surrounding communities and infrastructure. For example, the numerous storm drains, manholes, and sewers that make up a community’s wastewater collection system could be used to covertly place explosives beneath a major population center or to introduce substances that may damage a wastewater treatment plant’s process. Damage to (or destruction of) tanks that hold large amounts of gaseous chlorine used to disinfect wastewater could release the potentially lethal gas into the atmosphere. Such events could result in loss of life, destruction of property, and harm to the environment. Documented accidents and intentional acts highlight the destruction that could arise from an attack on a wastewater system. In June 1977 in Akron, Ohio, an intentional release of naphtha, a cleaning solvent, and alcohol into a sewer by vandals at a rubber manufacturing plant caused explosions 3.5 miles away from the plant, damaging about 5,400 feet of sewer line and resulting in more than $10 million in damage. 
In 1981 in Louisville, Kentucky, thousands of gallons of a highly flammable solvent, hexane, spilled into the sewer lines from a local processing plant. Fumes from the solvent ignited, and the resulting explosions collapsed a 12-foot diameter pipe and damaged more than 2 miles of streets. No one was seriously injured, but sewer line repairs took 20 months, followed by several more months to repair the streets. In 1992 in Guadalajara, Mexico, a gasoline leak into a sewer caused explosions that killed 215 people, injured 1,500 others, damaged 1,600 buildings, and destroyed 1.25 miles of sewer. In 2002 in Hagerstown, Maryland, chemicals from an unknown source entered the wastewater treatment plant and destroyed the facility’s biological treatment process. The event resulted in the discharge of millions of gallons of partially treated sewage into a major tributary of the Potomac River, less than 100 miles from a water supply intake for the Washington, D.C., metropolitan area. In January 2005, we reported the views of 50 nationally recognized experts on key issues concerning wastewater security. Our panel of experts identified five key wastewater assets as most vulnerable to terrorist attacks: the collection systems’ network of sewers (42 of 50 experts), treatment chemicals (32 of 50 experts), key components of the treatment plant (29 of 50 experts), control systems (18 of 50 experts), and pumping stations (16 of 50 experts). When asked to identify and set priorities for the security-enhancing activities most deserving of federal support, the expert panel identified 11 key actions, but ranked three as deserving highest priority—replacing gaseous chemicals used in the wastewater treatment process; improving local, state, and regional efforts to coordinate responses in advance of a potential terrorist threat; and completing vulnerability assessments for individual wastewater systems. 
Federal law does not address wastewater security as comprehensively as it does drinking water security. In particular, wastewater facilities are not required by law to complete vulnerability assessments. The Clean Air Act does require wastewater facilities using certain amounts of hazardous substances, such as chlorine gas, to submit to EPA a risk management plan that lays out accident prevention and emergency response activities. Also, under EPA guidance, the Clean Water State Revolving Fund can be used in many instances for certain wastewater system security enhancements. While federal law governing wastewater security is limited, in December 2003, the president issued Homeland Security Presidential Directive 7 (HSPD-7). The directive designated EPA as the lead agency to oversee the security of the water sector, including both drinking water and wastewater critical infrastructures. In 2002, Congress passed the Bioterrorism Act, which amended various laws, including the Safe Drinking Water Act. The Bioterrorism Act required drinking water systems serving more than 3,300 people to complete vulnerability assessments of their facilities by June 2004 and to prepare or update an existing emergency response plan. The Bioterrorism Act required the assessments to include, but not be limited to, a review of six components: (1) pipes and constructed conveyances; (2) physical barriers; (3) water collection, pretreatment, treatment, storage, and distribution facilities; (4) electronic, computer, or other automated systems which are utilized by the public water system; (5) the use, storage, or handling of various chemicals; and (6) the operation and maintenance of such systems. Under the act, the emergency response plans were to include plans, procedures, and identification of equipment to lessen the impact on public health and the drinking water supply of terrorist attacks or other intentional acts against drinking water systems. 
The act authorized $210 million for fiscal year 2002, mostly to assist drinking water systems in completing vulnerability assessments, preparing or updating response plans, and making needed security improvements. Drinking water systems are not required to implement any risk-reduction actions based on their vulnerability assessments or report to EPA on measures that have been implemented. In 2003, the Congress considered alternative bills that would have encouraged or required wastewater treatment plants to assess the vulnerability of wastewater facilities, make physical security improvements, and conduct research. However, the legislation did not become law and, consequently, no such requirement or specific funding exists for wastewater facilities. While federal law does not require wastewater systems to take security measures to protect specifically against a terrorist attack, it does require certain wastewater facilities to take security precautions that could mitigate the consequences of such an attack. For example, the 1990 Clean Air Act amendments mandated EPA oversight of risk management planning at facilities that handle more than specified-threshold quantities of hazardous substances, including the gaseous chlorine often used as a disinfectant at wastewater facilities. Specifically, EPA regulations implementing the Clean Air Act require these facilities to prepare Risk Management Plans (RMPs) that summarize the potential threat of sudden, accidental, large releases of certain chemicals, including the off-site consequences of a worst-case chemical accident, and the facility’s plan to prevent releases and mitigate any damage. RMPs are to be revised and resubmitted to EPA at least every 5 years, and EPA is required to review them and require revisions, if necessary. 
For a March 2003 report, EPA told us it believed the Clean Air Act could be interpreted to provide authority to address site security from terrorist attacks at RMP facilities, because the act imposes certain requirements on these facilities regarding “accidental releases.” The act defines an accidental release as an unanticipated emission of a regulated substance or other extremely hazardous substance into the air, so any chemical release caused by a terrorist attack could be considered “unanticipated” and covered under the Clean Air Act. Such an interpretation would provide EPA with authority under the act’s RMP provisions and general duty clause to require security measures or vulnerability assessments with regard to terrorism. However, EPA has not attempted to use these Clean Air Act provisions because it is concerned that such an interpretation would pose significant litigation risk and has concluded that chemical facility security would be more effectively addressed by passage of specific legislation. Wastewater facilities that store certain amounts of hazardous chemicals may also be subject to the Resource Conservation and Recovery Act. Under regulations implementing the act, facilities that house hazardous waste generally must take certain security actions, such as posting warning signs and using a 24-hour surveillance system, or surrounding the active portion of the facility with a barrier and controlled entry gates. However, according to EPA, these security measures are aimed at keeping out trespassers or wanderers, not intentional intruders. Other federal statutes impose safety requirements on certain wastewater facilities that may incidentally reduce the likelihood and mitigate the consequences of terrorist attacks. For example, the Occupational Safety and Health Act imposes a number of safety requirements, including a general duty to furnish a workplace free from recognized hazards that may cause death or serious physical harm to employees. 
The Emergency Planning and Community Right-to-Know Act requires owners of facilities that maintain specified quantities of certain extremely hazardous chemicals to submit information annually on their chemical inventory to state and local emergency response officials. The act also requires that each state establish a State Emergency Response Commission to oversee local emergency planning and create local emergency planning committees. These committees must develop and periodically review their communities’ emergency response plans, including the identification of chemical facilities, and outline procedures for response personnel to follow in the event of a chemical incident. Aside from statutes that address some areas of wastewater security, EPA has asserted that federal funding is available for wastewater security-related measures through the Clean Water State Revolving Fund (CWSRF) program. The CWSRF is an EPA-administered program that provides grants to the states to fund a variety of water-quality projects, including those at municipal wastewater treatment facilities. States may use the funds to provide loans to local governments to assist wastewater utilities in making infrastructure improvements needed to protect public health and ensure compliance with the Clean Water Act. According to EPA, states may use the CWSRF to assist utilities in completing a variety of security-related actions, such as vulnerability assessments, contingency plans, and emergency response plans. In addition, EPA has identified other infrastructure improvements that may be eligible for funding, such as the conversion from gaseous chemicals to alternative treatment processes, installation of fencing or security cameras, securing large sanitary sewers, and installing tamper-proof manholes. 
In our January 2005 report summarizing experts’ views on wastewater security, a number of experts expressed caution about relying heavily on the CWSRF program to support security enhancements, largely because of the time-lag in obtaining funds for security-related measures, and because such demands on the CWSRF would divert needed funding away from the kind of critical infrastructure investments that are the CWSRF program’s primary purpose. Another source of federal funding potentially available for wastewater security-related measures is the State Homeland Security Grant Program administered by DHS. This program’s primary objectives are to enhance the capacity of state and local emergency responders to prevent, protect against, respond to, and recover from terrorist incidents involving chemical, biological, radiological, nuclear, and explosive devices; agriculture; and cyber attacks. Under the program, grants are provided to states for a variety of purposes, including homeland security-related training and protection of critical infrastructure, although authority to make physical security improvements is limited. States are required to allocate at least 80 percent of these grant funds to “local units of governments,” which, as defined in the conference report accompanying the Department of Homeland Security Appropriations Act for fiscal year 2006, include water districts, special districts, and other political subdivisions of a state. In December 2003, the president issued HSPD-7, which established a national policy for federal departments and agencies to identify and set priorities for the nation’s critical infrastructures and to protect them from terrorist attacks. HSPD-7 established EPA as the lead federal agency to oversee the security of the water sector, both drinking water and wastewater. Presidential Decision Directive 63 had done so earlier in May 1998, with a focus primarily on water supply. 
Under HSPD-7, EPA is responsible for (1) identifying, prioritizing, and coordinating infrastructure protection activities for the nation's drinking water and water treatment systems; (2) working with federal departments and agencies, state and local governments, and the private sector to facilitate vulnerability assessments; (3) encouraging the development of risk management strategies to protect against and mitigate the effects of potential attacks on critical resources; and (4) developing mechanisms for information sharing and analysis. HSPD-7 also called for DHS to integrate all critical infrastructure security efforts among federal agencies and to complete a comprehensive national plan for critical infrastructure and key resource protection—now called the National Infrastructure Protection Plan. Under HSPD-7, seven federal agencies, including EPA, were designated sector-specific agencies. DHS issued guidance tasking each sector-specific agency with developing sector-specific plans for input into the comprehensive plan. Each sector-specific plan is supposed to outline strategies for (1) collaborating with all relevant federal departments and agencies, state and local governments, and the private sector; (2) identifying assets; (3) conducting or facilitating vulnerability assessments; and (4) encouraging risk management strategies to protect against and mitigate the effects of an attack. The water sector-specific plan will be an appendix to the National Infrastructure Protection Plan. On January 20, 2006, DHS issued its revised National Infrastructure Protection Plan based on comments it received on an earlier version of the plan. DHS accepted additional comments on the revised version until February 6, 2006, and expects to issue a final version of the plan later in 2006. Sector-specific agencies are required to submit their sector-specific plans to DHS within 6 months after the National Infrastructure Protection Plan is made final. 
Our survey of large wastewater facilities indicates that many have taken steps to improve security. Most facilities that responded to our survey have completed, have under way, or plan to complete some type of security assessment. Roughly two-thirds of facilities also reported they used a disinfectant other than gaseous chlorine or plan to switch from the gas. Of those facilities that continue to use gaseous chlorine, many have taken steps to increase security by limiting and monitoring access to gaseous chlorine storage areas or through other actions. Survey responses show that since 9/11, wastewater treatment facilities have also focused security efforts on controlling and limiting access to their treatment plants. However, facilities have taken fewer security actions intended to protect their collection systems. Many facilities reported that taking other measures to protect their treatment plants, including converting from gaseous chlorine to a safer disinfection process, took priority over protecting infrastructure in their collection systems. Survey results show that a lack of funding and of federal security guidelines remains a concern for many wastewater facility managers. Seventy-four percent of facilities that responded to our survey reported they completed, were in the process of completing, or planned to complete some type of security assessment—either a vulnerability assessment, similar to that which was required of drinking water facilities under the Bioterrorism Act, or another type of security assessment. As shown in figure 1, 106 facilities—or 51 percent of those responding to our survey—indicated that they had completed a vulnerability assessment or were currently conducting a vulnerability assessment. Of the 106 facilities that indicated they had either completed a vulnerability assessment or had one under way, 80 indicated their vulnerability assessments were complete, while 26 indicated the assessment was still in process. 
As shown in the figure, 22 facilities—or 11 percent of all responses—indicated they had conducted another type of security assessment or were in the process of conducting another type of security assessment, while 24 facilities—or 12 percent of all responses—indicated they plan to conduct either a vulnerability or another type of security assessment. Twenty-three facilities—or 11 percent of total responses—indicated they had no plans to conduct any type of security assessment. When asked to identify reasons for not conducting a vulnerability or security assessment, 17 of these 23 facilities cited a lack of requirement to do so, while 15 noted that they considered security actions taken at their facilities adequate for their security needs. Thirteen of these facilities indicated that their emergency response plan was updated and this seemed sufficient to address potential vulnerabilities. Facilities cited several reasons for completing a vulnerability or some other type of security assessment, but most—roughly 77 percent—reported doing so on their own initiative. Thirty-seven percent of facilities reported that they did so in conjunction with the required assessment for their drinking water facility. To a lesser extent, facilities cited state, local, and utility governing-body requirements as reasons they conducted assessments. See appendix II for survey results related to vulnerability and security assessments at large wastewater facilities. As shown in figure 2, over half of large wastewater facilities in our survey reported they use an alternative to gaseous chlorine in their disinfection process. These results are consistent with studies which conclude that over the past decade, wastewater treatment facilities have moved away from gaseous chlorine as a disinfectant. Of the facilities not using gaseous chlorine, 89 reported using sodium hypochlorite as their primary disinfectant. 
Sodium hypochlorite is essentially a strong version of household bleach and is considered safer than gaseous chlorine. Seventeen facilities reported using ultraviolet light as their primary disinfectant. The remaining facilities did not identify the type of disinfectant method used at their facility. In our January 2005 report, we noted that, for an individual plant, the change to sodium hypochlorite may require approximately $12.5 million for new equipment and increase annual chemical costs from $600,000 for gaseous chlorine to over $2 million for sodium hypochlorite. However, one expert noted some costs may be offset through savings in regulatory paperwork and certain emergency planning efforts. In our survey, we asked facilities that switched from gaseous chlorine if their annual costs increased, stayed the same, or decreased after switching to an alternate disinfection method. Fifty-eight facilities reported that costs increased, 11 noted that costs have stayed about the same, and one facility reported that costs decreased. Of the 85 facilities that reported use of gaseous chlorine, 20—or roughly 10 percent of all 206 reporting facilities—indicated that they have plans to switch from gaseous chlorine to another disinfectant. In addition, as shown in figure 3, many reported taking additional steps after 9/11 to mitigate the potential risks associated with continued reliance on chlorine. Forty-one facilities using gaseous chlorine reported that they instituted controls for selective access to chlorine storage areas after 9/11, while 30 facilities reported making other security improvements to the storage area, such as installing electronic surveillance of the chlorine storage area or improving gates and fencing.
Fewer facilities reported that they decided to store gaseous chlorine in smaller-quantity containers, likely because most reported they already stored the gas in one-ton containers, which are among the smallest containers used at large wastewater facilities for the gas. See appendix II for survey results on gaseous chlorine use at large wastewater facilities. As shown in figure 4, many facilities reported taking basic security measures prior to 9/11, such as installing vehicle gates and security fencing. Survey respondents also indicated that many information technology security measures, such as virus protection programs, backup power supplies, and firewall and intrusion detection systems, were implemented before 9/11. The figure shows that security enhancements made or planned by large wastewater facilities after 9/11 generally focus on controlling access to the treatment plant. Such security enhancements include adding visual surveillance monitoring, increasing security lighting, implementing employee and visitor identification policies, adding guard stations, and upgrading SCADA capability and security. Importantly, few facilities reported taking measures to address collection system vulnerabilities other than having available redundant pumping devices or collection bypass systems. For example, few have installed or plan to install manhole intrusion sensors, manhole locks, or sensors to detect toxics or other biochemical threats to their collection systems. This lack of attention to collection system vulnerabilities is important because 42 of the 50 experts polled in our January 2005 report on wastewater security identified the collection systems’ network of sanitary, storm, and combined sewers as the most vulnerable asset of a wastewater utility. Several noted that sewers make underground travel from a point of entry to a potential target almost undetectable, possibly allowing sewers to be used as an underground transport system for explosive or toxic agents. 
Many facilities reported that other measures to protect their treatment plants, including converting from gaseous chlorine to a safer disinfection process, took priority over protecting infrastructure in their collection systems. Other managers cited the difficulty and expense in securing collection systems that, by nature, cover a large area and have many, often remote, access points. One manager expressed confusion about whether to concentrate monitoring resources on large interceptor sewer lines to prevent entry or on toxic materials that could be introduced at nearly every access point to his system. Others noted the lack of facility control over collection systems. One facility manager told us his facility treats wastewater that is collected from 17 separate collection systems. Finally, a number of respondents questioned whether the technologies purportedly available to detect potential threats introduced to collection systems are sufficiently capable of achieving this objective. Nonetheless, a few facility managers with whom we spoke told us they have made efforts to address collection system security, particularly in the protection of their pump stations. One facility manager told us his facility has a project under way to install security locks and card-access controls at all 93 of its pumping stations. According to the manager, the concentration of capital equipment at the pumping stations, together with the need to protect it and the potential impact of its damage or destruction, prompted the facility to direct its capital improvement efforts to securing pumping stations. While many facilities in our survey indicated they made some security improvements after 9/11, facility managers cited limited resources and other priorities as reasons for not implementing further security measures. Facility managers and other industry experts with whom we spoke noted that security upgrades must compete with other infrastructure needs for available resources.
For instance, many wastewater facilities' collection systems are outdated, and they are already facing large costs to expand and repair their aging systems and reduce incidences of combined sewer overflows. Major U.S. cities, such as Washington, D.C., and Cincinnati, Ohio, are facing costs between $1 billion and $2 billion to implement necessary capital improvements. See appendix II for survey results on physical, personnel, and information technology security measures taken at large wastewater facilities. In our survey, we asked wastewater facility managers what the federal government could do to improve security at wastewater facilities. Facility manager responses are categorized in table 1. Facility managers predominantly recommended additional funding to further wastewater security improvements. Many facility managers recommended targeting funding to specific measures, such as performing vulnerability assessments, purchasing specific security equipment such as surveillance cameras, or covering costs associated with switching from gaseous chlorine to a safer disinfectant. To a much lesser extent, wastewater facility managers commented that the federal government could be of greater assistance in providing security guidance, standards, and best practices. For example, one facility manager we interviewed expressed a need for federal guidance and best practices on collection system security. For its part, in 2002, EPA provided funding to the American Society of Civil Engineers (ASCE) to develop a set of security guidance documents that cover the design of online contaminant monitoring systems, and physical security enhancements of drinking water, wastewater, and storm water infrastructure systems. ASCE subcontracted with the American Water Works Association and the Water Environment Federation (WEF) for assistance on this project.
In 2004, these documents were released as interim voluntary security design standards for the water sector; finalized standards are to be established in late 2006 or early 2007. These security-focused documents are intended to serve as a foundation to help water utilities address potential vulnerabilities through sound design, construction, and operation and maintenance practices. According to a WEF representative, one set of standards is to be directed at physical security measures for wastewater collection systems. The security standards are to be published in late 2006 and are to include both prescriptive and performance-based criteria that focus on physical security upgrades that reduce risk to water, wastewater, and storm water infrastructure arising from malevolent events. EPA and DHS have a number of initiatives under way related to wastewater facility security. For example, EPA has funded programs to develop vulnerability assessment tools and provide training to wastewater facilities on the use of these tools, while DHS has conducted site assessment visits at wastewater facilities. While these initiatives are helping to address security concerns in the wastewater sector, EPA and DHS efforts could nonetheless be more effective with greater coordination over how best to convey security-related and threat information to the wastewater treatment community. Since 2002, EPA has provided more than $10 million to help address the security needs of the wastewater sector. EPA funded the development and dissemination of several risk assessment methodologies to assist water sector utilities in identifying how to better protect their critical infrastructures. In addition, EPA funded training for wastewater utilities on how to conduct risk assessments and update or complete emergency response plans.
EPA provided funding to the Association of Metropolitan Sewerage Agencies to develop a software tool, called the Vulnerability Self Assessment Tool (VSAT), for wastewater utilities. In addition, through an interagency agreement with EPA, the Department of Energy's Sandia National Laboratories provided training to selected firms in a vulnerability assessment methodology developed by the labs, called the Risk Assessment Methodology for Water Utilities (RAM-W). For vulnerability assessments at smaller water systems, EPA supported the dissemination of the Security and Emergency Management System (SEMS) software tool. Sixty-nine wastewater facilities responding to our survey indicated they used, were currently using, or planned to use the VSAT software to complete a vulnerability or security assessment; 27 facilities indicated they either used, were currently using, or planned to use the RAM-W assessment tool. Another four facilities indicated they either used, were currently using, or planned to use the SEMS software. EPA has also reorganized its own internal structure and sought input from experts outside of the agency to better assist the wastewater industry's security efforts. In particular, in 2003, EPA created a Water Security Division to work with the states, tribes, drinking water and wastewater utilities, and other partners to enhance the security of water and wastewater utilities and the ability to respond effectively to security threats and breaches. In addition, in 2004, the National Drinking Water Advisory Council (NDWAC), at EPA's request, established a Water Security Working Group made up of 16 members from wastewater utilities, drinking water utilities, and environmental and rate-setting organizations to advise on the development of best security practices and policies for water utilities. The group advises the NDWAC on ways to address several specific security needs of the sector.
In June 2005, the working group provided NDWAC with a report that identified features of an active and effective security program and ways to measure the adoption of these practices. As noted, EPA provided funding to ASCE to develop a set of security guidance documents that cover the design of online contaminant monitoring systems, and physical security enhancements of drinking water, wastewater, and storm water infrastructure systems. This effort, called the Water Infrastructure Security Enhancement project, is to address physical infrastructure security needs in the water sector by issuing guidance documents, training materials, and voluntary standards relating to water infrastructure security. The project group is currently developing physical security standards that focus on physical security upgrades to reduce risk to water, wastewater, and storm water infrastructure arising from malevolent acts. For its part, DHS has two broad initiatives that have facilitated efforts to improve wastewater security. First, the Buffer Zone Protection program is a DHS grant program designed to reduce specific vulnerabilities at a critical infrastructure or key resource site by assisting local law enforcement to develop a plan for preventative and protective measures that make it more difficult for terrorists to plan or launch attacks from the immediate vicinity of the site. These plans also identify equipment that could be purchased to mitigate the vulnerabilities. Upon plan approval, DHS grants funds for procuring materials and equipment necessary for implementation of the site's buffer zone protection plan. According to DHS, as of October 31, 2005, security at 14 wastewater facilities has been reviewed under the Buffer Zone Protection program. Under its second broad initiative, the Site Assistance Visits program, DHS visits critical infrastructure sites nationwide to address key areas of concern at facilities requiring security enhancements.
DHS subject matter experts in the areas of physical security measures, system interdependencies, and terrorist attack prevention conduct these visits—generally lasting 1 to 3 days—in which, among other things, the vulnerabilities of the site or facility are identified and mitigation options are discussed. According to DHS, as of October 31, 2005, a total of 350 site assessment visits have been conducted. Of this total, seven were conducted with wastewater facilities. In addition to these programs, DHS funded a National Association of Clean Water Agencies (NACWA) project to develop a decision tree and report template to help water systems assess and examine chlorine gas alternatives for water and wastewater disinfection. The decision tree guides water systems in evaluating the potential costs and benefits of conversion and determining whether an alternative disinfection method will still enable them to meet their permit requirements. The report template is to ensure that the results of the decision tree analysis are reported in a consistent format, improving a water system's ability to pursue and secure any available state or federal funding for conversion. According to a NACWA representative, the association is finishing the design of the decision tool; once DHS reviews and approves the final product, printing of the CD tool will begin. NACWA expects to make the tool available to water and wastewater utilities free of charge no later than the end of March 2006. While EPA and DHS have these wastewater security-related initiatives under way, the Congress has expressed concerns that EPA's homeland security responsibilities are not well articulated in relation to DHS' responsibilities. In the conference report for the fiscal year 2005 Consolidated Appropriations Act, conferees directed EPA to enter into a memorandum of understanding (MOU) with DHS that defines the relationship and responsibilities of the two entities regarding homeland security and protection.
EPA did not enter into the MOU, but instead, on November 1, 2005, issued a report to the Congress entitled "Homeland Security Roles and Responsibilities and Interactions Between EPA and the Department of Homeland Security." The report identified the homeland security-specific authorities, core mission authorities, presidential directives, and existing MOUs EPA uses to implement its homeland security roles and responsibilities. In the report, EPA stated that it believes its homeland security roles and responsibilities are sufficiently delineated not only through statutes, presidential directives, and existing MOUs, but also through planning documents and deliverables associated with a wide variety of collaborative homeland security-related projects that EPA and DHS are carrying out. In December 2002, the Association of Metropolitan Water Agencies (AMWA) received a grant from EPA to establish a communication system to share security information with water sector utilities, known as the Water Information Sharing and Analysis Center (WaterISAC). The WaterISAC is one of thirteen critical infrastructure and key resource sector-specific information sharing and analysis centers. The WaterISAC was designed to meet the information sharing needs of both water and wastewater utilities by providing real-time alerts of possible terrorist activity, allowing for the secure reporting of incidents and the sharing of information among users, and allowing access to a library of security-related information and contaminant databases. Beginning in fiscal year 2003, EPA has annually provided AMWA with a $2 million grant to support the WaterISAC. This grant is augmented by subscription fees paid by drinking water and wastewater systems. In November 2004, the WaterISAC launched a free security advisory system known as the Water Security Channel that distributes federal advisories on security threats via e-mail to the water sector.
The Water Security Channel also includes a searchable archive of federal alerts, advisories, and bulletins. However, it does not provide access to the same level of service as the subscription-based WaterISAC. WaterISAC subscribers receive additional services, including a secure communication system, access to vulnerability assessment tools and resources, access to an online library related to water security issues, and access to databases about chemical, biological, and radiological agents. DHS has also sought to enhance communication between critical infrastructure sectors and the government. Under the Homeland Security Act of 2002, DHS is responsible for reducing the vulnerability of the national infrastructure and for coordinating and communicating with all key stakeholders on homeland security-related matters. According to DHS, to fulfill this mandate, it requires a communication system that provides equal and appropriate access to security information to all owners and operators of critical infrastructure and key resources. In 2004, it piloted a new secure network, the Homeland Security Information Network (HSIN), to help achieve this mandate. HSIN is DHS' primary conduit through which it shares information on domestic terrorist threats, suspicious activity reports, and incident management. It is composed of multiple communities of interest, including the HSIN Critical Sector (HSIN-CS) program, which is intended to enhance the protection, preparedness, and crisis communication and coordination capabilities of the nation's 17 critical infrastructure and key resource sectors identified in HSPD-7. The HSIN platform for critical sectors is being developed and offered to each sector to provide a suite of information and communication tools to share critical information within the sector, with DHS, and eventually across sectors.
Because the water sector is one of the nation's 17 critical infrastructure and key resource sectors, a HSIN-CS portal for the sector, called HSIN Water Sector (HSIN-WS), is currently being developed by DHS. The water sector also established a Water Sector Coordinating Council, with representative members of the water sector community, charged with identifying information and other needs of the sector, including the appropriate use of and the relationship among the WaterISAC, the Water Security Channel, and HSIN. While these efforts are helping to improve communication, staff at EPA and DHS, as well as other industry experts with whom we spoke, have expressed concern that the evolution of the information sharing and dissemination function for the water sector has resulted in several inefficiencies. WaterISAC access is limited to drinking water and wastewater subscribers, plus a restricted number of subscribers from EPA and the state drinking water programs. For example, the agreement limits designated users to five individuals at EPA headquarters and one person in each EPA region, for a total of fifteen EPA users. States are limited to only two users. EPA staff note that access for others in the sector, such as the technical service community, universities, training centers and laboratories, would benefit the overall protection of drinking water and wastewater critical infrastructures. EPA and DHS staff told us that, depending upon the user policy established by the sector, the HSIN network could allow for broader sharing of access than currently available under the WaterISAC. Only a small portion of the water sector is reached by the WaterISAC. According to EPA staff, just over 530 utilities are reached by the WaterISAC, while over 8,000 utilities receive information through the Water Security Channel. However, the Water Security Channel does not provide the same level of notification and information sharing provided by the WaterISAC.
The Water Security Channel is essentially a "push e-mail system" that sends out general security bulletins to water utilities and other users, and allows for searches of previous bulletins. This service is much more limited than that provided to WaterISAC subscribers, who receive a secure communication system for sharing information, access to vulnerability assessment tools and resources, access to an online library related to water security issues, and access to databases about chemical, biological, and radiological agents. One water industry representative told us that the WaterISAC recently lowered its subscription fees due to industry concerns that the fees were limiting WaterISAC subscriptions. EPA staff told us that the water sector generally has less funding available to support ISAC services than other sectors such as electric, financial, and transportation. WaterISAC duplicates some operational functions likely available through HSIN. EPA estimates that roughly $600,000 to $700,000 of the annual $2 million WaterISAC grant is used to support computer hardware and software for the secure web portal. Meanwhile, to support HSIN, DHS funds similar computer software and hardware and its related technical support. EPA staff noted that WaterISAC could make use of the software and hardware platform available through HSIN. EPA staff believed that WaterISAC could then better focus its resources on managing its user list, managing information content on the secure web site, and analyzing and distributing threat information, while leaving DHS to manage and run the hardware and software. The current reach and levels of service offered by the WaterISAC and the Water Security Channel do not meet DHS' objective to establish a communication system that provides equal and appropriate access to security information to all owners and operators in this critical infrastructure area.
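The operational overlap described above lends itself to a quick check. The grant and platform-cost figures below are those reported by EPA staff; the percentage shares are derived here purely for illustration and do not appear in agency documents:

```python
# Share of the annual WaterISAC grant spent on the secure web platform,
# using the dollar figures EPA staff cited. The percentage shares are
# computed here for illustration only.
GRANT_TOTAL = 2_000_000    # annual EPA grant supporting the WaterISAC
PLATFORM_LOW = 600_000     # low estimate of hardware/software platform cost
PLATFORM_HIGH = 700_000    # high estimate of hardware/software platform cost

low_share = PLATFORM_LOW / GRANT_TOTAL
high_share = PLATFORM_HIGH / GRANT_TOTAL
print(f"Platform costs consume {low_share:.0%} to {high_share:.0%} of the grant")
# prints "Platform costs consume 30% to 35% of the grant"
```

Roughly a third of the grant, in other words, supports infrastructure that may duplicate what DHS already funds for HSIN.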
According to EPA and DHS staff, the Water Sector Coordinating Council will consider options to improve coordination between the WaterISAC, the Water Security Channel, and HSIN. Using funding from the supporting grant from EPA, the WaterISAC is currently examining options for coordination between the WaterISAC, the Water Security Channel, and HSIN. EPA noted that this review is ongoing and will likely be presented in preliminary form to the Water Sector Coordinating Council in a mid-March 2006 meeting. However, the scope of the preliminary review is not clear, nor is a time frame set to complete the review. According to DHS, the creation of the DHS Homeland Infrastructure Threat and Risk Analysis Center will assist in information sharing of intelligence threat information between DHS and federal, state, and private sector partners. Many of the nation’s large wastewater facilities have made security improvements since the terrorist attacks of September 11, 2001. Of particular note, many have completed some type of security assessment, and additional facilities have such assessments under way. Our survey also found that wastewater facilities are continuing to move away from the use of potentially dangerous gaseous chlorine as a wastewater disinfectant. One area of continuing concern is the difficulty these facilities are having in addressing vulnerabilities associated with their collection systems. Facility managers explained that with limited funding available, other important measures considered to be more feasible and affordable were assigned greater priority. EPA is attempting to help address this difficult issue through funding the American Society of Civil Engineers project to develop voluntary physical security standards for the water sector. 
Despite limited federal authority over security at the nation’s wastewater facilities, EPA, as the lead agency for water sector security, has worked with DHS and industry groups to advance wastewater security by providing vulnerability assessment tools, training, guidance, and burgeoning information sharing networks. These efforts, combined with the individual initiatives of many wastewater facilities, have resulted in measurable security improvements. However, these efforts could benefit from additional coordination, and we acknowledge and support EPA’s and DHS’ commitment to do so. As these agencies move forward, we believe they should act upon the opportunities we have identified that could improve both the efficiency with which limited dollars are being spent, as well as the delivery of vital information services to the wastewater community. Specifically, a substantial part of the $2 million annual EPA grant that funds WaterISAC goes to support a computer platform that may be available at no cost through HSIN. We recommend that the Administrator of EPA work with DHS and the Water Sector Coordinating Council to identify areas where the WaterISAC and HSIN networks could be better coordinated, focusing in particular on (1) how operational duplications and overlap could be addressed, and (2) how water systems’ access to timely security threat information could be improved. We also recommend that EPA work with DHS and the Water Sector Coordinating Council to identify realistic time frames for the completion of these tasks. We provided a draft of this report to DHS and EPA for review and comment. DHS agreed with the factual content of the report, and its Office of Infrastructure Protection provided written technical comments and clarifications that have been incorporated, as appropriate. In its letter, reproduced in appendix III, EPA concurred with the results of the report. 
EPA’s Water Security Division in the Office of Ground Water and Drinking Water also provided technical comments and clarifications that were incorporated, as appropriate. As agreed with your office, unless you publicly release the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees; interested Members of Congress; the Administrator, Environmental Protection Agency; the Secretary, Department of Homeland Security; and other interested parties. We will also make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Should you or your staff need further information, please contact me at (202) 512-3841 or stephensonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. To identify federal statutory authorities and directives that govern protection of wastewater treatment facilities, we reviewed applicable laws, Homeland Security Presidential Directives, and policies, guidance, and regulations related to wastewater security from the Environmental Protection Agency (EPA) and the Department of Homeland Security (DHS). In addition, we interviewed officials in EPA’s Water Security Division, as well as DHS officials in various areas of the agency. In addition, we spoke with representatives for wastewater industry associations with which EPA has collaborated to actively assist wastewater treatment facilities to address their security issues. To determine the steps critical wastewater treatment facilities have taken since 9/11 to address potential vulnerabilities, we conducted a Web-based survey of the nation’s largest wastewater treatment facilities. 
For the purpose of this review, we defined "critical wastewater facilities" as the 253 wastewater facilities in the United States that have service area populations of 100,000 or greater, as identified in the results of EPA's 2004 Clean Watershed Needs Survey. As a result of Hurricane Katrina, one facility in our initial population of 253 facilities that was identified as a New Orleans facility was omitted, leaving a total of 252 facilities in our survey population. We drafted the survey in consultation with our own survey professionals. In addition, we solicited the review and comment of knowledgeable officials from the National Academy of Sciences, the Water Environment Federation, and the National Association of Clean Water Agencies, as well as several wastewater security experts identified in our January 2005 report on wastewater security. We conducted seven pretests to check that (1) the questions were clear and unambiguous, (2) terminology was used correctly, (3) the information was feasible to obtain, and (4) the survey was comprehensive and unbiased. The pretest sites were chosen to include facilities representing different geographic regions, and utilities with both single and multiple facilities. One pretest was done in person and six were done over the phone. Our survey asked wastewater treatment facility representatives to provide a variety of information, such as whether their facilities had conducted security assessments; what measures, if any, they had taken or were planning to take in several security areas; and their perspectives on what role the federal government should assume in wastewater treatment facility security. The survey was made available between October 1, 2005, and January 15, 2006, and a unique user identification number and a password were provided to each surveyed facility. Three e-mail reminders were sent out to nonresponders, and then follow-up phone calls were made to all nonresponding facilities.
A total of 206 of 252 wastewater treatment facilities responded to the survey, resulting in an 82 percent survey response rate. Wastewater facilities that did not respond to the survey generally cited security concerns related to providing potentially sensitive information or a general policy of not answering surveys. Because this was not a sample survey, there are no sampling errors. However, the practical difficulties of conducting any survey may introduce errors, commonly referred to as non-sampling errors. For example, difficulties in how a particular question is interpreted or in the sources of information that are available to respondents can introduce unwanted variability into the survey results. We took steps both at the data collection and at the analysis phases to minimize these non-sampling errors. Because this was a Web-based survey, respondents entered their answers directly into the electronic questionnaire, which removed one source of error. When the data were analyzed, a second, independent analyst checked all relevant computer programs. To determine what steps EPA and DHS have taken to help wastewater facilities in their efforts to address vulnerabilities, we took several approaches. First, through semi-structured interviews with agency officials and industry association representatives, as well as document reviews, we researched various programs that EPA and DHS have under way. Second, we identified programs that require cross-agency collaboration between EPA and DHS, and we examined in depth those that wastewater treatment facility representatives identified as potentially useful. We also interviewed state and local officials with oversight for wastewater treatment operations and security. Third, one section of our survey gathered information about facility representatives’ experiences with, perspectives on, and expectations for, the federal role in wastewater treatment facility security.
Responses to open-ended questions were categorized and tallied, and their content was analyzed to inform our findings. Finally, to develop conclusions about the level of coordination between the two agencies in the implementation of these programs, we interviewed agency officials about their perspectives on how well the agencies are working together. Welcome to the Survey of Wastewater Treatment Facilities. The U.S. Government Accountability Office (GAO), a congressional audit and evaluation agency, is conducting this survey to identify actions wastewater treatment facilities have taken to protect their operations and infrastructures from terrorism or other threats. Why you are receiving this information: The GAO is surveying wastewater treatment facilities that serve residential populations of 100,000 or greater. An EPA database identified you as the point of contact for your wastewater treatment utility/authority. As a result, we are requesting that you or your designated representative complete the survey. Please complete this survey within two weeks of receipt. We understand that there are great demands on your time. However, your participation in our study is essential for us to provide relevant information to Congress about the actions wastewater treatment facilities have taken to protect their operations from terrorism or other security threats. We greatly appreciate your time and effort in completing this survey. Your responses will be gathered on a SECURED SERVER AND AGGREGATED WITH THOSE OF OTHER FACILITIES. They will be presented in a report to the Congress IN A SUMMARY FORM ONLY. GAO WILL NOT RELEASE INDIVIDUALLY IDENTIFIABLE DATA FROM THIS SURVEY, unless compelled by law or required to do so by the U.S. Congress. Note on skip instructions: some respondents were directed to answer certain survey questions and not others based on their earlier responses. 1. About Your Utility 1.
What is the name of the wastewater treatment utility/authority that is responsible for the facility or facilities for which you are completing a survey(s)? (Click in the box and enter the name.) 2. Does your utility/authority manage BOTH drinking water and wastewater treatment? Check only one answer. 1. 2. 3. Don’t know/No response 3. How many wastewater treatment facilities within your utility/authority serve populations of 100,000 or more? Check only one answer. 1. 2. 3. 4. 5. 6. 6 or more (Please specify number below.) 7. Don’t know/No response 4. (If you checked "6 or more" above) What is the number of wastewater treatment facilities within your utility/authority that serve populations of 100,000 or more? (Click in the box and enter 1 or 2-digit whole number.) N = 26 2. About Your Facility 5. What is the size of the service area population served by your wastewater treatment FACILITY under regular operating conditions? Check only one answer. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. Don’t know/No response *These facilities were identified in the results of EPA’s 2004 Clean Watershed Needs Survey as having service area populations of 100,000 or over and were kept in our survey. 6. Has a vulnerability assessment been completed for your wastewater treatment facility? Check only one answer. 1. Yes (GO TO QUESTION 8.) 2. Currently underway (GO TO QUESTION 8.) 3. 4. Don’t know/No response 7. Has a security or risk assessment been completed for your wastewater treatment facility? Check only one answer. 1. 2. 3. No (GO TO QUESTION 17.) 4. Don’t know/No response (GO TO QUESTION 17.) 8. When was your vulnerability/security assessment completed or most recently updated, or when is it scheduled to be completed or updated? Responses ranged from April 2001 to July 2006. Most indicated assessments were completed, updated, or scheduled to be completed or updated in 2003. 9. 
Were the following factors important in deciding to conduct this vulnerability/security assessment for your facility? Check one in each row. a. Required by state government b. Required by local government or c. Required by utility governing d. Required by facility insurance drinking water facility, as required g. Conducted assessment on facility's 10. What other factors, if any, were important in deciding to conduct this vulnerability/security assessment at your facility? (Click in the box and enter your response. Leave blank if all important factors are listed above.) We did not summarize the narrative responses to this question for inclusion in this appendix. 11. Who conducted or is conducting the vulnerability/security assessment for your wastewater treatment facility? Check one in each row. c. City or county staff (e.g. alarm company personnel) e. Other (Please specify in 12 below.) 12. (If you checked yes for "Other" in 11) Who else is involved in conducting the vulnerability/security assessment at your facility? We did not summarize the narrative responses to this question for inclusion in this appendix. 13. What vulnerability/security assessment tool, if any, did your wastewater facility use? Check one in each row. Tool (VSAT) b. Risk Assessment Methodology for Water (RAM-W, developed by Sandia National Laboratories) Management System (SEMS, developed by the National Rural Water Association) d. County or city developed its own e. Utility developed its own g. Other (Please specify below.) 14. (If "Other") What other vulnerability/security assessment tool did your wastewater facility use? We did not summarize the narrative responses to this question for inclusion in this appendix. 15. As a result of the vulnerability/security assessment, what were the 3 most significant measures (activities, changes and improvements) that have been completed to improve security at your wastewater treatment facility? 
We did not summarize the narrative responses to this question for inclusion in this appendix. 16. In which of the following time frames, if any, do you plan to update your vulnerability/security assessment? Check only one answer. 1. No plans to update (GO TO QUESTION 27.) 2. Continuous process (GO TO QUESTION 27.) 3. In about 1 year (GO TO QUESTION 27.) 4. In about 2 years (GO TO QUESTION 27.) 5. In about 3 years (GO TO QUESTION 27.) 6. In the next 4 years or more (GO TO QUESTION 27.) 7. Plan to update, but no time frame set (GO TO QUESTION 27.) 8. Don’t know/No response (GO TO QUESTION 27.) End of questions for facilities that completed vulnerability assessments/security assessments. GO TO QUESTION 27. 17. Does your facility plan to conduct a vulnerability/security assessment? Check only one answer. 1. Yes (GO TO QUESTION 20.) 2. 3. Don’t know/No response 18. Were the following factors important in your wastewater treatment facility's decision NOT to conduct a vulnerability/security assessment? Check one in each row. a. Steps we have taken are adequate b. Facility and system are not c. Other priorities and limited d. Emergency response plan was updated, and this seemed sufficient to f. Not required to do so 19. What OTHER factors, if any, were important in deciding NOT to conduct a vulnerability/security assessment? We did not summarize the narrative responses to this question for inclusion in this appendix. End of questions for facilities that have decided NOT to complete vulnerability assessments/security assessments. Click on GO TO QUESTION 30. 20. When is your facility's vulnerability/security assessment scheduled to be completed? Responses ranged from November 2005 to December 2007. Most facilities indicated assessments were scheduled to be completed in 2006. 21. Were the following factors important in deciding to conduct this vulnerability/security assessment at your facility? Check one in each row. a. Required by state government b. 
Required by local government or c. Required by utility governing d. Required by facility insurance drinking water facility, as required by the Bioterrorism Act 22. What OTHER factors, if any, were important in deciding to conduct this vulnerability/security assessment at your facility? We did not summarize the narrative responses to this question for inclusion in this appendix. 23. Who will conduct the vulnerability/security assessment for your wastewater treatment facility? Check one in each row. c. City or county staff (e.g. alarm company personnel) e. Other (Please specify below.) 24. (If "Other") Who else will be involved in conducting the vulnerability/security assessment at your facility? N = 0 25. What vulnerability/security assessment tool, if any, will your wastewater facility use? Check one in each row. Tool (VSAT) b. Risk Assessment Methodology for Water (RAM-W, developed by Sandia National Laboratories) Management System (SEMS, developed by the National Rural Water Association) d. County or city developed its own e. Utility developed its own g. Other (Please specify below.) 26. (If "Other") What other vulnerability/security assessment tool will your wastewater facility use? We did not summarize the narrative responses to this question for inclusion in this appendix. 4. Chemical Security Measures 27. Does your wastewater treatment facility use gaseous chlorine to disinfect wastewater? Check only one answer. 1. Yes (GO TO QUESTION 32.) 2. 3. 28. Did your wastewater treatment facility EVER use gaseous chlorine to disinfect wastewater? Check only one answer. 1. 2. No (GO TO QUESTION 31.) 3. Don’t know/No response (GO TO QUESTION 31.) 29. When did your wastewater treatment facility decide to discontinue its use of gaseous chlorine? Check only one answer. Responses ranged from July 1980 to October 2005. Many facilities made the decision to discontinue the use of gaseous chlorine between 1998 and 2001. 30.
How have ANNUAL (as opposed to capital) costs at your wastewater treatment facility been affected by converting to an alternative disinfection process? Check only one answer. 1. 2. Costs have stayed about the same 3. 4. Don’t know/No response 31. What disinfection method does your wastewater treatment facility use? Check only one answer. 1. Sodium hypochlorite (transported to site) (GO TO QUESTION 40.) 2. Sodium hypochlorite (generated onsite) (GO TO QUESTION 40.) 3. Calcium hypochlorite (GO TO QUESTION 40.) 4. Ozone (GO TO QUESTION 40.) 5. Ultraviolet light (GO TO QUESTION 40.) 6. Other (GO TO QUESTION 40.) 7. Don’t know/No response (GO TO QUESTION 40.) End of questions for facilities that do not use gaseous chlorine. GO TO QUESTION 40. 32. How is the chlorine transported and stored? Check only one answer. 1. In 50 - 100 pound cylinders 2. 3. 4. 5. 6. Other (Please specify number below.) 7. Don’t know/No response 33. (If "Other") By what other method is the chlorine transported and stored? We did not summarize the narrative responses to this question for inclusion in this appendix. 34. Which one of the following changes, if any, has your wastewater treatment facility made in how it stores and uses gaseous chlorine and when did it make them? Check one in each row. a. Converted chlorine to 35. (If you indicated "Yes" for "Made other physical security improvements" above) What other physical security improvements to the gaseous chlorine storage area did your facility make? We did not summarize the narrative responses to this question for inclusion in this appendix. 36. Does your wastewater treatment facility PLAN to stop using gaseous chlorine? Check only one answer. 1. 2. No (GO TO QUESTION 40.) 3. Don’t know/No response (GO TO QUESTION 40.) 37. When does your wastewater treatment facility PLAN to stop using gaseous chlorine? Responses ranged from November 2005 to January 2015. 
Most indicated that their facilities planned to discontinue their use of gaseous chlorine in 2006 and 2007. 38. How do you expect ANNUAL (as opposed to capital) costs at your wastewater treatment facility to be affected by converting to an alternative disinfection process? Check only one answer. 1. 2. Costs will stay about the same 3. 4. Don’t know/No response 39. What disinfection method does your wastewater treatment facility plan to use? Check only one answer. 1. Sodium hypochlorite (transported to site) 2. Sodium hypochlorite (generated onsite) 3. 4. 5. 6. 7. Don’t know/No response 40. What OTHER chemicals, if any, are currently at your wastewater treatment facility, or will be in the future, that have the potential to cause significant harm and damage if used for terrorist activity? We did not summarize the narrative responses to this question for inclusion in this appendix. 41. For each chemical identified in 40, if any, please indicate its use at your wastewater treatment facility. We did not summarize the narrative responses to this question for inclusion in this appendix. 5. Physical Security Measures 42. For each of the following physical security measures that your wastewater treatment facility may have, was it completed before September 11, 2001, completed after September 11, 2001, is it planned but not yet completed, or is it NOT planned? Check one in each row. power sources operation (e.g. smart pipes) (LEL) meters or 43. What OTHER physical security improvements, that you consider significant, have been installed or are planned for your wastewater treatment facility and associated infrastructure? Be sure to include any specific changes that your wastewater facility has made to improve physical security in your COLLECTION SYSTEM, if they are not listed above. We did not summarize the narrative responses to this question for inclusion in this appendix. 6. Personnel Security Measures 44. 
For each of the following personnel security measures that your wastewater treatment facility may have, if any, was it completed before September 11, 2001, completed after September 11, 2001, is it planned but not yet completed, or is it NOT planned? Check one in each row. visitors, etc.) are identification visitors, etc.) may 45. What OTHER personnel security improvements, that you consider significant, have been installed or are planned for your wastewater treatment facility and associated infrastructure? We did not summarize the narrative responses to this question for inclusion in this appendix. 7. Information Technology (IT) Security Measures 46. For each of the following Information Technology (IT) security measures that your wastewater treatment facility may have, was it completed before September 11, 2001, completed after September 11, 2001, is it planned but not yet completed, or is it NOT planned? Check one in each row. b. Network protection, such as a firewall, an 47. What OTHER IT security improvements, that you consider significant, have been installed or are planned for your wastewater treatment facility and associated infrastructure? We did not summarize the narrative responses to this question for inclusion in this appendix. 48. Does your facility have a designated individual to oversee internal security initiatives at your wastewater treatment facility? Check only one answer. 1. 2. No (GO TO QUESTION 50.) 3. Don’t know/No response (GO TO QUESTION 50.) 49. Does this person oversee efforts to address security that require coordination with other organizations and governments? Check only one answer. 1. 2. 3. Don’t know/No response 50. Does your wastewater treatment facility have periodic (annually, semi annually or more often) contact about security planning with the following? Check one in each row. b. Police and fire departments Committee (LEPC) DK/NR = 23 e. Industry organizations, e.g. 
Water Environment Federation (WEF), National Association of Clean Water Agencies (NACWA) g. Hazardous material storage facilities h. State organizations and agencies with oversight for wastewater operations and security (Please specify below.) i. Federal organizations and agencies with oversight for wastewater operations and security (Please specify below.) 51. (If you indicated "yes" for "State organizations and agencies" above) Which state organizations and agencies with oversight for wastewater operations and security does your facility have periodic contact about security planning? We did not summarize the narrative responses to this question for inclusion in this appendix. 52. (If you indicated "yes" for "Federal organizations and agencies" above) Which federal organizations and agencies with oversight for wastewater operations and security does your facility have periodic contact about security planning? We did not summarize the narrative responses to this question for inclusion in this appendix. 53. Which of the following security coordination activities has your wastewater treatment facility implemented or participated in? DK/NR = 36 k. Developed an emergency notification l. Participate in an emergency facilities in service area, with m. Participated in a security tabletop o. Developed training materials p. Created or are part of a mutual aid q. Met with FBI Field Office to discuss protocols and other security 54. What role, if any, do you think the federal government should play in COORDINATING security initiatives at wastewater treatment facilities and their associated infrastructures? We did not summarize the narrative responses to this question for inclusion in this appendix. 9. Conclusions and Additional Comments 55. What are the 3 most significant security measures that have been put in place at your wastewater treatment facility and associated infrastructure? We did not summarize the narrative responses to this question for inclusion in this appendix.
56. Of those security measures that you have PLANNED, BUT HAVE NOT YET PUT IN PLACE, which 3 measures do you anticipate will be the most useful to improving security at your wastewater treatment facility and associated infrastructure? We did not summarize the narrative responses to this question for inclusion in this appendix. 57. What 3 security measures, if any, would you put in place if you were free to implement any measures you viewed as potentially useful to improving security at your facility and associated infrastructure? We did not summarize the narrative responses to this question for inclusion in this appendix. 58. For the security measures you listed in 57 above, were any of the following factors important in your facility not yet implementing these measures? (Check one in each row. Leave blank if you did not list any security measures in 57.) a. Not enough time c. Other security priorities more critical d. Technology needed not readily e. Necessary agreements not in place f. Other (Please specify below.) 59. (If you checked "yes" for Other) What other factors were important in your facility not yet implementing these measures? We did not summarize the narrative responses to this question for inclusion in this appendix. 60. What do you consider to be the major challenges to reducing vulnerabilities to the wastewater treatment collection systems' network of sanitary, storm, and combined sewer lines? We did not summarize the narrative responses to this question for inclusion in this appendix. 61. What, if any, are the most innovative or effective practices that address these challenges to reducing vulnerabilities to the wastewater treatment collection system? We did not summarize the narrative responses to this question for inclusion in this appendix. 62. What are the most important things, if any, the federal government could do to improve the security at your facility and other wastewater treatment facilities nationwide?
We did not summarize the narrative responses to this question for inclusion in this appendix. 63. If you would like to provide additional comments concerning security at your wastewater treatment facility, specifically, and/or security at wastewater treatment facilities in general, please provide them in the space below. We did not summarize the narrative responses to this question for inclusion in this appendix. 64. Have you finished this questionnaire? Check only one answer. 1. 2. In addition to the contact named above, Nancy Bowser, Jenny Chanley, Steve Elstein, Greg Marchand, Tim Minelli, Cynthia Norris, Jerry Sandau, Rebecca Spithill, and Monica Wolford made key contributions to this report.
Wastewater facilities provide essential services to residential, commercial, and industrial users, yet they may possess certain characteristics that terrorists could exploit to impair the wastewater treatment process or to damage surrounding infrastructure. For example, large underground collector sewers could be accessed by terrorists for purposes of placing destructive devices beneath buildings or city streets. GAO was asked to determine (1) what federal statutory authorities and directives govern the protection of wastewater treatment facilities from terrorist attack, (2) what steps critical wastewater facilities have taken since the terrorist attacks of September 11, 2001, (9/11) to ensure that potential vulnerabilities are addressed, and (3) what steps the Environmental Protection Agency (EPA) and the Department of Homeland Security (DHS) have taken to help these facilities in their efforts to address such vulnerabilities. Federal law does not address wastewater security as comprehensively as it does drinking water security. For example, the Public Health Security and Bioterrorism Preparedness and Response Act of 2002 required drinking water facilities serving populations greater than 3,300 to complete vulnerability assessments, but no such requirement exists for wastewater facilities. While federal law governing wastewater security is limited, Homeland Security Presidential Directive 7 designated EPA as the lead agency to oversee the security of the water sector, including both drinking water and wastewater. The directive tasked EPA with several responsibilities, including the development of mechanisms for information sharing and analysis within the water sector. Our survey of over 200 of the nation's large wastewater facilities shows that many have made security improvements since 9/11. Most facilities indicated they have completed, have under way, or plan to complete some type of security assessment. 
Similarly, more than half of responding facilities indicated they did not use potentially dangerous gaseous chlorine as a wastewater disinfectant. Survey responses show that other security measures taken after 9/11 have generally focused on controlling access to the treatment plant through improvements in visual surveillance, security lighting, and employee and visitor identification. Little effort, however, has been made to address collection system vulnerabilities, as many facilities cited the technical complexity and expense involved in securing collection systems that cover large areas and have many access points. Others reported that taking other measures, such as converting from gaseous chlorine, took priority over collection system protections. While EPA and DHS have initiatives to address wastewater facility security, efforts to provide critical and threat-related information would benefit from closer coordination. EPA and DHS fund multiple information services designed to communicate information to the water sector--specifically, EPA funds the Water Information Sharing and Analysis Center (WaterISAC) and its Water Security Channel, while DHS funds the Homeland Security Information Network (HSIN). EPA, DHS, and other industry experts are concerned that these multiple information services may overlap and produce inefficiencies. For example, a substantial part of the $2 million annual grant EPA uses to fund the WaterISAC is dedicated to purchasing computer services likely available through DHS and HSIN at no cost. A Water Sector Coordinating Council was established by the water sector to help determine the appropriate relationship among these information services. A preliminary review is under way to examine options for improving coordination among the WaterISAC, the Water Security Channel, and HSIN; however, the scope and time frame for completion of this review are unclear.
Energy oversees a nationwide network of 40 contractor-operated industrial sites and research laboratories that have historically employed more than 600,000 workers in the production and testing of nuclear weapons. In implementing EEOICPA, the President acknowledged that it had been Energy’s past policy to encourage and assist its contractors in opposing workers’ claims for state workers’ compensation benefits based on illnesses said to be caused by exposure to toxic substances at Energy facilities. Under the new law, workers or their survivors could apply for assistance from Energy in pursuing state workers’ compensation benefits, and if they received a positive determination from Energy, the agency would direct its contractors to not contest the workers’ compensation claims or awards. Energy’s rules to implement the new program became effective in September 2002, and the agency began to process the applications it had been accepting since July 2001, when the law took effect. Energy’s claims process has several steps, as shown in figure 1. First, claimants file applications and provide all available medical evidence. Energy then develops the claims by requesting records of employment, medical treatment, and exposure to toxic substances from the Energy facilities at which the workers were employed. If Energy determines that the worker was not employed by one of its facilities or did not have an illness that could be caused by exposure to toxic substances, the agency finds the claimant ineligible. For all others, once development is complete, a panel of three physicians reviews the case and decides whether exposure to a toxic substance during employment at an Energy facility was at least as likely as not to have caused, contributed to, or aggravated the claimed medical condition. The panel physicians are appointed by the National Institute for Occupational Safety and Health (NIOSH) but paid by Energy for this work. 
Claimants receiving positive determinations are advised that they may wish to file claims for state workers’ compensation benefits. Claimants found ineligible or receiving negative determinations may appeal to Energy’s Office of Hearings and Appeals. Each of the 50 states and the District of Columbia has its own workers’ compensation program to provide benefits to workers who are injured on the job or contract a work-related illness. Benefits include medical treatment and cash payments that partially replace lost wages. Collectively, these state programs paid more than $46 billion in cash and medical benefits in 2001. In general, employers finance workers’ compensation programs. Depending on state law, employers finance these programs through one of three methods: (1) they pay insurance premiums to a private insurance carrier, (2) they contribute to a state workers’ compensation fund, or (3) they set funds aside for this purpose as self- insurance. Although state workers’ compensation laws were enacted in part as an attempt to avoid litigation over workplace accidents, the workers’ compensation process is still generally adversarial, with employers and their insurers tending to challenge aspects of claims that they consider not valid. State workers’ compensation programs vary as to the level of benefits, length of payments, and time limits for filing. For example, in 1999, the maximum weekly benefit for a total disability in New Mexico was less than $400, while in Iowa it was approximately $950. In addition, in Idaho, the weekly benefit for total disability would be reduced after 52 weeks, while in Iowa benefits would continue at the original rate for the duration of the disability. Further, in Tennessee, a claim must be filed within 1 year of the beginning of incapacity or death. However, in Kentucky a claim must be filed within 3 years of exposure to most substances, but within 20 years of exposure to radiation or asbestos. 
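The state-by-state filing limits described above can be sketched as a simple timeliness check. This is an illustrative simplification only: the function, its rule encodings, and the example dates are hypothetical, and real state statutes contain many conditions (discovery rules, tolling, occupational disease provisions) that this sketch ignores.

```python
from datetime import date

def years_between(start, end):
    """Approximate elapsed years between two dates."""
    return (end - start).days / 365.25

def claim_timely(state, filed, incapacity=None, exposure=None, substance=None):
    """Hypothetical check of the filing limits described in the text:
    Tennessee: within 1 year of the beginning of incapacity;
    Kentucky: within 3 years of exposure for most substances,
    but within 20 years for radiation or asbestos."""
    if state == "TN":
        return years_between(incapacity, filed) <= 1
    if state == "KY":
        limit = 20 if substance in ("radiation", "asbestos") else 3
        return years_between(exposure, filed) <= limit
    raise ValueError("state not modeled in this sketch")

# A claim filed 10 years after exposure is timely in Kentucky only under
# the longer radiation/asbestos limit.
print(claim_timely("KY", filed=date(2003, 6, 30),
                   exposure=date(1993, 6, 30), substance="asbestos"))  # prints True
print(claim_timely("KY", filed=date(2003, 6, 30),
                   exposure=date(1993, 6, 30), substance="benzene"))   # prints False
```

The sketch shows why the same worker's claim could be viable in one state and time-barred in another, which bears directly on whether a willing payer ultimately results in compensation.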
As of June 30, 2003, Energy had completely processed about 6 percent of the nearly 19,000 cases that had been filed, and the majority of all cases filed were associated with facilities in nine states. Forty percent of cases were in processing, but more than 50 percent remained unprocessed. While some case characteristics can be determined, such as illness claimed, systems limitations prevent reporting on other case characteristics, such as the reasons for ineligibility or basic demographics. During the first 2 years of the program, ending June 30, 2003, Energy had fully processed about 6 percent of the nearly 19,000 claims it received. The majority of these claims had been found ineligible because of a lack of either employment at an eligible facility or an illness related to toxic exposure. Of the cases that had been fully processed, 42 cases—less than one-third of 1 percent of the nearly 19,000 cases filed—had a final determination from a physician panel. More than two-thirds of these determinations (30 cases) were positive. At the time of our study, Energy had not yet begun processing more than half of the cases, and an additional 40 percent of cases were in processing (see fig. 2). The majority of cases being processed were in the case development stage, where Energy requests information from the facility at which the claimant was employed. Less than 1 percent of cases in process were ready for physician panel review, and an additional 1 percent were undergoing panel review. A majority of cases were filed early during program implementation, but new cases continue to be filed. Nearly two-thirds of cases were filed within the first year of the program, between July 2001 and June 2002. However, in the second year of the program—between July 2002 and June 30, 2003—Energy continued to receive more than 500 cases per month. Energy officials report that they currently receive approximately 100 new cases per week.
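As a rough cross-check, the percentages cited for case processing are consistent with the underlying counts. A short calculation, using the report's rounded figures, illustrates the arithmetic.

```python
# Approximate counts taken from the text; all figures are rounded.
cases_filed = 19_000            # "nearly 19,000 cases"
fully_processed_share = 0.06    # "about 6 percent" fully processed
panel_determinations = 42       # cases with a final physician panel determination
positive_determinations = 30    # positive panel determinations

# Roughly 1,140 cases were fully processed.
print(round(cases_filed * fully_processed_share))                   # prints 1140

# 42 of 19,000 is about 0.22 percent, i.e., less than one-third of 1 percent.
print(round(100 * panel_determinations / cases_filed, 2))           # prints 0.22

# 30 of 42 is about 71 percent, i.e., more than two-thirds.
print(round(100 * positive_determinations / panel_determinations))  # prints 71
```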
While cases filed are associated with facilities in 38 states or territories, the majority of cases are associated with Energy facilities in nine states (see fig. 3). Facilities in Colorado, Idaho, Iowa, Kentucky, New Mexico, Ohio, South Carolina, Tennessee, and Washington account for more than 75 percent of cases received by June 30, 2003. The largest group of cases is associated with facilities in Tennessee. Workers filed the majority of cases, and cancer is the most frequently reported illness. Workers filed about 60 percent of cases, and survivors of deceased workers filed about 36 percent of cases. In about 1 percent of cases, a worker filed a claim that was subsequently taken up by a survivor. Cancer is the illness reported in more than half of the cases. Diseases affecting the lungs accounted for an additional 14 percent of cases. Specifically, chronic beryllium disease is reported in 1 percent of cases, and beryllium sensitivity, which may develop into chronic beryllium disease, is reported in an additional 5 percent. About 7 percent of cases reported asbestosis, and less than 1 percent claimed silicosis. Systems limitations prevent Energy officials from aggregating certain information important for program management. For example, the case management system does not collect information on the reasons that claimants had been declared ineligible or whether claimants have appealed decisions. Systematic tracking of the reasons for ineligibility would make it possible to identify other cases affected by appeal decisions that result in policy changes. While Energy officials report that during the major systems changes that occurred in July 2003, fields were added to the system to track appeals information, no information is yet available regarding ineligibility decisions. In addition, basic demographic data such as age and gender of claimants are not available. Gender information was not collected for the majority of cases. 
Further, insufficient edit controls—for example, error checking that would prevent claimants’ dates of birth from being entered if the date was in the future—prevent accurate reporting on claimants’ ages.

Insufficient strategic planning regarding data collection and tracking has made it difficult for Energy officials to completely track case progress and determine whether they are meeting the goals they have established for case processing. For example, Energy established a goal of completing case development within 120 days of case assignment to a case manager. However, the data system developed by contractors to aid in case management was developed without detailed specifications from Energy and did not originally collect sufficient information to track Energy’s progress in meeting this 120-day goal. Furthermore, status tracking has been complicated by changes to the system and failure to consistently update status as cases progress. While Energy reports that changes made as of July 2003 should allow for improved tracking of case status, it is unclear whether these changes will be applied retroactively to status data already in the system. If they are not, Energy will still lack complete data regarding case-processing milestones achieved prior to these changes.

Our analysis shows that a majority of cases associated with major Energy facilities in nine states will potentially have a willing payer of workers’ compensation benefits. This finding reflects the number of cases for which contractors and their insurers are likely not to contest a workers’ compensation claim, rather than the number of cases that will ultimately be paid. The contractors considered to be willing payers are those that have an order from, or agreement with, Energy to not contest claims. However, there are likely to be many claimants who will not have a willing payer in certain states, such as Ohio and Iowa.
For all claimants, additional factors such as state workers’ compensation provisions or contractors’ uncertainty on how to compute the benefit may affect whether or how much compensation is paid.

A majority of cases in nine states will potentially have a willing payer of workers’ compensation benefits, assuming that for all cases there has been a positive physician panel determination and the claimant can demonstrate a loss from the worker’s illness that has not previously been compensated. Specifically, based on our analysis of workers’ compensation programs and the different types of workers’ compensation coverage used by the major contractors, it appears that approximately 86 percent of these cases will potentially have a willing payer—that is, contractors and their insurers who will not contest the claims for benefits. It was necessary to assume that all cases filed would receive a positive determination by a physician panel because sufficient data are not available to project the outcomes of the physician panel process. More specifically, there are indications that the few cases that have received determinations from physician panels may not be representative of all cases filed, and sufficient details on workers’ medical conditions were not available to enable us to independently judge the potential outcomes. In addition, we assumed that all workers experienced a loss that was not previously compensated because sufficient data were not available to enable us to make more detailed projections on this issue.

As shown in table 1, most of the contractors for the major facilities in these states are self-insured, which enables Energy to direct them to not contest claims that receive a positive medical determination. In addition, the contractor in Colorado, which is not self-insured but has a commercial policy, took the initiative to enter into an agreement with Energy to not contest claims.
The contractor viewed this action as being in its best interest to help the program run smoothly. However, it is unclear whether the arrangement will be effective because no cases in Colorado have yet received compensation. In situations where there is a willing payer, the contractor’s action to pay the compensation consistent with Energy’s order to not contest a claim will override state workers’ compensation provisions that might otherwise result in denial of a claim, such as failure to file a claim within a specified period of time. However, since no claimants to date have received compensation as a result of their cases filed with Energy, there is no actual experience with how contractors and state workers’ compensation programs treat such cases.

About 14 percent of cases in the nine states we analyzed may not have a willing payer. Therefore, in some instances these cases may be less likely to receive compensation than a comparable case for which there is a willing payer, unless the claimant is able to overcome challenges to the claim. Specifically, these cases that lack willing payers involve contractors that (1) have a commercial insurance policy, (2) use a state fund to pay workers’ compensation claims, or (3) do not have a current contract with Energy. In each of these situations, Energy maintains that it lacks the authority to make or enforce an order to not contest claims. For instance, an Ohio Bureau of Workers’ Compensation official said that the state would not automatically approve a case, but would evaluate each workers’ compensation case carefully to ensure that it was valid and thereby protect its state fund.

Concerns about the extent to which there will be willing payers of benefits have led to various proposals for addressing this issue. For example, the state of Ohio proposed that Energy designate the state as a contractor to provide a mechanism for reimbursing the state for paying the workers’ compensation claims.
However, Energy rejected this proposal on the ground that EEOICPA does not authorize the agency to establish such an arrangement. In a more wide-ranging proposal, legislation introduced in this Congress proposes to establish Subtitle D as a federal program with uniform benefits administered by the Department of Labor.

In contrast to Subtitle B provisions that provide for a uniform federal benefit that is not affected by the degree of disability, various factors may affect whether a Subtitle D claimant is paid under the state workers’ compensation program or how much compensation will be paid. Beyond the differences in the state programs that may result in varying amounts and lengths of payments, these factors include the demonstration of a loss resulting from the illness and contractors’ uncertainty about how to compute compensation. Even with a positive determination from a physician panel and a willing payer, claimants who cannot demonstrate a loss, such as loss of wages or medical expenses, may not qualify for compensation. On the other hand, claimants with positive determinations but no willing payer may still qualify for compensation under the state program if they show a loss and can overcome all challenges to the claim raised by the employer or the insurer.

Contractors’ uncertainty about how to compute compensation may also cause variation in whether or how much a claimant will receive in compensation. While contractors with self-insurance told us that they plan to comply with Energy’s directives to not contest cases with positive determinations, some contractors were unclear about how to actually determine the amount of compensation that a claimant will receive. For example, one contractor raised a concern that no guidance exists to inform contractors about whether they can negotiate the degree of disability, a factor that could affect the amount of the workers’ compensation benefit.
Other contractors will likely experience similar situations, as Energy has not issued guidance on how to consistently compute compensation amounts.

While not directly affecting compensation amounts, a related issue involves how contractors will be reimbursed for claims they pay. Energy uses several different types of contracts to carry out its mission, such as operations or cleanup, and these different types of contracts affect how workers’ compensation claims will be paid. For example, a contractor responsible for managing and operating an Energy facility was told to pay the workers’ compensation claims from its operating budget. The contractor said that this procedure may compromise its ability to conduct its primary responsibilities. On the other hand, a contractor cleaning up an Energy facility was told by Energy officials that its workers’ compensation claims would be reimbursed under its contract, and therefore paying claims would not affect its ability to perform cleanup of the site.

As a result of Energy’s policies and procedures for processing claims, claimants have experienced lengthy delays in receiving the determinations they need to file workers’ compensation claims. In particular, the number of cases developed during initial case processing has not always been sufficient to allow the physician panels to operate at full capacity. Moreover, even if these panels were operating at full capacity, the small pool of physicians qualified to serve on the panels would limit the agency’s ability to produce more timely determinations. Energy has recently allocated more funds for staffing for case processing, but it is still exploring methods for improving the efficiency of its physician panel process.

Energy’s case development process has not consistently produced enough cases to ensure that the physician panels are functioning at full capacity.
To make efficient use of physician panel resources, it is important to ensure that a sufficient supply of cases is ready for physician panel review. Energy officials established a goal of completing the development of 100 cases per week by August 2003 to keep the panels fully engaged. However, as of September 2003, Energy officials stated that the agency was completing development of only about 40 cases a week. Further, while agency officials indicated that they typically assigned 3 cases at a time to be reviewed within 30 days, several panel physicians indicated that they received fewer cases, some receiving a total of only 7 or 8 during their first year as panelists.

Energy was slow to implement its case development operation. Initially, agency officials did not have a plan to hire a specific number of employees for case development, but they expected to hire additional staff as they were needed. When Energy first began developing cases, in the fall of 2002, the case development process had a staff of about 14 case managers and assistants. With modest staffing increases, the program quickly outgrew the office space used for this function. Though Energy officials acknowledged the need for more personnel by spring 2003, they delayed hiring until additional space could be secured in August 2003. As of August 2003, Energy had more than tripled the number of employees dedicated to case development to about 50, and Energy officials believe that they will now be able to achieve their goal of completing development of 100 cases a week that will be ready for physician panel review. Energy officials cited a substantial increase in the number of cases ready for physician panel review during October 2003, and reported preparing more than a hundred cases for panel review in the first week of November 2003.
Energy shifted nearly $10 million from other Energy accounts into this program in fiscal year 2003, and plans to shift an additional $33 million into the program in fiscal year 2004, to quadruple its case-processing operation. With additional resources, Energy plans to complete the development of all pending cases as quickly as possible and have them ready for the physician panels. However, this would create a large backlog of cases awaiting review by physician panels. Because most claims filed so far are from workers whose medical conditions are likely to change over time, creation of such a backlog could further slow the decision process by making it necessary to update medical records before panel review.

Even if additional resources allow Energy to speed initial case development, the limited pool of qualified physicians for panels will likely prevent significant improvements in processing time. Currently, approximately 100 physicians are assigned to panels of 3 physicians. In an effort to improve overall processing time, Energy has requested that NIOSH appoint an additional 500 physicians to staff the panels. NIOSH has indicated that the pool of physicians with the appropriate credentials and experience (including those already appointed) may be limited to about 200. Even if Energy were able to increase the number of panel physicians to 200, with each panel reviewing 3 cases a month, the panels would not be able to review more than 200 cases in any 30-day period, given current procedures. Thus, even with double the number of physicians currently serving on panels, it could take more than 7 years to process all cases pending as of June 30, 2003, without consideration of the hundreds of new cases the agency is receiving each month.

Energy officials are exploring ways that the panel process could be made more efficient. For example, the agency is currently planning to establish permanent physician panels in Washington, DC.
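The 7-year processing estimate above can be checked with simple arithmetic. The sketch below uses only the figures cited in the testimony (200 physicians, 3-physician panels, 3 cases per panel per month); the pending-case count is an assumption derived from "nearly 19,000 cases filed, about 6 percent fully processed," not a figure from the source.

```python
# Back-of-the-envelope check of the physician panel capacity estimate.
physicians = 200            # hypothetical expanded pool suggested by NIOSH
panel_size = 3              # physicians per panel
cases_per_panel_month = 3   # each panel reviews 3 cases per 30-day period

panels = physicians // panel_size                  # 66 panels
monthly_capacity = panels * cases_per_panel_month  # just under 200 cases/month

# Assumed backlog: nearly 19,000 cases filed minus the ~6 percent decided.
pending = 19_000 - round(0.06 * 19_000)

years = pending / monthly_capacity / 12
print(f"{panels} panels decide about {monthly_capacity} cases per month")
print(f"clearing ~{pending:,} pending cases takes roughly {years:.1f} years")
```

At roughly 7.5 years, the result is consistent with the "more than 7 years" figure, and it excludes the hundreds of new cases arriving each month.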
Physicians who are willing to serve full-time for a 2- or 3-week period would staff these panels. In addition, the agency is considering reducing the number of physicians serving on each panel—for example, initially using one physician to review a case, assigning a second physician only if the first reaches a negative determination, and assigning a third physician if needed to break a tie. Energy staff are currently evaluating whether such a change would require a change in their regulations. Agency officials have also recommended additional sources from which NIOSH might recruit qualified physicians and are exploring other potential sources. For example, physicians in the military services might be used on a part-time basis. In addition, physicians from the Public Health Service serve on temporary full-time details as panel physicians.

Panel physicians have also suggested methods to Energy for improving the efficiency of the panels. For example, some physicians have stated that more complete profiles of the types and locations of specific toxic substances at each facility would speed their ability to decide cases. In addition, one panel physician told us that one of the cases he reviewed received a negative determination because specific documentation of toxic substances at the worker’s location was lacking. While Energy officials reported that they have completed facility overviews for about half the major sites, specific data are available for only a few sites. Agency officials said that the scarcity of records related to toxic substances and a lack of sufficient resources constrain their ability to pursue building-by-building profiles for each facility.

Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other Members of the Committee may have at this time.

For information regarding this testimony, please contact Robert E.
Robertson, Director, or Andrew Sherrill, Assistant Director, Education, Workforce, and Income Security, at (202) 512-7215. Individuals making contributions to this testimony include Amy E. Buck, Melinda L. Cordero, Beverly Crawford, Patrick DiBattista, Corinna A. Nicolaou, Mary Nugent, and Rosemary Torres Lerma. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Department of Energy (Energy) and its predecessor agencies and contractors have employed thousands of workers in the nuclear weapons production complex. Some employees were exposed to toxic substances, including radioactive and hazardous materials, during this work and many subsequently developed illnesses. Subtitle D of the Energy Employees Occupational Illness Compensation Program Act of 2000 allows Energy to help its contractor employees file state workers' compensation claims for illnesses determined by a panel of physicians to be caused by exposure to toxic substances in the course of employment at an Energy facility. Energy began accepting applications under this program in July 2001, but did not begin processing them until its final regulations became effective on September 13, 2002. The Congress mandated that GAO study the effectiveness of the benefit program under Subtitle D of this Act. This testimony is based on GAO's ongoing work on this issue and focuses on three key areas: (1) the number, status, and characteristics of claims filed with Energy; (2) the extent to which there will be a "willing payer" of workers' compensation benefits, that is, an insurer who--by order from, or agreement with, Energy--will not contest these claims; and (3) the extent to which Energy policies and procedures help employees file timely claims for these state benefits.

As of June 30, 2003, Energy had completely processed only about 6 percent of the nearly 19,000 cases it had received. More than three-quarters of all cases were associated with facilities in nine states. Processing had not begun on over half of the cases and, of the remaining 40 percent of cases that were in processing, almost all were in the initial case development stage. While the majority of cases (86 percent) associated with major Energy facilities in nine states potentially have a willing payer of workers' compensation benefits, actual compensation is not certain.
This figure is based primarily on the method of workers' compensation coverage used by Energy contractor employers and is not an estimate of the number of cases that will ultimately be paid. Since no claimants to date have received compensation as a result of their cases filed with Energy, there is no actual experience with how contractors and state programs treat such claims.

Claimants have been delayed in filing for state workers' compensation benefits because of two bottlenecks in Energy's claims process. First, the case development process has not always produced sufficient cases to allow the panels of physicians who determine whether the worker's illness was caused by exposure to toxic substances to operate at full capacity. While additional resources may allow Energy to move sufficient cases through its case development process, the physician panel process will continue to be a second, more important, bottleneck. The number of panels, constrained by the scarcity of physicians qualified to serve on panels, will limit Energy's capacity to decide cases more quickly using its current procedures. Energy officials are exploring ways that the panel process could be made more efficient.
The Army has divided nonstandard equipment into two broad categories:

- Nontactical nonstandard equipment, which consists primarily of durable goods that are used to provide services for soldiers as well as foreign governments. This equipment includes but is not limited to fire trucks and ambulances, as well as equipment used for laundry and food service. Most of this equipment has been acquired through the Logistics Civil Augmentation Program (LOGCAP) and is managed and sustained by contractors under the LOGCAP contract (hereinafter referred to as contractor-managed, government-owned property).

- Tactical nonstandard equipment, which is commercially acquired or nondevelopmental equipment that is rapidly acquired and fielded outside the normal Planning, Programming, Budgeting, and Execution System and acquisition processes, in order to bridge capability gaps and meet urgent warfighter needs.

According to Army documents, as of March 2011, 36.5 percent of all Army equipment in Iraq was contractor-managed, government-owned property, with a value of approximately $2.5 billion. Furthermore, as of March 2011 an additional 10.7 percent of Army equipment in Iraq, valued at approximately $1.6 billion, was categorized as nonstandard equipment. According to Army officials, all equipment—standard and nonstandard—must be out of Iraq by December 31, 2011.

We have reported on issues related to nonstandard equipment in Iraq in the past. In September 2008 we identified several issues that could affect the development of plans for reposturing U.S. forces from Iraq. One of those issues was that DOD, CENTCOM, and the military services had not clearly established roles and responsibilities for managing and executing the retrograde of standard and nonstandard equipment from Iraq. We also noted that data systems used during the retrograde process were incompatible, and although a fix for the data system incompatibility had been identified, it had not been implemented.
As a result, we recommended that the Secretary of Defense, in consultation with CENTCOM and the military departments, take steps to clarify the chain of command over logistical operations in support of the retrograde effort. We also recommended that the Secretary of Defense, in consultation with the military departments, correct the incompatibility weaknesses in the various data systems used to maintain visibility over equipment and materiel while they are in transit. DOD partially concurred with our first recommendation, and took steps to clarify the chain of command over logistical operations in support of the retrograde effort. DOD fully concurred with our second recommendation, stating that it was actively assessing various data systems used to maintain visibility over equipment and materiel while in transit. Finally, though we made no recommendations on this issue, we noted that maintaining accountability for and managing the disposition of contractor-managed, government-owned property may present challenges to reposturing in Iraq.

In February 2009, in testimony before the Committee on Armed Services of the House of Representatives, we addressed factors that DOD should consider as the United States refines its strategy for Iraq and plans to draw down forces. We then included a section on managing the redeployment of U.S. forces and equipment from Iraq in our March 2009 report on key issues for congressional oversight. In November 2009, in a statement before the Commission on Wartime Contracting in Iraq and Afghanistan, we presented some preliminary observations on DOD’s planning for the drawdown of U.S. forces from Iraq, and in April 2010 issued a report that highlighted actions needed to facilitate the efficient drawdown of U.S. forces and equipment from Iraq.
In our April 2010 report, we noted that DOD had created new organizations to oversee, synchronize, and ensure unity of effort during the drawdown from Iraq, and had established goals and metrics for measuring progress. We also noted that, partly in response to our September 2008 report recommendations, representatives from the Secretary of Defense’s Lean Six Sigma office conducted six reviews to optimize theater logistics, one of which focused on the process for retrograding equipment from Iraq, including disposition instructions. Results from the Lean Six Sigma study influenced the development of a new data system—the Theater Provided Equipment Planner—which is intended to automate the issuance of disposition instructions for theater provided equipment. Complementing the Theater Provided Equipment Planner database was a second database—the Materiel Enterprise Non-Standard Equipment database—which catalogued all types of nonstandard equipment in Iraq in order to provide automated disposition. However, we also noted that officials in Iraq and Kuwait stated that, of all categories of equipment, they had the least visibility over contractor-managed, government-owned property, and that U.S. Army Central Command officials said they had low confidence in the accountability and visibility of nonstandard equipment.

While these reports, testimonies, and statements focused primarily on plans, procedures, and processes within the CENTCOM area of responsibility, especially in Iraq and Kuwait, this report focuses specifically on nonstandard equipment and mine resistant ambush protected (MRAP) vehicles, and primarily on the plans, processes, and procedures that affect their disposition once they leave the CENTCOM area of responsibility.

MRAPs were first fielded in Iraq in May 2006 by the Marine Corps for use in western Iraq. A year later, the Secretary of Defense affirmed the MRAP program as DOD’s most important acquisition program.
As of July 2011, DOD’s acquisition objective was 27,744 MRAPs; according to DOD officials, funding appropriated through fiscal year 2011 is sufficient to cover 27,740. The vast majority of these MRAPs were allocated to the Army for use in Iraq and, increasingly, in Afghanistan. According to Joint Program MRAP statistics, as of February 2011, MRAPs had been involved in approximately 3,000 improvised explosive device events and have saved thousands of lives.

We have also reported on MRAPs in the past. In October 2009, we reported positively on the quick action taken by the Secretary of Defense to declare the MRAP program DOD’s highest priority. However, we also noted as key challenges that long-term sustainment costs for MRAPs had not yet been projected and budgeted and that the services were still deciding how to incorporate MRAPs into their organizational structures. In November 2009, in a statement before the Commission on Wartime Contracting in Iraq and Afghanistan, we noted that although the Army had not yet finalized servicewide requirements for its MRAPs, it had designated Red River Army Depot as the depot that would repair MRAPs, and had issued a message directing the shipment of 200 MRAPs from Kuwait to Red River Army Depot as part of an MRAP Reset Repair Pilot Program. However, we also noted that as of October 2009, there were approximately 800 MRAPs in Kuwait awaiting transportation to the United States. In April 2010 we noted that the Army’s strategy for incorporating MRAPs into its ground vehicle fleet was still pending final approval.

As part of the Iraq drawdown effort, excess nonstandard equipment that is no longer needed in Iraq is either redistributed in the CENTCOM theater, disposed of, provided to other nations through foreign military sales, or packaged for retrograde to a variety of Defense Logistics Agency Distribution Depots or to Sierra Army Depot in the United States.
According to Army Materiel Command, the majority of the excess nontactical nonstandard equipment is sent to Sierra Army Depot. According to officials at Sierra Army Depot, as of April 2011 the depot had received a total of 22,507 pieces of nontactical nonstandard equipment worth over $114.9 million, and still had on hand approximately 13,200 items worth more than $75 million. Smaller items, which are stored in a warehouse, include desktop computers, computer monitors, printers, laptop computers, handheld palm computers, distress beacons, night vision goggles, rifle scopes, laser sights, radios, and radio frequency amplifiers. Larger items, which are stored outside, include all-terrain vehicles, generators, tractors, fire suppression systems, large refrigerators, and light sets.

Once the items are received at Sierra Army Depot, they are removed from their containers, inventoried, evaluated for serviceability, catalogued, and placed in the appropriate location in the warehouse or, if they are larger items, in the appropriate outside storage location. As the items are catalogued, they are simultaneously recorded in Sierra Army Depot’s property book for accountability.

According to guidance issued by Headquarters, Department of the Army, Army Materiel Command is to provide Army Commands, Army Service Component Commands, and Army Direct Reporting Units access to the inventory of nontactical nonstandard equipment stored at depots such as Sierra Army Depot through the Materiel Enterprise Non-Standard Equipment database; the guidance also discusses use of the depot property book to view available nonstandard equipment. Using these means to view what is on hand at Sierra Army Depot, units can request items from Army Materiel Command, which will then process the request and coordinate for shipment to the requesting unit.
In January 2011, Army Materiel Command introduced another means by which units can requisition nontactical nonstandard equipment from Army Materiel Command. Called the “virtual mall,” this tool uses the Materiel Enterprise Non-Standard Equipment database as a means by which units can both view items at Sierra and other Army depots and request them for their use.

According to Sierra Army Depot records, as of April 2011 it had shipped more than 7,600 individual pieces of nontactical nonstandard equipment, with a total value exceeding $29 million, to various Army organizations. According to Sierra Army Depot officials, its single largest customer in terms of number of items shipped is U.S. Army Installation Management Command (a Direct Reporting Unit), which, as of April 2011, had received almost 1,800 items of nontactical nonstandard equipment from the depot, including computers, computer monitors, radios, “jaws of life,” cameras, generators, metal detectors, and binoculars. All equipment shipped from Sierra Army Depot is in “as is” condition. Receiving units are responsible for shipping costs and for any sustainment funding.

As shown in table 1 above, Army units are not the only organizations that can requisition excess nontactical nonstandard equipment. If an item of nontactical nonstandard equipment has not already been requisitioned by the Army or other federal agencies, such as the Department of State, local and state governments may seek to acquire it through the National Association of State Agencies for Surplus Property (NASASP), which accesses it through the General Services Administration (GSA). United States Forces-Iraq makes its excess nontactical nonstandard equipment lists available to GSA and NASASP, which in turn share these lists with state and local governments. Moreover, DOD has facilitated and partially funded the placement of a GSA/NASASP liaison in Kuwait.
This liaison enables state and local governments to make informed decisions about available nontactical nonstandard equipment and coordinates its cleaning, customs clearance, movement, and movement tracking. The only costs incurred by state and local governments for equipment they decide to accept are transportation costs, and DOD has offered GSA/NASASP access to the Defense Transportation System, which provides door-to-door delivery, pricing at the DOD rate, and seamless customs processing. Finally, GSA and NASASP officials are periodically invited to Sierra Army Depot to screen on site excess nontactical nonstandard equipment that they did not have an opportunity to screen in theater.

According to Army documents, as of January 2011 local and state governments had claimed 20 items valued at over $398,000 from Iraq, and, as of April 2011, an additional 256 items valued at almost $6 million from Sierra Army Depot. These items include generators, forklifts, tool kits, bulldozers, light sets, and concrete mixers. As with Army units, excess nontactical nonstandard equipment is shipped in “as is” condition. Moreover, according to Army officials, some excess items, like generators, do not meet U.S. specifications and therefore require modification.

Although Sierra Army Depot has been receiving nontactical nonstandard equipment from Iraq since November 2009, until recently the Army had no guidance as to how long that equipment should be stored before being either redistributed or disposed of. According to Army Materiel Command officials, the potential usefulness of much of the equipment stored at Sierra Army Depot will be lost if items just sit on the shelves.
Moreover, Sierra Army Depot records indicate that, as of April 2011, 59 percent of the nontactical nonstandard equipment received at the depot since November 2009 was still in storage there, while approximately 34 percent was shipped to Army organizations for reuse—$18.7 million to Army installations and bases throughout the world, $6.9 million to the Sierra Army Depot, and $4.2 million to the U.S. Army Installation and Management Command. Of the remaining 7 percent, approximately $6 million was donated to state and local governments and $3.2 million was transferred to disposal. On April 27, 2011, Headquarters, Department of the Army, disseminated a message that updated its processes and procedures for the requisitioning of excess nonstandard equipment stored at selected Army Materiel Command depots. According to this message, the intent is to extend the use of that equipment where appropriate. The message also discusses the use of the “virtual mall” under the Materiel Enterprise Non-Standard Equipment database and Sierra Army Depot’s property book for units to view equipment. The message also states that the intent is that once an item is unserviceable or no longer operational, it can be disposed of through local Defense Logistics Agency Disposition Services. Moreover, the April 2011 message calls for the establishment of an executive forum to review and determine the final disposition of excess nonstandard equipment stored at Sierra Army Depot for more than 180 days that has not been identified for reuse. According to this message, this semiannual review is intended to enable the Army’s effort to apply due diligence in the final disposition of nonstandard equipment. In a follow-up to its April 27 message, Headquarters, Department of the Army, issued another message on June 2, 2011, that outlines the makeup of the executive forum, which met for the first time on June 18, 2011. 
Finally, although neither message states this explicitly, according to a senior official, once a decision is made by the executive forum to dispose of nontactical nonstandard equipment that has been at Sierra Army Depot for more than 180 days, corresponding instructions will be included in the Materiel Enterprise Non-Standard Equipment database to prevent items that have been determined not to have future value or serviceability from being shipped back to the United States. In this way unnecessary transportation costs will be avoided. According to Army documents, in 2004, the Vice Chief of Staff of the Army directed U.S. Army Training and Doctrine Command’s Army Capabilities and Integration Center to identify promising capabilities in use in the CENTCOM theater that, based on their performance, should quickly become enduring programs of record or acquisition programs. Originally called Spiral to the Army, this effort eventually evolved into the Army’s Capabilities Development for Rapid Transition (CDRT) process. The CDRT process enables the Army to identify capabilities, most of which involve tactical nonstandard equipment that has been rapidly fielded, that are performing well in the CENTCOM theater and then to assess whether the capability should be retained in the Army’s current and future force. Developed by the Army Capabilities and Integration Center and the Army G-3/5/7, the CDRT process involves the periodic nomination and evaluation of tactical nonstandard equipment in use in the CENTCOM theater by a CDRT community of interest. This community includes representatives from the Office of the Secretary of Defense, the Joint Staff, various combatant commands, Army commands, Army service component commands, and various Army centers, such as the Army’s armor center, infantry center, and signal center. At present, the CDRT community of interest convenes quarterly to evaluate nominated capabilities. 
To qualify as a candidate for consideration in the CDRT process, a piece of tactical nonstandard equipment must first be nominated and, in addition, must have been in use for at least 120 days and have undergone an operational assessment, among other qualifications. Once candidates are identified, the Army Capabilities and Integration Center and the Army G-3/5/7 compile a list and send it to the CDRT community of interest for assessment. Assessment of each item of equipment is performed through a scoring system based on survey responses from operational Army units. Based on the assessment, each piece of equipment is placed in one of three categories: Acquisition Program Candidate/Enduring, Sustain, or Terminate. Tactical nonstandard equipment placed in the “enduring” category is theater-proven equipment assessed as providing a capability applicable to the entire Army and to the future force; as such, it may become eligible to compete for funding in the Army’s base budget. Tactical nonstandard equipment placed in the “sustain” category is equipment assessed as filling a current operational need in the CENTCOM theater but not applicable to the entire Army or useful to the future force, or not yet recommended as an enduring capability. Sustain category tactical nonstandard equipment is resourced through overseas contingency operations funding, and is not programmed into the Army’s base budget. Finally, tactical nonstandard equipment placed in the “terminate” category is equipment deemed ineffective, obsolete, unable to fulfill its intended function, or without further utility beyond current use. Army policy states that tactical nonstandard equipment in this category is not to be allocated Department of the Army funding, although individual units may continue to sustain the equipment with unit funds. 
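The categorization step described above can be sketched in code. This is a hypothetical illustration only: the report does not specify the Army's actual scoring rubric, so the 1-to-5 survey scale, the cutoff values, and the `categorize` function are assumptions made for illustration.

```python
# Hypothetical sketch of the CDRT categorization step. The survey scale
# (1-5) and the cutoffs below are illustrative assumptions; the Army's
# actual scoring system is not detailed in this report.

def categorize(survey_scores, enduring_cutoff=4.0, sustain_cutoff=2.5):
    """Place a nominated capability into one of the three CDRT categories
    based on the average of operational-unit survey scores."""
    avg = sum(survey_scores) / len(survey_scores)
    if avg >= enduring_cutoff:
        return "Acquisition Program Candidate/Enduring"
    if avg >= sustain_cutoff:
        return "Sustain"
    return "Terminate"

# Three nominated capabilities with hypothetical unit survey responses.
print(categorize([5, 4, 5, 4]))  # theater-proven, Army-wide applicability
print(categorize([3, 3, 2, 3]))  # fills a current theater need only
print(categorize([1, 2, 1, 2]))  # ineffective or obsolete
```

The real process, of course, weighs qualitative factors beyond a single average score; the sketch only shows how survey results could map to the three categories.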
Through the CDRT process, the Army has been able to accelerate the normal process by which requirements and needs are developed, as outlined in the Joint Capabilities Integration and Development System. That is because tactical nonstandard equipment placed in the enduring category as a result of the CDRT process enters the Joint Capabilities Integration and Development System at a more advanced developmental stage, as opposed to entering the system from the start. Accordingly, the Army views the CDRT process as a key means for determining the future disposition of rapidly fielded capabilities. Although one of the tenets of the CDRT process is to assess rapidly developed capabilities fielded to deployed units and move those proven in combat to enduring status as quickly as possible, a significant majority of the tactical nonstandard equipment evaluated to date has been categorized as sustain category equipment to be used only in the CENTCOM theater and paid for with overseas contingency operations funds. As of January 2011, the CDRT community of interest had met 10 times and considered 497 capabilities, of which 13 were nonmaterial capabilities. As a result, 30 material and 10 nonmaterial capabilities were selected as enduring, and an additional 13 capabilities were merged into other programs. An example of an enduring category material capability involving tactical nonstandard equipment is the Boomerang Gunshot Detector, which is an antisniper detection system that detects gunfire and alerts soldiers to the shooter’s location. A further 116 material capabilities were terminated. An example of a capability that was terminated because the CDRT community of interest considered it obsolete is the Cupola Protective Ensemble, which is protective clothing worn over body armor to protect troops from the blast effects of improvised explosive devices. 
The remaining 328 capabilities, including for example the Combined Information Data Network Exchange, were placed in the sustain category. According to Army officials, this piece of tactical nonstandard equipment was placed in the sustain category because, although it works well in the CENTCOM theater, it would not be applicable elsewhere, as it is a database with intelligence information specific to that theater. Capabilities that are designated as sustain category items may be reviewed during future CDRT iterations to see if that decision is still valid, and selected excess equipment placed in this category and no longer required in theater is being warehoused by Army Materiel Command until called upon in the future. Army officials have also stated, however, that the majority of capabilities considered by the CDRT community of interest are placed in the sustain category because the Army has yet to make definitive and difficult decisions about whether it wants to keep them and cannot afford to sustain this equipment without overseas contingency operations appropriations. As we have previously recommended, DOD should shift certain contingency costs into the annual base budget to allow for prioritization and trade-offs among DOD’s needs and to enhance visibility in defense spending. The department concurred with this recommendation. The effectiveness of the Army’s CDRT process is also inhibited by the lack of a system to track, monitor, and manage this equipment, which, in turn, may be attributed to the absence of a single focal point with the appropriate authority to oversee the fielding and disposition of tactical nonstandard equipment. As stated above, to qualify as a candidate for consideration in the CDRT process, a piece of tactical nonstandard equipment must first be nominated. 
But without a system or entity responsible for tracking, monitoring, and managing all items of tactical nonstandard equipment in its inventory, some capabilities in the CENTCOM theater may not be nominated and, therefore, never considered by the CDRT community of interest. According to federal best practices reported in GAO’s Standards for Internal Control in the Federal Government, management is responsible for developing detailed policies, procedures, and practices to help program managers achieve desired results through effective stewardship of public resources. To this end, in March 2011 we reported that DOD lacks visibility over the full range of its urgent needs efforts—one of the methods through which tactical nonstandard equipment is obtained and fielded—including tracking the solutions developed in response to those needs. Additionally, we found that DOD does not have a senior-level focal point to lead the department’s efforts to fulfill validated urgent needs requirements. Accordingly, we recommended that DOD designate a focal point to lead the department’s urgent needs efforts and that DOD and its components, like the Army, develop processes and requirements to ensure tools and mechanisms are used to track, monitor, and manage the status of urgent needs. DOD concurred with our recommendation and stated that it would develop baseline policies that would guide the services’ own processes in tracking urgent needs and that the Director of the Joint Rapid Acquisition Cell would serve as the DOD focal point. In April 2010 the Vice Chief of Staff of the Army issued a memorandum calling for the development of a rapid acquisition/rapid equipping common operating picture and collaboration tool, as a means to increase the efficiency and transparency of Army urgent needs processes. 
As of April 2011, however, Army officials stated that the system directed by the Vice Chief of Staff had yet to be deployed due to a lack of agreement over information sharing and over who would be responsible for the system. Army officials have repeatedly stressed that they do not have visibility over the entire universe of tactical nonstandard equipment in the CENTCOM theater and that the CDRT process considers only those capabilities that have been nominated. In the absence of a common operating picture and a single focal point responsible for tracking, monitoring, and managing Army tactical nonstandard equipment, it is therefore possible that a piece of nonstandard equipment exists in the CENTCOM theater that is more effective, less expensive, or both, than a comparable piece of equipment that has been considered by the CDRT community of interest. Moreover, without visibility over the universe of tactical nonstandard equipment, the Army cannot project reset and sustainment costs for this equipment or ensure that equipment is funded only to the extent needed to meet a continuing requirement. The Army has recently transitioned MRAPs from nonstandard to standard items of equipment and published detailed disposition plans outlining how the vehicles will be integrated into the Army’s force structure. These detailed disposition plans are outlined in the document Final Report, Army Capabilities Integration Center, Mine Resistant Ambush Protected Study II (final report), which was released on June 22, 2011. This final report followed an August 2010 U.S. Army Training and Doctrine Command study to determine the best means to integrate MRAPs into the overall Army force structure. The August 2010 study presented Army leaders with two courses of action. Although there were several similarities between the two—for instance, each called for the placement of approximately 1,700 MRAPs in training sets—there were also some substantial differences. 
Specifically, the first course of action called for the placement of the majority of the Army’s MRAPs, more than 10,600, into prepositioned stocks. The second course of action allocated almost 4,000 fewer MRAPs to prepositioned stocks, and placed more with Army units. The August 2010 study recommended adoption of the first course of action because, according to Army officials, it offered the most balanced distribution of MRAPs among prepositioned stocks, training sets, reserve sets, and unit sets. Furthermore, the August 2010 study stated that other benefits that would accrue from the first course of action included reduced installation infrastructure effects and lower military construction costs, lower operations and maintenance costs, and lower life-cycle costs. For example, the study estimated that over a 25-year period, the first course of action would accrue $2.093 billion in life-cycle costs, while the second course of action would accrue $2.548 billion in life-cycle costs (these costs do not include onetime costs, discussed below, for upgrading and standardizing MRAPs that are returned to the United States). According to Army officials, the savings would result from having more MRAPs in prepositioned stocks, which, in turn, require less maintenance. Finally, according to Army Training and Doctrine Command officials, the first course of action provided the Army better operational flexibility, because MRAPs would already be positioned in forward areas and would not have to be transported from the United States, while the approach would still maintain sufficient numbers of MRAPs for training. On December 16, 2010, U.S. Army Training and Doctrine Command presented the results of its August 2010 study to the Army Requirements and Resourcing Board for decision. 
On April 20, 2011, Headquarters, Department of the Army, published an order to provide guidance to develop an execution plan for the retrograde, reset, and restationing of the MRAP fleet, with an end state being an MRAP fleet that is properly allocated and globally positioned to support the full range of Army operations. The order did not give any specifics regarding the allocation of MRAPs across the Army ground vehicle fleet, however. According to Army officials, these specifics would be provided by the final report, which was released on June 22, 2011. According to the final report, MRAPs will be allocated as shown in table 2. Although the specific allocation of MRAPs varies slightly from that recommended in the August 2010 study (for example, the course of action recommended in the August 2010 study allocated 970 MRAPs to reserve stocks instead of the 746 adopted by the final report), the reasons given in the final report for allocating the MRAPs across the fleet were essentially the same as proposed in the August 2010 study: to provide a balanced distribution of MRAPs between units and prepositioned stocks, to provide strategic depth and operational flexibility by placing the bulk of the MRAPs in prepositioned stocks, and to provide a pool of reserve stock MRAPs that could be used to sustain prepositioned stock sets and maintain unit MRAP readiness. In addition, as had the August 2010 study, the final report highlighted the expected life-cycle costs for MRAPs based on the chosen allocation. This figure, $2.086 billion over 25 years, is slightly lower than the figure estimated in the August 2010 study. Though both the August 2010 study and the final report state the estimated life-cycle costs for MRAPs over 25 years, neither estimate fully follows recommendations in DOD’s instruction on economic analysis and decisionmaking, Office of Management and Budget (OMB) guidance for conducting cost-benefit analyses, and GAO’s Cost Estimating and Assessment Guide. 
For example, all three sets of guidance recommend that costs be calculated in or adjusted to present value terms, yet both the August 2010 study and the final report present costs in constant fiscal year 2011 dollars. While constant dollars allow for the comparison of costs across years by controlling for inflation, present value analysis is also recommended when aggregating costs to account for the time value of money. As a result of not doing a present value analysis and not recognizing the time value of money, the timing of when the costs are expected to occur is not taken into account. According to DOD’s instruction for economic analysis and decisionmaking, “accounting for the time value of money is crucial to the conduct of an economic analysis.” Moreover, the August 2010 study and the final report present life-cycle costs in aggregate, yet OMB guidance regarding underlying assumptions suggests that key data and results, such as year-by-year estimates of benefits and costs, should be reported to promote independent analysis and review. DOD guidance suggests that the results of economic analysis, including all calculations and sources of data, should be documented down to the most basic inputs to provide an auditable and stand-alone document, and the GAO guide says that it is necessary to determine when expenditures will be made. Without a year-by-year breakout of the costs, decision makers have no insight on the pattern of expenditures, a perspective that could be important for future asset management and budgetary decisions. Moreover, a year-by-year breakout of estimated costs would facilitate independent analysis and review. Complicating the issue surrounding life-cycle costs for MRAPs is that neither the August 2010 study nor the final report indicates that the “known” life-cycle costs, as they are labeled, are not, in fact, the total life-cycle costs. 
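The difference between summing constant dollars and discounting to present value can be shown with a short sketch. The 2.7 percent real discount rate and the two cost streams below are illustrative assumptions, not figures from either Army document; the point is only that two streams with identical constant-dollar totals can have different present values when the spending occurs at different times.

```python
# Illustrative present value calculation of the kind the cited DOD, OMB,
# and GAO guidance recommends. The 25-year horizon mirrors the MRAP
# estimates; the 2.7% real discount rate and the cost streams (in
# constant-dollar millions) are assumptions for illustration only.

def present_value(costs_by_year, rate):
    """Discount a year-by-year cost stream (constant dollars) to year 0."""
    return sum(c / (1 + rate) ** t for t, c in enumerate(costs_by_year))

rate = 0.027
front_loaded = [200.0] * 5 + [50.0] * 20   # heavy early spending
back_loaded  = [50.0] * 20 + [200.0] * 5   # heavy late spending

# Both streams total the same 2,000 in constant dollars...
assert sum(front_loaded) == sum(back_loaded) == 2000.0

# ...but their present values differ, because timing matters.
print(round(present_value(front_loaded, rate), 1))
print(round(present_value(back_loaded, rate), 1))
```

A year-by-year stream like `costs_by_year` is also exactly the breakout that the OMB and DOD guidance cited above asks estimators to report, since the aggregate figure alone cannot be discounted or independently reviewed.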
According to Army officials, the costs depicted in both documents are differential costs, meaning that the only life-cycle costs that were used in the decision-making matrix were costs that would differ between the two courses of action. Conversely, costs associated with elements of each course of action that were the same were not included. For example, both courses of action delineated in the August 2010 study allocated 2,818 MRAPs to certain types of units (truck companies for convoy protection, for instance). According to Army officials, costs associated with these MRAPs were not included in the decision matrices depicted in either the August 2010 study or the final report, and nowhere in either report is this indicated. According to Army officials, the Army does not yet know the true total MRAP life-cycle costs, although the Army’s MRAP program management office is leading an effort to complete such an estimate no later than fiscal year 2015. Nevertheless, because neither document states that the life-cycle costs it presents are not total costs, decision makers may be misled. This omission also raises the question of the extent to which the Army considered the affordability of either alternative; the associated trade-offs in the sustainment of its current fleet of tactical and combat equipment; or offsets in future modernization procurement that might be necessary in its base budget to sustain the additional 18,259 vehicles, of which 4,727 will be assigned to units. Finally, although Army officials provided us with a copy of a sensitivity analysis, which all three sets of guidance recommend, neither the August 2010 study nor the final report indicates that a sensitivity or uncertainty analysis was done. According to DOD documents, as a joint program, MRAPs have been allocated, through July 2011, $44.1 billion in overseas contingency operations funding. 
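The distinction between differential and total life-cycle costs can be made concrete with a small sketch. The $2,093 million and $2,548 million differential figures come from the August 2010 study; the common-cost figure is hypothetical, since the Army had not yet estimated the costs shared by both courses of action.

```python
# Sketch of the "differential cost" point made above: costs common to both
# courses of action (such as the 2,818 MRAPs allocated identically in each)
# cancel out of the comparison, so a decision matrix built on differential
# costs ranks the alternatives correctly but understates their totals.
# The common-cost figure of 900 (constant-dollar millions) is hypothetical.

common_costs      = 900.0   # shared by both courses of action (assumed)
coa1_differential = 2093.0  # differential costs, course of action 1
coa2_differential = 2548.0  # differential costs, course of action 2

# The ranking of the alternatives is unaffected by the common costs...
assert coa1_differential < coa2_differential
assert coa1_differential + common_costs < coa2_differential + common_costs

# ...but the "known" life-cycle figure understates each alternative's total.
print(coa1_differential + common_costs)  # total cost, not differential cost
```

This is why the omission matters to decision makers: the differential figure is sufficient to choose between the two courses of action but not to judge the affordability of either.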
The military departments consequently have not had to fully account for long-term budgetary aspects and will eventually face substantial operational support costs in their annual base budgets. Army officials have likewise expressed concern about the loss of overseas contingency operations funding for MRAPs once the vehicles become part of the Army’s enduring force structure. Specifically, they are concerned about the Army’s ability to fund operations and maintenance costs for MRAPs within the Army base budget and the funding trade-offs that might have to be made with other major acquisition programs. On May 25, 2010, the Under Secretary of Defense (Comptroller) issued budget submission guidance to the DOD components stating that costs for non-war-related upgrades or conversions, home station training costs, and the storage of MRAPs not active in combat operations must be included in base budget estimates for fiscal years 2012 to 2016, thereby compelling the services to begin planning for funding MRAPs. Specific upgrades include increased armor protection, enhanced suspensions, and the standardization and consolidation of the many MRAP variants. In response, the Army has allocated $142.9 million in its fiscal year 2012 base budget submission for the upgrade of 224 MRAPs at Red River Army Depot and, all told, has planned to budget for the upgrade of 3,616 MRAPs for fiscal years 2012 through 2016, at a cost of $1.6 billion. However, the Army has not allocated funding for home station training or MRAP storage over the same period. According to the Army’s Tactical Wheeled Vehicle Strategy, one of the references used to inform the final report, it is important that the Office of the Secretary of Defense and the executive and legislative branches are kept informed of the Army’s needs to support its given missions and of any risks it foresees, so that thoughtful funding decisions can be made. 
In addition, this strategy states that the availability of adequate funding poses significant risks and that, if funding is lower than forecasted, the Army will be required to make difficult trade-offs that would, in turn, create increased operational risks. Moreover, in its April 20, 2011, order, Headquarters, Department of the Army, noted that one of the objectives of the order was to direct Planning, Programming, Budgeting, and Execution to ensure necessary action to identify and validate requirements used to inform future programming development. However, given the limitations to the cost estimates of both the August 2010 MRAP study and the final report on MRAPs, and the fact that the total cost estimates for the Army MRAP program are not yet complete, it is difficult to see how Planning, Programming, Budgeting, and Execution can be accomplished. Although the Army has plans and processes for the disposition of its nontactical and tactical nonstandard equipment, challenges remain that, if left unresolved, could affect plans for the eventual drawdown of U.S. forces from Iraq as well as Afghanistan. Specifically, without greater oversight over the universe of tactical nonstandard equipment currently being employed in Iraq and without a single focal point responsible for maintaining oversight of this equipment, there is a potential that some tactical nonstandard equipment that has been effective will be overlooked, and the Army could potentially forfeit opportunities for cost-saving efficiency and for ensuring that servicemembers are provided the most effective combat system. In addition, because the Army has categorized the vast majority of the tactical nonstandard equipment that it has considered as equipment that will continue to be funded with overseas contingency operations funds, it has not had to make the hard decisions about finding money for these programs in its base budget. 
Yet the Army cannot afford to sustain this equipment without overseas contingency operations funds, and continuing to fund these items in this manner places a strain on the Army budget that is not transparent. Finally, future costs associated with MRAPs will remain uncertain without a thorough analysis of those costs based on DOD, OMB, and GAO best practices and the completion of a true total cost estimate. Moreover, without the disclosure of the complete set of costs associated with MRAPs, the Army, the Office of the Secretary of Defense, and congressional decision makers will be unable to ascertain the long-term budgetary effects of the program, which is critical information in a time when competing programs are vying for finite and increasingly constrained funding. To facilitate the Army’s ability to efficiently evaluate, integrate, and provide for the disposition of its nonstandard equipment being retrograded from Iraq, and supply DOD decision makers and Congress with accurate estimates of the future costs of these systems, we recommend that the Secretary of Defense direct the Secretary of the Army to take the following three actions: (1) finalize decisions about the future status of tactical nonstandard equipment, fund those items deemed as enduring capabilities in the Army base budget if applicable, and provide Congress with its plans for and estimates on future funding for or costs associated with any equipment the Army will continue to use in theater that will not become enduring capabilities; (2) designate a senior-level focal point within the Department of the Army with the appropriate authority and resources to manage the service’s effort in overseeing the disposition of its tactical nonstandard equipment, to include the implementation of a servicewide means to track, monitor, and manage this equipment; and (3) undertake a thorough total life-cycle cost estimate for integrating MRAPs into its ground vehicle fleet in accordance with DOD, OMB, and GAO guidance and 
include costs for training, upgrades, standardization, and military construction; use this estimate to assess the affordability of its current plans and make adjustments to those plans if warranted; and provide the total life-cycle cost for integrating MRAPs into its ground vehicle fleet to Congress. In written comments on a draft of this report, DOD partially concurred with our first recommendation, did not concur with our second recommendation, and concurred with our third recommendation. These comments are included in appendix II. In addition, DOD provided technical comments that were incorporated, as appropriate. In response to our first recommendation that the Secretary of Defense direct the Secretary of the Army to finalize decisions about the future status of tactical nonstandard equipment, fund those items deemed as enduring capabilities in the Army base budget if applicable, and provide Congress with its plans for and estimates on future funding for or costs associated with any equipment the Army will continue to use in theater that will not become enduring capabilities, DOD partially concurred. In its response, DOD stated that the Capabilities Development for Rapid Transition (CDRT) process identifies enduring capabilities as Army Program Candidates and that the CDRT meets quarterly and provides recommendations to the DOD Joint Capabilities Development System, the Army Requirements Oversight Council, or the Joint Requirements Oversight Council depending on the acquisition strategy. DOD also stated that program managers and appropriate Army personnel then compete selected programs in the Program Operating Memoranda Joint Capabilities Assessment to secure funding and for inclusion in the President’s Budget Submission. Finally, DOD stated that the Army will provide the recommended report regarding any equipment the Army will continue to sustain in theater after Army forces return from Iraq. 
We support DOD’s provision of a report to Congress outlining the equipment that it will continue to sustain in theater with overseas contingency operations funds. We also recognize that the CDRT process has resulted in a recommendation that certain equipment become programs of record and, as such, compete for funding in the Army’s base budget. However, as we reported, of the 484 material capabilities considered by the CDRT process as of January 2011, only 30, including Armored Security Vehicles and One-System Remote Video Terminals, have received such a recommendation, while 328 material capabilities considered by CDRT were still being maintained by overseas contingency operations funds. Army officials familiar with the CDRT process have stated that the Army has yet to make definitive and difficult decisions about the majority of the material capabilities considered by CDRT and it cannot afford to sustain this equipment without overseas contingency operations funds. However, in order for the department to plan for and Congress to be informed of the future cost effect of sustaining new items of equipment after the end of overseas contingency operations funding, we continue to believe that the Army should eliminate this unknown by finalizing decisions about the future status of its tactical nonstandard equipment. DOD did not concur with our recommendation that the Secretary of Defense direct the Secretary of the Army to designate a senior-level focal point within the Department of the Army with the appropriate authority and resources to manage the service’s effort in overseeing the disposition of its tactical nonstandard equipment to include the implementation of a servicewide means to track, monitor, and manage this equipment. In its response, DOD stated that our recommendation does not account for the complexity covering requirements determination and approval, combat development, materiel development, management, and sustainment. 
In addition, DOD’s response stated that the Army uses the same processes to manage nonstandard equipment as it does to manage standard equipment and highlighted the responsibilities of the Army G-3/5/7, G-8, G-4, and Assistant Secretary of the Army for Acquisition, Logistics, and Technology with regard to nonstandard equipment. Moreover, in its response DOD maintained that the Army has visibility of the nonstandard equipment in theater and has undertaken extensive efforts to ensure all nonstandard equipment is brought to record and accounted for, and that the Army staff and the Life Cycle Management Commands review nonstandard equipment on a recurring basis to determine its disposition. In sum, DOD’s position is that the Army does not believe it advisable to treat tactical nonstandard equipment differently from nontactical nonstandard equipment or standard equipment. However, as the report points out, the Army already does treat tactical nonstandard equipment differently from nontactical nonstandard equipment and standard equipment, a fact underscored by the existence of the CDRT process, which is applicable only to tactical nonstandard equipment and not to any other types of equipment. In addition, Army officials repeatedly stressed to us that they do not have visibility over the universe of tactical nonstandard equipment in the CENTCOM theater. Army officials also told us that, despite an April 2010 memorandum from the Vice Chief of Staff of the Army calling for the development of a common operating picture and collaboration tool as a means to increase efficiency and transparency of Army urgent needs processes by which tactical nonstandard equipment is acquired, as of April 2011 one had yet to be fielded due to a lack of agreement over information sharing and over who would be responsible for the system. 
Moreover, in March 2011, DOD concurred with our recommendation that the department appoint a senior-level focal point to lead its urgent needs efforts and that its components, like the Army, develop processes and requirements to ensure tools and mechanisms are used to track, monitor, and manage the status of urgent needs. On the basis of the above, we continue to believe that like DOD, the Army should designate a senior-level focal point with the appropriate authority and resources to manage the service’s efforts in overseeing the disposition of its tactical nonstandard equipment to include the implementation of a servicewide means to track, monitor, and manage this equipment. DOD concurred with our third recommendation that the Secretary of Defense direct the Secretary of the Army to undertake a thorough total life-cycle cost estimate for integrating MRAPs into its ground vehicle fleet in accordance with DOD, OMB, and GAO guidance and include costs for training, upgrades, standardization, and military construction; that the Army use this estimate to assess the affordability of its current plans and make adjustments to those plans if warranted; and that the Army provide the total life-cycle cost for integrating MRAPs into its ground vehicle fleet to Congress. DOD commented that the Army staff, in conjunction with the Joint Program Office, is now conducting a Sustainment Readiness Review that addresses issues of total life-cycle costs for MRAPs, and that it will continue to refine its estimates to determine total life-cycle costs, which will inform future budget decisions as the Army continues to reset its force. We believe that if the Army’s total life-cycle cost estimate is conducted in accordance with DOD, OMB, and GAO guidance and used to develop an affordable plan for integrating MRAPs into its vehicle fleet as well as to provide Congress with a total life-cycle cost of its plan, its actions will be responsive to our recommendations. 
We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, and the Secretary of the Army. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Should you or your staff have any questions on the matters discussed in this report, please contact me at (202) 512-8365 or solisw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. To determine the extent to which the Army has plans and processes for the disposition of nontactical nonstandard equipment no longer needed in Iraq, we reviewed and analyzed relevant documents, including various Army messages that address the procedures for requisitioning retrograded nonstandard equipment from Iraq. In addition, we interviewed Army officials at relevant organizations throughout the chain of command and at several different organizations. We also reviewed Army Materiel Command briefings regarding the Materiel Enterprise Non-Standard Equipment database and Virtual Mall demonstrations and spoke with officials involved with the National Association of State Agencies for Surplus Property program. Furthermore, we conducted a site visit to Sierra Army Depot, where the vast bulk of the Army’s nontactical nonstandard equipment is shipped once it leaves Iraq, to view procedures and processes there for the evaluation, disposition, storage, and integration of nontactical nonstandard equipment. We also drew from our body of previously issued work related to nonstandard equipment to include various Iraq drawdown-related issues to identify areas where the Department of Defense (DOD) could make improvements in executing and managing the retrograde of standard and nonstandard equipment from Iraq. 
To determine the extent to which the Army has plans and processes for the disposition of tactical nonstandard equipment no longer needed in Iraq, we reviewed and analyzed relevant documents, including Army plans, messages, guidance, regulations, and briefings that addressed the subject. We also reviewed Army Audit Agency reports that specifically address the Capabilities Development for Rapid Transition process as well as the sustainment of tactical nonstandard equipment. In addition, we interviewed Army officials at several relevant organizations throughout the chain of command and made a site visit to Fort Monroe, Virginia, where we interviewed officials from U.S. Army Training and Doctrine Command and from the Army Capabilities and Integration Center, both of which play leading roles in determining the ultimate disposition of tactical nonstandard equipment. We also interviewed officials from the Joint Improvised Explosive Device Defeat Organization to discuss the interface between that organization and the Army’s processes for integrating tactical nonstandard equipment into its inventory. Finally, we drew from our body of previously issued work examining DOD’s urgent needs processes and the need for DOD to obtain visibility over these efforts. To determine the extent to which the Army has plans and processes for the disposition of Mine Resistant Ambush Protected vehicles (MRAP) no longer needed in Iraq, we reviewed and analyzed relevant documents, including Army plans, messages, guidance, and briefings that addressed the subject. In particular, we reviewed the Army’s MRAP disposition plans included in the Final Report, Army Capabilities and Integration Center, Mine Resistant Ambush Protected Study II, and also considered in our analysis the Army’s Tactical Wheeled Vehicle Strategy. 
We also analyzed Army cost estimates for integrating MRAPs into its ground vehicle fleet and compared these estimates with DOD’s instruction for economic analysis, the Office of Management and Budget’s guidance for conducting cost-benefit analyses, and GAO’s Cost Estimating and Assessment Guide. We interviewed relevant officials with direct knowledge of the Army’s future plans for its MRAPs throughout the chain of command to include officials from the Army’s budget office and Red River Army Depot, where MRAPs will be shipped once they are no longer needed in Iraq or Afghanistan. Moreover, we made a site visit to Fort Monroe, Virginia, where we interviewed officials from U.S. Army Training and Doctrine Command and from the Army Capabilities and Integration Center, both of which were tasked to complete the MRAP Study II Final Report; and since the MRAP program is currently a joint program under U.S. Marine Corps lead, we also interviewed officials from the MRAP Joint Program Office. Finally, we drew from our body of previously issued work regarding MRAPs to include the rapid acquisition of these vehicles as well as the challenges the services have faced with incorporating MRAPs into their organizational structures. In addition to the contact named above, individuals who made key contributions to this report include Larry Junek, Assistant Director; Nick Benne; Stephen Donahue; Guy LoFaro; Emily Norman; Charles Perdue; Carol Petersen; Michael Shaughnessy; Maria Storts; and Cheryl Weissman.
As of March 2011, the Army had over $4 billion worth of nonstandard equipment in Iraq--that is, equipment not included on units' standard list of authorized equipment. Concurrently, the Department of Defense (DOD) has acquired over $44 billion worth of Mine Resistant Ambush Protected vehicles (MRAP), most of which have been allocated to the Army. This equipment must be withdrawn from Iraq by December 31, 2011. GAO examined the extent to which the Army has plans and processes for the disposition of (1) nontactical nonstandard equipment; (2) tactical nonstandard equipment; and (3) MRAPs that are no longer needed in Iraq. In performing this review, GAO analyzed relevant documents, interviewed Army officials, and visited Sierra Army Depot, where most nontactical nonstandard equipment is shipped once it leaves Iraq. The Army has plans and processes for the disposition of nontactical nonstandard equipment (e.g., durable goods that are used to provide services for soldiers), and recently created a policy regarding the length of storage time. Excess nontactical nonstandard equipment is either redistributed in the U.S. Central Command theater, disposed of, provided to other nations through foreign military sales or other means, or shipped to depots in the United States. In April 2011, the Army issued two messages that updated its procedures for requisitioning excess nonstandard equipment stored at Sierra Army Depot and created a forum to determine its final disposition instructions. The intent was also to extend use of this equipment by making it available to Army units; when an item is deemed not operational, to dispose of it in theater; and to enter these instructions in a disposition database so that such items will no longer be shipped back to the United States. The Army would then avoid unnecessary transportation costs. 
The Army has not made disposition decisions for most of its tactical nonstandard equipment (i.e., commercially acquired or non-developmental equipment rapidly acquired and fielded outside the normal budgeting and acquisition process), and its disposition process is impaired by a lack of visibility over this equipment and the absence of a focal point to manage this equipment. The Capabilities Development for Rapid Transition process enables the Army to assess tactical nonstandard equipment already in use in the U.S. Central Command theater and determine whether it should be retained for the Army's current and future force and subsequently funded in the Army's base budget. However, for most of the equipment considered by the process, the decision has been to continue funding it with overseas contingency operations funds. In addition, the Army has no system to track, monitor, and manage its inventory of tactical nonstandard equipment and has no single focal point to oversee this equipment. Best practices as cited in GAO's Standards for Internal Control in the Federal Government call for effective stewardship of resources by developing detailed policies, procedures, and practices. Although the Army has plans for the disposition of its MRAP fleet, its cost estimates are incomplete and do not follow cost-estimating best practices. The Army conducted a study to effectively guide its integration of MRAPs into its force structure. The selected option placed the majority of MRAPs in prepositioned stocks. However, this study did not incorporate analyses of future costs based on Department of Defense, Office of Management and Budget, and GAO cost-estimating guidance providing best practices; nor did it delineate total costs for sustainment of its MRAP fleet or when those costs would be incurred. Without such information, decision makers lack the perspective necessary to make asset-management and budgetary decisions. 
Although Army officials stated that they are working toward providing an estimate of future MRAP costs, this has not yet been completed. GAO recommends that the Secretary of Defense direct Army authorities to (1) finalize decisions about the future status of tactical nonstandard equipment; (2) designate a focal point to oversee this equipment; and (3) undertake a thorough life-cycle cost estimate for its MRAPs. DOD concurred with our third recommendation, partially concurred with our first, and did not concur with the second. Given DOD's lack of visibility over tactical nonstandard equipment, GAO continues to believe a focal point is needed.
According to a 1995 assessment by the IPCC, climate models project that increasing atmospheric concentrations of the primary greenhouse gases—carbon dioxide, methane, and nitrous oxide—and aerosols will raise the average global surface temperature between 1.8 and 6.3 degrees Fahrenheit by 2100. The IPCC estimates that such a temperature increase could lead to many potential impacts, including flooding, droughts, changes in crop yields, and changes in ecosystems. In an effort to address concerns about the possibility of global climate change, the United States and other countries signed the United Nations Framework Convention on Climate Change at the Rio Earth Summit in May 1992. As of June 1996, 159 countries had ratified the Convention. The Convention’s ultimate objective is to stabilize the concentrations of human-induced greenhouse gases in the atmosphere at a level that would prevent dangerous interference with the climate system. To accomplish this objective, the Convention directs the Annex I parties to adopt policies and measures to limit greenhouse gases and to protect and enhance the greenhouse gas sinks and reservoirs that absorb and store carbon dioxide from the atmosphere. The Convention also directs the Annex I parties to submit plans to the Conference of the Parties with detailed information on the policies and measures that will help return net greenhouse gas emissions to 1990 levels by 2000. (See app. I for more details on the Convention and a list of Annex I countries.) As of May 1996, 33 of the 36 countries listed under Annex I had ratified the Convention. At the first session of the Conference of the Parties to the Convention held in April 1995, the countries acknowledged that the existing commitments under the Convention are not adequate to meet the overall objective of stabilizing greenhouse gas concentrations. This determination was formally designated by the Conference of the Parties as the Berlin Mandate. 
To address the inadequacies, the parties agreed to begin a process to define actions in the post-2000 period, including strengthening the commitments of the parties included in Annex I by elaborating policies and measures, as well as by setting quantified objectives for limiting and reducing emissions. The Department of State’s Under Secretary for Global Affairs recently announced to the Conference of the Parties that the United States supports the adoption of binding emissions targets beyond 2000. The process of determining actions beyond 2000, designed to include in its early stages an analysis and assessment phase, is scheduled for completion before the third Conference of the Parties, currently set for late 1997. Carbon dioxide is considered the major contributor to global warming. Developed countries—as identified by their membership in the Organization for Economic Cooperation and Development (OECD)—accounted for about half of the world’s energy-related carbon dioxide emissions in 1990. The United States was responsible for about 22 percent of the total carbon dioxide emissions. Developing countries are projected to account for an increasing share of worldwide carbon dioxide emissions in the future as a result of their increasing growth in energy demand. For example, the Energy Information Administration estimates that China’s share of carbon dioxide emissions will almost double from about 10 percent in 1990 to about 19 percent in 2015. Therefore, even if the developed countries are able to stabilize carbon dioxide emissions, worldwide emissions are likely to increase because of the expected large growth in developing countries. The incomplete, unreliable, and inconsistent data on emissions prevent a complete assessment of Annex I countries’ efforts to limit greenhouse gas emissions to 1990 levels by 2000. These problems occurred for several reasons, including a lack of specific reporting requirements by the Convention. 
As of February 1996, the Convention had compiled data from the national plans of 29 Annex I countries. These countries accounted for 60 percent of the estimated global carbon dioxide emissions from fossil fuel combustion in 1990. All 29 countries reported 1990 data on carbon dioxide, and 28 of the 29 reported similar data on methane and nitrous oxide. However, eight countries did not provide projections to 2000 for at least one of those gases. For example, Spain did not include projections of either methane or nitrous oxide in its plan. Additionally, only eight countries provided projections of the other greenhouse gases also covered under the Convention, such as hydrofluorocarbons. While emissions for such gases are now small, they are projected to increase in the future. Also, some reported data lack precision. Specifically, although countries provided emissions data for methane and nitrous oxide, the level of certainty in such data is low. For example, the uncertainty range for reported methane emissions in Canada’s national plan is plus or minus 30 percent at a 90-percent confidence level; the range is plus or minus 40 percent for nitrous oxide at an 85-percent confidence level. In contrast, the uncertainty level for Canada’s carbon dioxide emissions was only plus or minus 4 percent at a 95-percent confidence level. Reliability in measuring the emissions data for methane and nitrous oxide is not as high as for carbon dioxide. Because these gases come from many sources and are nontoxic, little effort has been given to measuring their emissions. Additionally, the countries’ emissions data were not always consistent. For example, some Annex I countries adjusted their 1990 inventory levels in order to develop what they believed to be a more reasonable starting point for projections to 2000. As a result, a different picture emerges of a country’s ability to meet the goal, depending on whether the projections are compared to actual or adjusted 1990 levels. 
To illustrate, Denmark adjusted its 1990 inventory level upward to show what emissions would have been if imported hydroelectric power had been generated domestically with fossil fuels. Consequently, Denmark’s carbon dioxide projections exceed the actual 1990 levels; but when the adjusted level for 1990 is used, the projections for 2000 are below the 1990 level. Two major factors contributed to problems in the Annex I countries’ reporting of emissions data. First, the parties to the Convention did not formally adopt reporting guidelines until April 1995—after most countries had submitted their national plans—and the guidelines adopted in 1995 were not specific in all cases. For example, the guidelines did not specify whether emissions projections were to be reported on the basis of gross emissions or net emissions, which account for the carbon dioxide removed from the atmosphere by forests and other greenhouse gas sinks. Only 13 of 29 Annex I countries separately reported projections of carbon dioxide sinks. The parties to the Convention have recognized shortcomings in the guidance. In its comments on a draft of this report, the Department of State noted that the parties had adopted revised reporting guidelines at their Second Conference in July 1996. These revised guidelines will be used for the second round of national plans due to be submitted in April 1997. The Department of State has stated that these national plans will be significantly improved because of the revised guidelines. Furthermore, it expects that the Conference of the Parties will continue to revise and improve the guidelines. The other major factor contributing to problems with the data on greenhouse gas emissions is that, as previously noted, the countries have not yet been able to quantify with certainty the emissions of methane and nitrous oxide because of the limited reporting data. 
Although the currently available emissions data prevent a complete assessment of countries’ progress in meeting the Convention’s goal, projections by energy forecasting agencies of carbon dioxide emissions from fossil fuel use—which is the largest single category of greenhouse gas emissions—indicate that few Annex I countries will likely be able to return emissions to 1990 levels by 2000. Of the major developed countries, only Germany and the United Kingdom appear likely to reduce carbon emissions to 1990 levels by the year 2000. Other major developed countries—including Canada, Italy, Japan, and the United States—will probably not reach the goal. A few other Annex I countries in eastern Europe, such as the Czech Republic, may be able to meet the Convention’s goal. The projections by the Annex I countries themselves indicate that only 7 of the 24 countries that provided point estimates of carbon dioxide emissions in 2000 project that they can hold emissions near or below 1990 levels. (See table 1.) For the remaining countries, the increases over the 1990 inventory levels ranged from 1.7 percent to 28.8 percent. The projections from other organizations also indicate that few countries will be able to stabilize carbon dioxide emissions. For example, the Energy Information Administration’s May 1996 International Energy Outlook forecasts that carbon dioxide emissions from energy consumption will increase for most of the Annex I countries from 1990 to 2000. Specifically, the agency projects that carbon dioxide emissions will increase 11 percent in the United States, 21 percent in Japan, 18 percent in Canada, and 6 percent in OECD Europe. The International Energy Agency (IEA) also projects increases in carbon dioxide emissions between 1990 and 2000 for Annex I countries. 
In its 1994 Review of Energy Policies of IEA Countries, published in July 1995, this agency forecasts increases in energy-related carbon dioxide of 10 percent for the United States, 13 percent for Canada, and 8 percent for Europe. On the basis of our review of six developed countries—Canada, Germany, Italy, Japan, the United Kingdom, and the United States—we found that energy use is the major factor affecting the ability of those countries to meet the goal of returning greenhouse gas emissions to 1990 levels by 2000. Therefore, the major factors that affect trends in energy use—such as growth in gross domestic product (GDP), population growth, energy prices, and energy efficiency—also affect trends in greenhouse gas emissions. The ability to shift from coal, the burning of which produces a high level of greenhouse gases, to other fuels is also a major factor. Table 2 provides information on these factors for the six countries we reviewed. (App. II provides additional information on the goals of the six countries in connection with climate change and the status of the additional actions that those countries are considering to help reach the Convention’s goal.) In response to the Convention’s goal on greenhouse gases, the United States issued its Climate Change Action Plan (CCAP) in October 1993. The plan includes 44 largely voluntary initiatives designed to return net emissions of the major greenhouse gases—carbon dioxide, methane, nitrous oxide, and hydrofluorocarbons—to 1990 levels by 2000. The CCAP aimed to cut the net projected growth of 7 percent in the major greenhouse gas emissions between 1990 and 2000 and to achieve stabilization at the 1990 level of 1,462 million metric tons of carbon equivalent (MMTCE). Without the plan’s initiatives, emissions were projected to grow to 1,568 MMTCE. The CCAP laid the foundation for the U.S. national plan submitted to the Convention in September 1994. The United States estimates that it will fall short of its target. 
Efforts to reduce greenhouse gas emissions in the United States have been hampered by changes in forecasts of key economic variables, such as higher-than-projected economic growth and lower-than-expected energy prices, that differ from the assumptions made in the CCAP. The changes in these economic indicators tend to increase energy use and therefore also increase greenhouse gas emissions. For example, the world oil price per barrel in 2000 was estimated to be $24.04 (1994 dollars) in the CCAP, but the Energy Information Administration’s 1996 Annual Energy Outlook—which contains the executive branch’s latest forecasts—now estimates that the price will be $19.27 per barrel (1994 dollars). Also, annual population growth is now projected to be higher than expected when the CCAP was formulated—about 1.0 percent per year as compared with the 0.7 percent projected in 1993. Population growth tends to increase energy use and consequently greenhouse gas emissions. (App. III compares in more detail the changes in key economic factors and fuel prices affecting the U.S. efforts.) Officials at the Department of Energy and the Environmental Protection Agency—which are responsible for implementing the bulk of the CCAP actions—noted that the reductions in the funding for the plan also have a substantial negative effect on the United States’ ability to reduce greenhouse gas emissions by 2000 by limiting the agencies’ ability to implement voluntary initiatives in the plan. For example, in fiscal year 1996, only about one-half of the requested funds were appropriated. Table 3 provides annual budget requests and appropriations for fiscal years 1995 through 1997. Lower estimated prices will, in general, also make the implementation of voluntary initiatives less likely. According to an official with the Council on Environmental Quality, legislation has also precluded the implementation of the few nonvoluntary actions in the plan, such as requiring that tires be labeled for fuel economy. 
The Council on Environmental Quality, the Department of Energy, the Department of State, and the Environmental Protection Agency are currently revising the CCAP. A new plan is scheduled to be issued in the fall of 1996. Canada’s national plan relies primarily on a set of voluntary measures aimed at increasing energy efficiency and conservation and encouraging a switch to less carbon dioxide-intensive energy sources. Because of Canada’s high energy intensity, most of its human-induced greenhouse gas emissions are generated by the demand for energy to heat and light homes, operate industries, and other uses. Factors such as low population density, large distances between urban areas, and a cold climate create unique circumstances that make Canada a highly energy-intensive country. A recent estimate indicates that Canada will likely miss the Convention’s goal by a significant amount: the Energy Information Administration estimates that carbon dioxide emissions will increase by 18 percent. The country’s high energy intensity, low energy prices, and fast-growing population, among other factors, have contributed to the gap. Japan is also likely to miss the Convention’s goal. Japan’s Action Report on Climate Change, issued in 1994, estimated that total carbon dioxide emissions in 2000 would exceed their 1990 levels. Current projections by the Energy Information Administration indicate that carbon dioxide emissions in Japan may increase by 21 percent. Over the last 20 years, Japan has consistently consumed one of the lowest percentages of energy per dollar of economic output among developed countries because of energy efficiency programs and initiatives. Therefore, achieving additional greenhouse gas reductions is difficult. As a result, even low levels of growth in the economy and population increase energy use and greenhouse gas emissions. 
Additionally, Japan had planned to build several additional nuclear power plants that would emit fewer greenhouse gases than the coal-powered facilities they would replace. However, the country has encountered difficulties in siting and building those plants. Italy’s energy intensity is also low compared with that of other major developed countries. According to a State Department official, Italy is more energy efficient than other developed countries because of high energy prices and regulations limiting energy use. Therefore, additional energy savings and greenhouse gas reductions may be difficult to achieve, although Italy is forecast to experience a relatively low rate of economic growth. Italy’s national plan discusses additional measures to further reduce carbon dioxide emissions, but their impact may be minimal. In its national plan, Italy projects that its carbon dioxide emissions in 2000 will exceed 1990 emissions by about 12.5 percent without additional measures. Germany and the United Kingdom, the only two major developed countries positioned to meet the Convention’s goal, are also subject to economic factors that can cause energy use to increase. However, as the result of unique circumstances set in motion before the Convention’s goal was established, both Germany and the United Kingdom are likely to meet the goal. According to an official in Germany’s Ministry of the Environment, the principal reason that Germany is expected to exceed the Convention’s goal is the reunification of the former East Germany with West Germany in 1990. The depressed economic conditions in East Germany, including low productivity levels and high unemployment, and the shift from inefficient coal technology to natural gas are helping to reduce greenhouse gas emissions significantly. For instance, carbon dioxide emissions in the former East Germany have already decreased by about 43 percent from 1990 to 1994. 
In contrast, during the same period, carbon dioxide emissions increased about 3 percent in what was formerly West Germany. In its national plan, Germany has also sought to achieve the Convention’s goal by implementing a broad range of voluntary and regulatory measures aimed at reducing greenhouse gas emissions. Progress in the United Kingdom is largely attributable to the privatization of its energy utilities over the last decade, which is bringing about a significant switch from coal to natural gas, the fossil fuel that produces the lowest level of carbon dioxide emissions per unit of energy consumed. To illustrate, the Energy Information Administration has estimated that natural gas as a percentage of total energy consumption will increase in the United Kingdom from 23 percent in 1990 to 35 percent in 2000. The United Kingdom has also increased its taxes on energy use, which it believes will also help to reduce greenhouse gas emissions. United Kingdom officials now estimate that carbon dioxide emissions in 2000 will be about 4 percent to 8 percent below 1990 emissions. The ability to assess countries’ individual and relative efforts in reducing greenhouse gas emissions depends greatly on the countries’ reporting of complete, reliable, and consistent emissions data. However, some of the national plans submitted by Annex I countries have not provided such data. Consequently, a complete assessment cannot be made of whether these countries will meet the Convention’s goal of reducing all greenhouse gas emissions to 1990 levels by 2000. The recent adoption of revised reporting standards should improve the ability to assess progress against the current Convention goal. Negotiations are already under way aimed at reaching agreement on new, binding emissions targets past 2000 for these same countries. 
Reporting guidelines designed to help ensure that complete, reliable, and consistent emissions data are provided by countries will also be an essential element of any new agreement. We recommend to the Secretary of State that, as part of ongoing international negotiations, the United States urge that reporting standards be formulated and adopted for any new targets beyond 2000 in order to enhance the completeness, reliability, and consistency of emissions data. We provided a draft of our report to the Department of State, the Council on Environmental Quality, the Department of Energy, and the Environmental Protection Agency for their review and comment. The Department of State commented that our report provides an accurate assessment of the progress of countries in reducing greenhouse gas emissions. The Department also agreed with our recommendation and noted that revisions had recently been made to the reporting guidelines that will lead to improved national plans. We updated our report to reflect that recent development. The Department also provided several additional comments, and we have revised the report as appropriate. (See app. IV for the Department of State’s comments and our response.) The Council on Environmental Quality noted that our report provides a useful overview of the activities to date by the United States and other developed countries and agreed with our recommendation. The Council provided additional information to add context to our report. (See app. V for the Council’s comments and our response.) The Department of Energy provided editorial comments on our report, which we incorporated as appropriate. The Environmental Protection Agency had no comments on our report. We conducted our audit work from September 1995 through July 1996 in accordance with generally accepted government auditing standards. A detailed discussion of our objectives, scope, and methodology is contained in appendix VI. 
As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 15 days from the date of this letter. At that time, we will send copies to the Secretary of State; the Secretary of Energy; the Administrator, Environmental Protection Agency; the Director, Council on Environmental Quality; the Director, Office of Management and Budget; and other interested parties. We will also make copies available upon request. Please call me at (202) 512-6111 if you or your staff have any questions. Major contributors to this report are listed in appendix VII.

The United Nations Framework Convention on Climate Change entered into force on March 21, 1994. As of June 1996, 159 countries had ratified the Convention. The Convention’s ultimate objective is the “stabilization of greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous interference with the climate system from human activities.” To achieve this goal, the Convention established different types of goals and commitments for developed and developing countries. Under the Convention, all parties are to do the following:

Prepare and communicate to the Conference of the Parties inventories of greenhouse gas emissions caused by human activity using comparable methodologies.

Develop and communicate to the Conference of the Parties programs to mitigate the effects of greenhouse gases and measures the countries might take to adapt to climate change.

Cooperate in the transfer of technology addressing greenhouse gas emissions in all relevant sectors of the economy.

Promote sustainable management of greenhouse gas sinks and reservoirs.

Cooperate in preparing for adaptation to the impacts of climate change.

Integrate considerations of climate change with other policies.

Conduct research to reduce the uncertainties about scientific knowledge of climate change, the effects of the phenomenon, and the effectiveness of responses to it.

Exchange information on matters such as technology and the economic consequences of actions covered by the Convention.

In addition to the above commitments, the Convention required developed countries and other parties included in Annex I of the Convention to do the following:

Adopt national policies and take corresponding measures to mitigate climate change with the aim of returning human-induced emissions of greenhouse gases to 1990 levels by the year 2000 and protecting and enhancing greenhouse gas sinks and reservoirs.

Communicate, within 6 months of the Convention’s entry into force and periodically thereafter, detailed information on policies and measures to limit greenhouse gas emissions, as well as on the resulting projections of greenhouse gas emissions and removals by sinks.

Coordinate as appropriate with other parties the relevant economic and administrative instruments developed to achieve the objective of the Convention.

Identify and periodically review policies and practices that encourage activities that lead to greater levels of human-induced emissions of greenhouse gases than would otherwise occur.

The 36 Annex I countries are listed below. Of the Annex I countries, Belarus, Ukraine, and Turkey have not ratified the Convention. The European Economic Community—now known as the European Union—was also included as an Annex I party to the Convention. The countries listed in bold are those undergoing a transition to a market economy. The six countries we reviewed—Canada, Germany, Italy, Japan, the United Kingdom, and the United States—established various goals and employed varying approaches to attempt to meet their commitments under the Convention. This appendix describes the goals and plans to meet the goals for each of the six countries we reviewed. In 1990, Canada adopted a national goal to stabilize net emissions of all greenhouse gases by 2000 relative to 1990 emissions. Canada released its National Report on Climate Change, which outlines its plan to meet the goal. 
Canada’s approach relies primarily on a set of voluntary measures aimed at increasing energy efficiency and conservation and encouraging a switch to less carbon dioxide-intensive energy sources. Because of Canada’s high energy intensity, most of its greenhouse gas emissions are generated by the demand for energy to heat and light homes and operate industries, as well as for other uses. Carbon dioxide emissions, generated chiefly from energy production and consumption, accounted for the majority of the 1990 actual emissions. Canada has acknowledged that it will miss its national goal if additional actions are not taken. It is not yet known how any additional initiatives will affect Canada’s progress toward the climate change goal. Germany established an ambitious goal of reducing its emissions of carbon dioxide by 25 percent to 30 percent and its emissions of other greenhouse gases by 50 percent in 2005 relative to 1987 emissions levels. Germany has sought to achieve the goals by implementing a broad range of over 100 measures primarily aimed at reducing carbon dioxide emissions. Thus far, carbon dioxide emissions in Germany have decreased by about 16 percent from 1987 to 1994, primarily because of depressed economic conditions in the former East Germany. In addition to those reductions, several German industry associations have agreed to voluntarily decrease carbon dioxide emissions by up to 20 percent relative to 1990 levels in order to help Germany meet its ambitious national goal. However, recent reports suggest that Germany will not be able to meet that goal, although it will most likely meet the Convention’s goal by reducing greenhouse gas emissions below 1990 levels by 2000. The Italian government has noted that its national plan was the outgrowth of policies adopted for the Convention but was also designed to comply with prior decisions by the European Union to stabilize greenhouse gas emissions. 
The plan cites several initiatives already under way in the energy and transportation sectors but notes that an annual increase of between 0.4 percent and 0.9 percent in carbon dioxide emissions from energy consumption would still result. The plan also discusses possible additional initiatives that would help stabilize greenhouse gas emissions. These initiatives are primarily aimed at electricity generation, industrial production, the residential sector, and transport. Budgetary constraints and other factors, however, may impede the implementation of such measures. A recent estimate by the Italian Environment Ministry is that carbon dioxide emissions will increase by about 3 percent between 1990 and 2000. An official in that ministry stated that the government is still confident that it can meet the Convention’s goal by enacting additional measures. Japan has established a goal of stabilizing its per capita emissions and total emissions of carbon dioxide at 1990 levels by 2000. To achieve the carbon dioxide target, in October 1990 Japan established an Action Program to Arrest Global Warming. In addition, Japan has pledged to undertake efforts to stabilize methane, nitrous oxide, and other greenhouse gas emissions, but has not specified a reference year. Japan estimates that it will not reach its goal for total carbon dioxide emissions if additional measures are not taken. Japan sought to reduce its emissions by building several nuclear power plants to help phase out the use of coal, but it has encountered difficulties in siting and building the plants. The United Kingdom has adopted the Convention’s goal of stabilizing emissions of all greenhouse gases at 1990 levels in the year 2000. The United Kingdom established its strategy for meeting the goal in a January 1994 report, Climate Change, The UK Programme. The program relies essentially on a set of measures to reduce carbon dioxide emissions by improving energy efficiency. 
The United Kingdom also has adopted an 8-percent value-added tax on residential fuel. The United Kingdom’s program aims to return carbon dioxide emissions to 1990 levels by reducing emissions by 6 percent. The program also aims to reduce emissions of methane to around 10 percent below 1990 levels, nitrous oxide by 75 percent, and emissions of other greenhouse gases by 25 percent to 90 percent. A United Kingdom official said that the country estimates it will meet its national target. In response to its commitment to the Climate Convention, the United States issued the Climate Change Action Plan (CCAP) in October 1993. The plan includes 44 initiatives designed to return net emissions of the major greenhouse gases—carbon dioxide, methane, nitrous oxide, and hydrofluorocarbons—to 1990 levels by 2000. The plan relies primarily on voluntary programs to reduce greenhouse gas emissions and enhance the capacity of greenhouse gas sinks to store carbon dioxide removed from the atmosphere. The U.S. plan aims to cut the net projected growth of 7 percent in the major greenhouse gas emissions between 1990 and 2000 in order to return emissions to 1990 levels by 2000. The United States estimates that it will likely fall short of its target without additional measures. Currently, the Council on Environmental Quality, the Department of Energy, the Department of State, and the Environmental Protection Agency are updating the plan by developing additional ways to achieve the Convention’s goal. The new CCAP is scheduled to be issued in the fall of 1996. Changes in key economic factors and energy prices have made it more difficult for the United States to meet the goal of reducing greenhouse gas emissions to 1990 levels by 2000. Table III.1 shows changes in key growth factors between the 1993 Climate Change Action Plan (CCAP) and the Energy Information Administration’s Annual Energy Outlook (AEO) 1996. Table III.2 compares projected fuel prices in 2000 in these two documents. 
[Tables III.1 and III.2, not reproduced here, compare growth factors (in percent) and projected fuel prices in 2000—world oil price in dollars per barrel, wellhead natural gas in dollars per thousand cubic feet, and minemouth coal in dollars per ton—between the CCAP (in 1991 dollars and converted to 1994 dollars) and AEO 1996 (in 1994 dollars).]

The following are GAO’s comments on the Department of State’s letter dated July 31, 1996.

1. We have revised our report to reflect the recent adoption of revised reporting guidelines for national plans to be submitted in conjunction with the Convention’s current goal and the potential improvement they provide.

2. We continue to believe that a significant portion of the emissions data from national plans submitted thus far are incomplete, unreliable, or inconsistent. Therefore, as noted in the report, these data limit an assessment of countries’ progress against the Convention’s goal. We agree that estimates provided by other groups, such as the International Energy Agency, also provide some basis for determining progress, especially given that many Annex I countries will probably not come close to reaching the Convention’s current goal. However, these other estimates are limited to carbon dioxide. Additionally, it is unclear how emissions data from these other groups will be considered by the Conference of the Parties in assessing progress against the current goal or any future binding targets.

3. We revised our report to note the formulation and adoption of improved guidelines from the Conference of the Parties in July 1996. We also noted that the original guidelines were adopted in April 1995, after the submission of many of the national plans.

4. We do not state in our report that progress will be assessed solely on the basis of the national plans but rather that the ability to assess countries’ progress depends greatly on complete, reliable, and consistent data. We believe national plans will be a key component of that assessment and therefore improving the data in the plans is important. 
Additionally, as noted in comment 2, estimates from other groups apply only to carbon dioxide, and it is unclear how such estimates would be factored into assessing progress by the Conference of the Parties.

5. We revised our recommendation to note that reporting guidance could help enhance the completeness, reliability, and consistency of the reported emissions data rather than solve all the data problems. Also, despite broad agreement on methodologies for calculation of historical emissions, high levels of uncertainty still exist on reported emissions data other than carbon dioxide.

The following are GAO’s comments on the Council on Environmental Quality’s letter dated July 31, 1996.

1. The Council on Environmental Quality notes that it is not surprising that differences exist in details reported by the countries, particularly for gases that constitute only a small fraction of greenhouse gases. However, we found that some of the problems with reported greenhouse gas emissions data, such as adjustments to 1990 emissions, also applied to carbon dioxide, the greenhouse gas reported to be the largest contributor to global warming. Additionally, emissions of greenhouse gases other than carbon dioxide—for which reported emissions data were incomplete in some cases and for which the reliability of the data was uncertain—constitute a significant enough portion of estimated total greenhouse gases to influence whether or not countries can meet the Convention’s current goal or future binding targets. For example, these gases have been estimated to account for about 15 percent of the total U.S. greenhouse gas emissions in 1990, and that percentage is higher in many Annex I countries.

2. We have revised the report to note this recent development. 
The Ranking Minority Member of the House Committee on Commerce asked us to review the efforts of the United States and other Annex I countries toward returning greenhouse gas emissions to 1990 levels by 2000 as agreed under the 1992 United Nations Framework Convention on Climate Change. In addition, the requester asked that we determine the major factors that may impede the countries’ progress in achieving the goal. We conducted our work from September 1995 through July 1996 in accordance with generally accepted government auditing standards. To determine the progress that the United States and other Annex I countries have made in reducing greenhouse gas emissions to 1990 levels, we obtained data on each country’s greenhouse gas emissions for 1990 and projections for 2000 from the United Nations Secretariat on the Climate Change Convention and from other groups such as the Energy Information Administration and the International Energy Agency. We also reviewed other reports prepared by the Convention Secretariat that assessed the adequacy of the Convention’s reporting guidelines and the national plans. We also discussed reporting issues with State Department and Convention officials. To determine the major factors that affect the countries’ progress toward achieving the emissions target, we concentrated our efforts on Canada, Germany, Italy, Japan, the United Kingdom, and the United States. We chose those six countries because they have been the largest emitters of carbon dioxide among developed countries. We obtained and reviewed the national plans of the six countries and spoke with representatives of each country to determine the major factors affecting their ability to reach the Convention’s goal. 
We also discussed these factors with climate change experts and reviewed relevant reports from the Organization for Economic Cooperation and Development, the International Energy Agency, the Energy Information Administration, the Global Climate Coalition, and the United States Climate Action Network.

William F. McGee, Assistant Director
Robert D. Wurster, Senior Evaluator
Mary A. Crenshaw, Senior Evaluator
Pursuant to a congressional request, GAO evaluated the United States and other countries' progress in reducing greenhouse gas emissions to 1990 levels by the year 2000. GAO found that: (1) incomplete, unreliable, and inconsistent data prevent a complete assessment of these countries' efforts to limit greenhouse gas emissions to 1990 levels by 2000; (2) the United Nations Framework Convention on Climate Change has compiled emissions data from 29 countries since February 1996; (3) all 29 countries reported 1990 data on carbon dioxide, 28 countries reported similar data for methane and nitrous oxide, and 8 countries did not provide projections to 2000 for at least one of the gases; (4) the level of uncertainty in emissions data is high since some countries adjusted their 1990 inventory levels to develop more reasonable projections for year 2000; (5) the Convention's reporting guidelines do not specify whether emissions' projections should be reported as gross emissions or net emissions; (6) this lack of detail affects the completeness and comparability of emissions inventories; (7) Germany and the United Kingdom are the only major developed countries that are likely to return to 1990 emissions levels by 2000; (8) energy use is the major factor affecting Annex I countries' ability to meet 1990 greenhouse levels by 2000; (9) efforts to reduce greenhouse gas emissions in the United States are hampered by changes in key economic variables; and (10) the adoption of revised reporting guidelines will help to ensure that complete and reliable emissions data are reported.
Section 501 of the Internal Revenue Code (IRC) provides for tax exemption for certain corporations, trusts, and other organizations. Section 501(c) establishes 29 categories of tax-exempt organizations, ranging from cemeteries to professional football leagues—see appendix II for a complete list of the different types of tax-exempt organizations, as well as more detailed information on the tax treatment of these organizations. The largest number of such organizations falls under section 501(c)(3), which recognizes charitable organizations. Generally, while charitable organizations pay no income taxes on contributions received, these entities can be taxed on income generated from business activities that are unrelated to their charitable purposes. These charitable organizations and related parties may be subject to several additional IRS penalties and fines for certain actions, such as not filing a required tax return. Generally, taxpayers who itemize their deductions may deduct the amount of any contribution to charitable organizations from their taxable income. The IRC requires that an organization adhere to certain accepted charitable, religious, educational, scientific, or literary purposes to qualify for 501(c)(3) tax-exempt status. The IRC also prohibits charitable organizations from undertaking certain activities: no earnings of the organization may benefit individual or private shareholders, no substantial attempt may be made to spread propaganda or influence legislation, and no effort may be made to campaign for or against a candidate for public office. In order to receive 501(c)(3) tax-exempt status, an organization must submit a Form 1023 or Form 1023-EZ and organizing documents to IRS to describe its charitable purpose and financial data. The organizing documents can include articles of incorporation or by-laws governing the activities and charitable purpose of the entity. 
Submitting an application is one of the first interactions a tax-exempt organization will have with IRS. The organization will then receive a determination letter from IRS which informs the organization whether its application for tax-exempt status has been approved or denied. Alternatively, the organization may be told that IRS needs additional information to make a determination. If an organization is successful in receiving tax-exempt status, it will continue to interact with IRS on an annual basis by submitting an information return or an electronic notice. The form contains information on the organization’s mission, programs, finances, and governance structure, which helps IRS to determine whether the organization is meeting its charitable purpose and therefore is eligible for tax-exempt status. Certain organizations that have gross receipts of $50,000 per year or less may submit the Form 990-N—an abbreviated electronic version of the Form 990 return, often called the e-Postcard. The Form 990-EZ is less abbreviated, though still more streamlined than the Form 990, and may be used by certain entities with gross receipts under $200,000 and assets under $500,000. Private foundations complete the 990-PF. IRS oversees charitable organizations through its Exempt Organizations (EO) Business Division (a part of IRS’s Tax Exempt and Government Entities Division). EO’s oversight relies primarily on two sets of activities: a front-end review of applications and a back-end review of a relatively small number of information and tax returns. At the front end, the EO Rulings and Agreements office reviews all applications for tax-exempt status. At the back end, EO Examinations analyzes the operations and finances of a small percentage of exempt organizations through examinations (audits). Exam agents propose tax assessments or changes to exempt status when necessary, as well as advise organizations about how to comply with the law in the future. 
Through these activities, IRS tries to ensure that charities merit both the recognition and the retention of tax-exempt status. IRS and state charity regulators both play key roles in providing oversight of charitable organizations. IRS’s oversight interests are in ensuring that tax-exempt organizations comply with federal tax law. State charity regulators have a broader oversight interest, which includes the application of state trust, non-profit corporation, consumer protection, and charitable solicitation laws. Although these federal and state regulators have distinct oversight interests, these interests are closely related and, at times, overlapping. Both IRS and state charity regulators seek to prevent excess compensation, private inurement, conflicts of interest, and other abusive practices by charitable organizations. IRS also collaborates with U.S. Attorneys at the Department of Justice (DOJ) to identify and prosecute criminal tax violations. Although EO does not collaborate directly with DOJ, criminal investigations may be initiated within IRS at the recommendation of an IRS revenue agent who has detected potential criminal activity. Investigations may also be initiated at the advice of a U.S. Attorney’s office, a law enforcement agency, or the public. IRS criminal investigations are led by an IRS special agent. If substantial evidence is found for a criminal case, then—after multiple rounds of supervisory review—the investigation may be referred to the DOJ Tax Division or the relevant U.S. Attorney’s office. If DOJ or the U.S. Attorney accepts the investigation for prosecution, the case is then handled by the prosecutors. Charitable organizations comprise a significant part of our economy in terms of their share of the gross domestic product (GDP) and their importance in services vital to the well-being of citizens. Available estimates indicate that tax-exempt organizations serving households generate about 5 percent of U.S. GDP. 
The federal government increasingly relies on these organizations to deliver critical services: in 2012, government agencies paid an estimated $137 billion to nonprofit organizations in grants and contracts for services. Also, for fiscal year 2013, the Department of the Treasury (Treasury) estimated the tax expenditure for deductions for contributions to charitable organizations totaled over $48 billion. As shown in figure 1, the total population of nonprofit organizations can be broken down by tax-exempt status and by the requirement to file returns. In 2012, an estimated 2.3 million nonprofit organizations operated in the United States, including organizations that have not applied for tax-exempt status with IRS. Of nonprofits in 2011, 1.63 million had been recognized as tax-exempt by IRS, and 1.08 million of them were 501(c)(3) charitable organizations. An estimated 274,000 of these charitable organizations filed returns—around 189,000 filed Form 990 and 85,000 filed Form 990-EZ. As shown in figure 2, the number of charitable organizations that were recognized as tax-exempt by IRS increased from more than 960,000 in 2003 to more than 1.28 million in 2010 (a 33 percent increase) but declined to 1.05 million in 2013 (an 18 percent decrease from 2010). This decline after 2010 was primarily due to the Pension Protection Act of 2006 (PPA), which mandated that any organization—large or small—that failed to file a required return or notice for three consecutive years would automatically lose its federal tax exemption. Since PPA’s passage, more than 570,000 organizations have lost their tax-exempt status through the automatic revocation process. The net effect on the number of charitable organizations (when new applicants and reinstated revocations are included) is an average decline of about 200,000 from the peak in 2010. 
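The percentage changes cited above follow directly from the counts in the text; a minimal arithmetic check, using the approximate figures as stated (variable names are illustrative):

```python
# Counts of IRS-recognized charitable organizations cited in the text (approximate).
orgs_2003 = 960_000
orgs_2010 = 1_280_000
orgs_2013 = 1_050_000

# Percent increase from 2003 to the 2010 peak.
increase = (orgs_2010 - orgs_2003) / orgs_2003 * 100
# Percent decrease from the 2010 peak to 2013.
decrease = (orgs_2010 - orgs_2013) / orgs_2010 * 100

print(round(increase))  # 33
print(round(decrease))  # 18
```

Note that the roughly 18 percent decrease is measured relative to the 2010 peak, not relative to the 2003 count.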
Charitable organizations, excluding private foundations, represent a diverse array of mission areas and range greatly in asset size, the amount of revenue they raise, and their expenses. As shown in figure 3, the largest number of charitable organizations filing Forms 990 and 990-EZ were in the human services sector. However, the health and education sectors had the largest amount of assets. The human services sector represents 38 percent of the total population of charitable organizations that file Form 990 or 990-EZ returns, followed by the education sector at 18 percent and the health sector at 13 percent. For more information by mission category, see appendix III. All three sectors cover a diverse range of programs and serve different segments of the population. The human services sector includes activities related to employment, housing and shelter, and youth development. Education includes elementary, secondary, vocational, and technical schools and universities. Health includes hospitals, mental health and crisis intervention, and medical research. The assets of charitable organizations that file Forms 990 or 990-EZ returns are concentrated in the health and education sectors, which held about 75 percent of the total assets—more than $2 trillion—while the human services sector held almost $324 billion (or about 11 percent) of the total. In addition to being concentrated in a few sectors, a large proportion of total assets were controlled by a relatively small number of charitable organizations. Of the charitable organizations that file a Form 990 or Form 990-EZ, less than 3 percent held more than 80 percent of the assets, as shown in figure 4. These organizations were primarily from the health and education sectors. Conversely, the more than 59 percent of charitable organizations filing Form 990 or 990-EZ returns that have less than $500,000 in assets held less than 1 percent of all assets. 
Most of the expenses of charitable organizations that filed Form 990 were for program services, which are mainly activities that further the organization’s tax-exempt purposes. Total expenses of charitable organizations are broken down on the Form 990 into program service expenses, management and general expenses, and fundraising expenses. About 87 percent of Form 990 charitable organization expenses in 2011 were for program services, about 12 percent for general management, and about 1 percent for fundraising. As we noted in our 2002 report, charitable organizations have discretion in determining how to charge and allocate expenses for program services, general management, and fundraising. The differences in methods can result in charities with similar activities allocating expenses differently among expense categories. This complicates interpretation of these data. For more information on expenses, see appendix III. Through its examinations, IRS can analyze the operations and finances of tax-exempt organizations and propose tax assessments or changes to exempt status when necessary. In general, IRS attempts to select entities that it believes are likely to have violated requirements, such as unauthorized use of an organization’s assets, or engaging in political activity. On the basis of these examinations, IRS can accept the Form 990 as filed or can change the status of the entity, impose excise taxes for certain types of violations, or revoke the tax-exempt status if the violations are serious enough. It can also assess taxes if an entity has not fully paid employment taxes or taxes on unrelated business income. IRS can also advise organizations on complying with the law in the future. This can include sending written advisories to organizations advising them of specific issues that need to be addressed. IRS receives indications of noncompliance from a variety of sources that can lead to examinations, as shown in figure 5. 
EO initiated 8,413 exams of tax-exempt organization returns in 2013, and 4,495 of these (or about 53 percent) were exams of charitable organizations. As shown in table 2, the largest source for these exams (41 percent) was the category that includes the IRS National Research Program project. EO participated in an IRS National Research Program project on employment taxes in 2013, which contributed to an unusually high number of exams during that year. The next largest source (22 percent) was Form 990 data analytics. In 2008, IRS redesigned the Form 990 for the purposes of promoting compliance and increasing transparency. The redesigned form requires filing organizations to supply more in-depth information than previous versions. For example, the form includes new questions on governance, compensation, activities, relationships with related organizations, international activities, fundraising, non-cash contributions, and other compliance areas. A team of EO specialists developed data-mining queries (based on the redesigned form) to identify suspected inaccuracies or anomalies. For example, with the new Form 990, EO can search for whether an organization reports that it has a mortgage and receives rental income—suggesting that it has unrelated business income. If the organization does not file a Form 990-T, Exempt Organization Business Income Tax Return, to report any unrelated business income, IRS is more likely to select the organization for examination. As of April 2014, EO had developed a list of over 150 condition codes based on return line entries to identify potential noncompliance issues. Of the charitable organization examinations initiated in fiscal year 2013, 632 (or 14 percent) were the result of referrals (including news items)—communication EO receives from internal and external sources alleging potential noncompliance with the tax law. 
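The mortgage-and-rental-income example above can be sketched as a simple screening rule. This is only an illustration of the kind of condition-code query described, not IRS's actual codes or data layout; the field names and sample records are hypothetical:

```python
# Hypothetical screen mirroring the example in the text: flag an organization
# that reports a mortgage and rental income (signals of unrelated business
# income) but did not file a Form 990-T to report that income.
def flag_for_exam(org: dict) -> bool:
    has_unrelated_income_signals = org["has_mortgage"] and org["rental_income"] > 0
    return has_unrelated_income_signals and not org["filed_990t"]

# Hypothetical sample records.
orgs = [
    {"name": "A", "has_mortgage": True,  "rental_income": 50_000, "filed_990t": False},
    {"name": "B", "has_mortgage": True,  "rental_income": 50_000, "filed_990t": True},
    {"name": "C", "has_mortgage": False, "rental_income": 0,      "filed_990t": False},
]
flagged = [o["name"] for o in orgs if flag_for_exam(o)]
print(flagged)  # ['A']
```

In practice, EO's list of over 150 condition codes presumably combines many such rules over the redesigned Form 990's line entries.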
EO managers told us referrals are prioritized so that those involving a serious breach of public trust or abuse—such as financial investigations or allegations of terrorism—are to be examined right away. On the other hand, high-profile referrals—those resulting from a media exposé or involving a well-known organization—are not necessarily high priority and may not be examined right away. The two most common sources of referrals for all tax-exempt organizations in 2013 were the general public, with about 81 percent of the 6,940 total referrals, and other IRS functional areas, with about 12 percent of the total. The specific potential violations most commonly alleged were that income or assets were being used for private benefit; the organization was involved in a political campaign; or the organization failed to report employment, income, or excise tax liability properly. With IRS budget cuts, the number of EO full-time equivalent (FTE) staff has declined over the past several years, leading to a steady decrease in the number of organizations examined. The total number of FTEs in the EO division decreased from 889 to 842 (about 5 percent), and the number of FTEs doing examinations declined from 529 to 493 (about 7 percent), between fiscal years 2010 and 2013, as shown in table 3. IRS examines only a small percentage of charitable organizations that file returns, including private foundations, as shown in table 4. EO examination rates were lower relative to those of other IRS divisions. For charitable organizations, the examination rate was about 0.7 percent in 2013, while for individual and corporate tax returns it was 1 percent and 1.4 percent, respectively. A comparison of tables 3 and 4 also shows that the number of employees performing exams has declined while the number of returns filed has increased. From fiscal year 2011 to 2013, the exam rate decreased from 0.81 percent to 0.71 percent (a relative decline of about 12 percent). 
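The roughly 12 percent figure is a relative change in the exam rate, not a percentage-point change; a minimal check of the arithmetic:

```python
# Exam rates for charitable organization returns cited in the text (in percent).
rate_fy2011 = 0.81
rate_fy2013 = 0.71

# Relative decline in the exam rate between fiscal years 2011 and 2013.
relative_decline = (rate_fy2011 - rate_fy2013) / rate_fy2011 * 100
print(round(relative_decline))  # 12
```

That is, a drop of 0.10 percentage points from a base of 0.81 percent is a decline of about 12 percent in the rate itself.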
As shown in table 5, examinations may result in no change to the amount of taxes owed or the tax-exempt status, the assessment of taxes or penalties, or the revocation of tax-exempt status. The no-change rate over the past three years has been between 30 and 34 percent. According to EO officials, IRS uses change and no-change rates as one indicator of how well it is targeting exams; higher no-change rates indicate that IRS is spending resources examining compliant entities. The 30 percent no-change rate for charitable organizations is relatively high compared to the rate for some other filers. For example, in 2013, the no-change rate for all examinations of individual tax returns ranged from 9 to 12 percent, depending on the type of exam that was conducted.

An organization may be subject to a tax assessment or penalty charges for filing a late return, for failure to provide required documents, or for late payment of taxes. IRS can also revoke tax-exempt status when charitable organizations (or individuals responsible for the organization) violate certain rules; the organization would then have to reapply for tax-exempt status and start the process over. Revocations may result when an organization is found to be engaging in non-exempt activities, operating in a commercial manner, or allowing inurement of net earnings or assets of the organization to benefit an officer, director, or a key employee who has a personal or private interest in the activities of the organization. This type of enforcement action happens infrequently.

An exam may also identify issues—such as a proposed expansion of an unrelated business income-producing activity, or a failure to properly report required information, such as special fundraising activities or officer compensation—that, if they grew, could jeopardize the organization's tax-exempt status. In such cases, however, there is no change in the organization's tax-exempt status.
Instead, the examiner will issue a closing letter with a written advisory addendum to the organization identifying the noncompliance issues found during the examination which, if corrected, would bring the organization into compliance.

As figure 6 shows, the number of charitable organization returns with revocations declined from 2011 to 2013. Generally, the most often-cited reason for revocation was that the charitable organization was not operating for a charitable purpose; the exception was 2011, when failure to file returns and other records was cited slightly more often. Operating for the benefit of private interests was another reason for revocation.

The EO division has faced challenges over the past several years from declining budgets and staffing and from the complexity and sensitivity of its workload, which includes regulating the political activity of tax-exempt organizations. EO's ability to address these challenges has been hindered by a lack of performance measures that EO management could use to fully assess and communicate how effective its efforts have been. EO managers have introduced some timeliness measures that may lead to efficiency gains by helping them track their inventory of applications, referrals, and exam cases and identify bottlenecks as they occur. However, EO managers lack compliance goals and measures for assessing their progress in ensuring that charitable organizations are conforming to their charitable purpose and other aspects of the tax law.

EO is developing a new approach to identify and select organizations for examination. EO also introduced a streamlined application form for smaller organizations seeking tax-exempt status that may reduce administrative burden. However, the limited amount of information that will be available about these organizations raises questions about how IRS will identify noncompliance issues for this particular segment of charitable organizations.
In addition, a lack of timely, accurate digitized data from the Form 990 further complicates oversight efforts.

EO has developed measures and goals to track its output of application reviews and examinations, but these do not measure impact on compliance. EO managers develop an annual work plan, which lists the research projects and exams they plan to initiate over the course of the year. The work plan includes goals for the number of cases they plan to start and close within the year and the estimated number of staff days for each project. EO management uses a quarterly dashboard to track progress against the workload goals set forth in the annual work plan. However, these performance goals and measures do not show the impact of EO efforts on compliance. They are useful for monitoring processes within EO, but they do not measure outcomes—such as improvements in the compliance rate for the charitable organization sector. Developing outcome-oriented goals and establishing measures to assess the actual results of EO activities (compared to their intended purpose) can help EO improve performance and determine whether programs have produced desired results.

EO Examinations managers use the examination change rate as a measure of compliance. The change rate is measured as the percentage of exams that result in a written advisory, a change in the amount of taxes, penalties, or fines an organization owes, or a change in the organization's tax-exempt status. The change rate EO reports from exams may be a useful indicator of how accurately EO is selecting charitable organizations for exam, but it has limitations as a compliance measure. EO does not use a random sample of organizations to estimate the change rate for the charitable sector as a whole and—except in the case of certain projects described below—it does not estimate the rate by type of charitable organization or by compliance issue.
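The change-rate measure described above reduces to a simple proportion over closed exams. The following is a minimal sketch under an assumed record layout (the field names are invented for illustration):

```python
# Minimal sketch of the change-rate measure described above: the share of
# closed exams ending in a written advisory, a change in taxes/penalties/
# fines, or a change in tax-exempt status. Record layout is hypothetical.

def change_rate(exams):
    changed = sum(
        1 for e in exams
        if e["written_advisory"] or e["tax_change"] or e["status_change"]
    )
    return changed / len(exams)

closed_exams = [
    {"written_advisory": True,  "tax_change": False, "status_change": False},
    {"written_advisory": False, "tax_change": True,  "status_change": False},
    {"written_advisory": False, "tax_change": False, "status_change": False},
    {"written_advisory": False, "tax_change": False, "status_change": True},
]
print(change_rate(closed_exams))  # 0.75
```

Note that the complement of this figure is the no-change rate discussed earlier; the measure says nothing about the sector as a whole unless the exams were randomly selected.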
Because change rates are based on nonrandom exam selection procedures, they cannot be generalized to the sector as a whole, and thus cannot be used to enhance compliance by focusing enforcement efforts on the areas with the highest risk of noncompliance. In addition, EO's change rate is ambiguous as a measure of compliance because, while it includes the number of written advisories that EO managers send following an exam, EO managers do not track the percentage of organizations that received a written advisory in the past and continued to be noncompliant.

In addition to lacking some key compliance measures, EO managers do not set specific, measurable goals related to improving the compliance levels of tax-exempt organizations, either as a whole or for charitable organizations in particular. One reason it is difficult to set compliance goals, according to IRS and EO officials, is that IRS has not completed a study for charitable organizations that establishes a compliance baseline. In order to track progress toward a compliance goal, generally accepted evaluation standards require establishing a baseline for compliance measures and recording how these measures change over time.

A possible source for such a baseline is IRS's National Research Program (NRP), but the cost of establishing a baseline may be large. The NRP examines a sample of tax returns to estimate rates of noncompliance for all taxpayers and for different types of taxpayers and tax issues. According to IRS and Exempt Organizations officials, it would be useful to have a research program like the NRP identify compliance baselines for EO. More compliance data on the EO sector would allow EO examiners to better select organizations for exam and would potentially reduce the no-change rate. However, these officials also point out that such a project requires a considerable investment of resources, and they say undertaking such a large-scale, long-term project may not be feasible, given the resource constraints faced by IRS.
Another possible source of baseline data is the projects initiated by the EO Examinations unit. EO Examinations sponsors new research projects each year to help identify noncompliance (which can include estimating a change rate) by types of operations, by segments of the charitable sector, or around a particular issue or item from the Form 990. A project can begin when EO decides that potential for noncompliance exists for a certain issue based on data-mining queries, referrals, or some other source (as described above). A random sample of organizations in that segment is then selected for review.

EO managers said they generally judge a project to be successful if it achieves a change rate of 80 percent or higher. For example, if 80 percent or more of the returns examined as part of the review result in a change in the amount of taxes, penalties, or fines the organization owes, a change in the organization's tax-exempt status, or a significant change in the organization's operations, EO managers may then decide to incorporate the original query as part of their regular examination selection criteria. However, EO does not use the results of the projects as baselines to assess whether the strategies adopted as a result of the projects have improved compliance among the segments of the tax-exempt sector that were studied. EO managers see these projects as a way to gather information they can use to develop strategies to address emerging issues, measure overall levels of compliance, and address areas of known noncompliance. But the impact on compliance remains unclear, because EO does not consistently follow up in subsequent years to measure the effect of the projects or of the policy changes made as a result of them. Even when a full research project using a representative sample is not justified given EO's resource limitations, other approaches may still provide useful information about compliance.
For example, tracking how frequently exams turn up the issues identified in a project, while not an adequate compliance measure given the nonrandom exam selection procedures, could provide insight into whether those issues are still a reason for concern.

EO's project reviewing charitable organizations that provide consumer credit counseling illustrates how establishing a baseline could have helped EO measure the effect of policy changes on compliance. The project examined over 200 organizations, and IRS revoked, terminated, or proposed revoking the exemptions of around 60 percent of them for abuses such as failure to provide education, operating as a commercial business, or serving the private interest of directors, officers, and related entities. According to the 2011 Exempt Organizations Annual Report, this project helped to stimulate legislative changes in 2006, including new tax rules governing exempt credit counseling organizations. However, IRS has not evaluated the effect of these changes in the law on subsequent compliance by charitable organizations that provide credit counseling. Because the project examined over 200 of the credit counseling organizations, including the 63 largest, it may have had sufficient coverage of this part of the charitable sector to provide a baseline for assessing the effect of the legislative changes on compliance. According to EO officials, they decided not to open a large volume of new credit counseling cases after the initial project was completed because the initial project still had cases in the appeals process and the relevant sections of the 2006 legislation had phase-in provisions under which the new rules took full effect only for tax years after January 1, 2011. However, EO currently has no plans to undertake such a study in the future.
EO officials told us that they intend to change their source-based approach to selecting and conducting exams starting in 2015, in order to make better use of limited resources. They said they intend to rely more on data-mining queries based on the redesigned Form 990 to detect high-risk areas of noncompliance and to prioritize enforcement efforts. This approach may also better conform to strategies IRS lays out in its strategic plan for 2014-2017, which calls for IRS to develop improved research-driven methods and tools to detect and combat noncompliance and improve resource allocation. EO officials also report that they will prioritize case selection according to criteria that give more weight to more consequential outcomes; for example, a data-mining query generating many revocations would take priority over one that may generate only written advisories. They hope the new approach will allow them to better select cases for exam and to measure their effectiveness. However, this approach is still in the early stages of development, with implementation beginning in fiscal year 2015. Also, without compliance goals, related performance measures, and more complete indicators of compliance, it will be difficult to assess the effectiveness of the new strategy.

Developing measures of compliance and determining the impact of projects and exams on compliance is challenging. As we have discussed in past reports, IRS researchers have found it difficult to determine the extent to which its enforcement actions deter noncompliance or its services improve compliance among taxpayers who want to comply. The challenges to determining impact include collecting reliable compliance data, developing reasonable assumptions about taxpayer behavior, and accounting for factors outside of IRS's actions that can affect taxpayer compliance, such as changes in the tax law.
Nevertheless, even if IRS (or in this case, EO) is unable to empirically estimate the extent to which its actions directly affected compliance rates, periodic measurements of compliance levels can indicate the extent to which compliance is improving or declining and can provide a basis for reexamining existing programs and triggering corrective actions, if necessary. Best practices indicate that establishing results-oriented goals can help agency officials demonstrate they have thought through how the activities and initiatives they are undertaking are likely to lead to meaningful results in line with programmatic goals. An example of a results-oriented goal for EO would be a specific, measurable increase in compliance rates for particular types of organizations (such as the credit counseling organizations described above) or for certain issues that have been problematic in the past (such as failure to pay employment taxes). We have also previously reported that setting results-oriented goals, establishing performance measures and related performance indicators with targets for meeting such goals, and reporting on progress against those goals are the hallmarks of effective management. Without results-oriented goals and related performance measures for compliance, EO officials cannot compare the success of different initiatives against one another, determine whether their compliance strategy is working as intended, or allocate resources to the activities that have the most impact on compliance levels.

In May 2013, the Treasury Inspector General for Tax Administration (TIGTA) found that applicants for tax-exempt status experienced significant delays in the review of their applications. Specifically, it reported that as of December 2012, many organizations had not received an approval or denial letter—more than two years after they submitted their applications.
TIGTA also reported that the EO Rulings & Agreements function did not have specific timeliness goals for processing applications, such as potential political cases, that require significant follow-up with the organizations. The Taxpayer Advocate also reviewed EO operations and concluded that EO management did not have meaningful performance measures required for effective management oversight of the application process, such as how long it takes, on average, to process applications that cannot be disposed of during initial screening and what percentage of inventory had not been reviewed in nine months or more.

The lack of meaningful performance measures compounded other challenges faced by EO management. As discussed earlier, the number of charitable organizations recognized as tax-exempt by IRS increased over the past decade, while the number of EO employees declined along with budget cuts. In addition, the auto-revocation process that began in 2011 (following changes to legislation in 2006) inadvertently led to an increase in the number of applications. By revoking the status of organizations that had not filed in three years, the process revoked, as intended, the status of hundreds of thousands of organizations that no longer existed, but it also purged thousands of organizations that still merited tax-exempt status but may have been unaware of their filing requirements. Once their status was automatically revoked, many applied for reinstatement, leading to a spike in applications. According to IRS senior officials, other challenges faced by EO management included outdated information technology records management systems, which made it difficult for managers to understand and manage the size of the application backlog.

EO managers introduced performance measures and goals to help address these challenges and the concerns about applications inventory raised by TIGTA and the Taxpayer Advocate.
For fiscal years 2013 and 2014, EO managers set a goal to eliminate the backlog of applications waiting for review for 270 days or more. To assess their progress in managing the inventory of applications, EO managers focused on performance measures such as the number of applications received and closed, the average age of the applications inventory, and the number of cases still pending after 270 days. They anticipate that these measures will allow them, for the first time, to track their progress, to more readily identify choke points and the reasons slowdowns are occurring, and to react accordingly. Managers can now access this information through a weekly dashboard to monitor the inventory, and they communicate it to senior EO management on a quarterly basis. As of September 2014, IRS Exempt Organizations officials reported they had closed over 117,000 cases in fiscal year 2014, an increase of 121 percent over the prior year's closings. They also reported that the end-of-year inventory was 22,759, compared to 65,718 at the end of fiscal year 2013.

EO managers also introduced a streamlined application review process to reduce their inventory of aged cases. Initially, application reviewers relied more heavily on attestation statements made under penalty of perjury than on substantiating documents. For example, if an organization failed to include a narrative statement describing its activities as required by the application Form 1023, IRS would ask the organization to attest that it met the operational test for tax-exempt status, rather than hold the case open until the organization submitted the appropriate paperwork.
Similarly, applicants that failed to submit organizing documents—which are important because they describe how the organization's purpose and assets comply with tax-exempt purposes set forth in section 501(c)(3)—would need to attest that they had the appropriate organizing documents and that they met statutory and regulatory requirements, rather than provide the actual documents. EO officials said attestation statements allowed applicants to indicate that they were fully aware of the application requirements, but the statements were not used in isolation. For example, if an application indicated the possibility of private inurement, the reviewer was supposed to ask the applicant about this issue and would not rely solely on attestation. In the spring of 2014, EO managers determined that the interim guidance for application reviewers was unclear, and they told reviewers there must be a narrative and organizing documents, although the IRM has not yet been updated to reflect this change. The new, streamlined procedures initially applied only to cases in the applications inventory that were more than a year old but were extended in May 2014 to all existing inventory. EO officials told us these streamlined review procedures are temporary and that they are commissioning a study to evaluate their effectiveness and efficiency.

In addition, in July 2014, EO managers introduced a new application form for relatively small organizations. This application adopts the same approach of substituting attestation for documentation as used initially in the streamlined inventory procedure. The new application (Form 1023-EZ) can be used by certain organizations with annual gross receipts of $50,000 or less and assets of $250,000 or less seeking tax-exempt status. This form is considerably shorter than Form 1023 (3 pages compared to 12 pages), asks fewer questions, and its questions are primarily yes/no or check-a-box attestations.
Form 1023-EZ does not require detailed information, such as organizing documents, financial statements, or explanations, descriptions, or narratives about activities, as is required on Form 1023. EO managers anticipate that the majority of new applicants will use the streamlined Form 1023-EZ.

Several organizations, including the National Council of Nonprofits and the National Association of State Charity Officials (NASCO), have raised concerns about the impact of the shorter Form 1023-EZ on compliance. These concerns include decreasing the quality of information IRS needs to make informed decisions about granting tax-exempt status, making it easier for “scam” charities to obtain tax-exempt status, and shifting IRS oversight obligations onto the public, the funding community, and state charity regulators. The Taxpayer Advocate also raised concerns about the streamlined Form 1023-EZ, including a lack of empirical data demonstrating that organizations anticipating less than $50,000 in gross annual receipts pose low compliance risks, a failure to conduct a comprehensive evaluation of the downstream consequences of the streamlined application, and a post-implementation evaluation plan that relies on the limited effect of a small number of audits to correct potential compliance problems. EO officials told us they disagreed with the Taxpayer Advocate's findings.

EO officials said that they conducted a risk assessment of their new streamlined application process. As part of the risk assessment, IRS identified several compliance-related risks, including the possibility of an increase in fraudulent applications and a consequent potential loss of revenue due to tax-deductible contributions to organizations that were not eligible for exemption. To address these compliance risks, the IRS risk assessment cited the need for a more robust back-end review of newly exempt organizations that had received their status through the streamlined process.
This would include increased use of quality control checks, audits, and other reviews, and an enhanced EO Examination process to identify ineligible organizations. EO managers intend to review a sample of organizations that used the Form 1023-EZ to apply for tax-exempt status to learn about the population of organizations applying for exemption using the new, shorter form, including their eligibility to use the form. This phase started in July 2014 (when the new form was introduced to the public) and occurs after an organization submits an application and before IRS makes a determination on tax-exempt status. IRS plans to review a random sample of 3 percent—an estimated 1,260—of applications submitted using the new streamlined Form 1023-EZ. According to EO officials, this sample size was based on a reliability factor and was then adjusted based on staffing resource capacity. As part of determining eligibility to use the shorter form, the reviewers are supposed to request from the filers in the sample such items as a detailed description of past, present, and future activities, and revenues and expenses for the most recently completed year.

The EO Examinations function will also conduct post-determination (i.e., after tax-exempt status has been determined) compliance reviews of organizations that applied for and received their tax-exempt status through the streamlined application review, as well as organizations that applied for and received their tax-exempt status using the shorter Form 1023-EZ. The compliance reviews of the first group—organizations that received their tax-exempt status through the streamlined review process—will begin during fiscal year 2015. For this review, EO plans to select a random sample of exempt organizations reviewed under the streamlined process to provide information about the subsequent compliance characteristics of organizations that received their status in this way.
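The 3 percent pre-determination sample described above can be sketched as a simple random draw without replacement. This is an illustrative sketch only; the inventory size (42,000) and application IDs are assumptions, chosen so that 3 percent works out to the estimated 1,260 applications cited in the report.

```python
# Illustrative sketch of drawing a 3 percent random sample of Form 1023-EZ
# applications for pre-determination review. Inventory size and IDs are
# hypothetical; 42,000 is assumed so that 3 percent equals 1,260.
import random

applications = [f"APP-{i:05d}" for i in range(42000)]
rng = random.Random(0)  # seeded only to make this sketch reproducible
sample = rng.sample(applications, k=round(len(applications) * 0.03))
print(len(sample))  # 1260
```

Each sampled filer would then be asked for the additional items the report lists, such as a detailed description of activities and recent revenues and expenses.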
EO recently approved guidance for reviewing these returns, which lists potential areas of noncompliance, such as legislative or overseas activities, compensation issues, and unrelated business activity. According to Tax-Exempt and Government Entities Division and EO officials, post-determination compliance reviews of organizations that used the Form 1023-EZ to apply for tax-exempt status will begin in late fiscal year 2015 or early fiscal year 2016. For this review, EO plans to take a random sample of exempt organizations that filed the Form 1023-EZ to provide information about the subsequent compliance characteristics of organizations that filed the shorter form. EO anticipates using the same review procedures for the Form 1023-EZ exams as for the streamlined review process. However, officials also report that some modifications will be necessary due to the limited information on the EZ application itself, as well as the fact that eligible organizations would typically be filing the Form 990-N rather than the more detailed Form 990 or even Form 990-EZ. The limited amount of information that will be available about these organizations because of the shorter application form and information return raises questions about how IRS will identify noncompliance issues for this particular segment of charitable organizations.

The e-filing rate for tax-exempt organizations is significantly lower than for other taxpayers and organizations. In 2013, 38 percent of Forms 990, 990-EZ, and 990-PF were filed electronically, while in 2011, 64 percent of partnerships and 66 percent of S corporations filed electronically. The lower rate is due in large part to current law, which requires only very small and very large tax-exempt organizations to file electronically.
Larger tax-exempt organizations (those that file at least 250 returns during the calendar year) are required to file electronically, and smaller organizations (those that are excused from filing Form 990 or Form 990-EZ, generally because their gross receipts are normally less than $50,000 annually) must file an annual notice (Form 990-N) in electronic format. Medium-sized organizations—those too big to file the Form 990-N but not big enough to file 250 returns—are not required to file electronically. This lower rate means that there is less digitized data available for data mining and analytics and that IRS will have higher labor costs. IRS officials said that having more return information available electronically might improve examination selection. For instance, when IRS examination specialists suggested filters for an EO project to use in identifying potentially noncompliant issues, the lack of available electronic data prevented EO from using all of the filters. Further, IRS officials estimate that mandated electronic filing would save EO more than an estimated $1 million in labor costs over a three-year period, although a complete study has not been performed.

In the fiscal year 2015 budget proposal, the administration proposed that Congress expand e-filing for tax-exempt organizations. Expanded e-filing may result in more accurate and complete data becoming available in a timelier manner, which, in turn, would allow IRS to more easily identify areas of noncompliance. In our 2014 report on partnerships and S corporations, we recommended that Congress consider expanding the mandate for partnerships and corporations to electronically file their tax returns in order to cover a greater share of filed returns. We concluded that increased e-filing would increase the amount of digitized data available to IRS, which examiners could then use to identify which partnership and S corporation tax returns could be most productive to examine.
Since many charitable organizations are organized as not-for-profit corporations, the same mandate could also cover 501(c)(3) charitable organizations. Any option for an e-filing mandate would impose some burden on some tax return filers if, for example, they do not already possess the e-filing technology and need to get access to it. However, expanded e-filing could also reduce taxpayer burden, since greater accuracy would reduce false positives, allowing IRS to identify “bad actors” rather than organizations that made mistakes on their returns. In addition, the burden could be mitigated: the 2015 budget proposal would allow transition relief for up to three additional years after the date of enactment to begin electronic filing, for smaller organizations and organizations for which electronic filing would be an undue hardship without additional transition time.

According to IRS, electronic filing also increases transparency for the tax-exempt community because more searchable data becomes publicly available faster than with paper-filed returns, which must first be converted to machine-readable format. Once publicly available, the Form 990 data may be used by donors to make more informed contribution decisions and by researchers, analysts, and entrepreneurs to understand the tax-exempt sector better and to create information tools and services to meet the needs of the sector. Having Form 990 series return data faster would also be useful to state and local regulators, charity watchdog groups, charitable beneficiaries, and the press. In addition, e-filing would allow IRS to process returns more quickly and at a lower cost than when paper returns are filed. Representatives from across the nonprofit and law enforcement communities with whom we spoke support this reform as a strategy for improving transparency and accountability.
To oversee charitable organizations, IRS collaborates with different federal and state entities, including DOJ and state charity regulators. IRS and DOJ officials identified no obstacles that prevent their collaboration in the enforcement of tax laws. IRS refers cases involving possible criminal matters to the DOJ Tax Division for investigation and possible prosecution. According to DOJ officials, the review and referral process is designed to maintain a separation between IRS and DOJ decision making to avoid the appearance, or the actuality, of abuse of executive power. As such, DOJ attorneys have little involvement with the IRS review and referral process. Once a case is referred to DOJ, prosecutors have access to the taxpayer information under IRC section 6103(h)(2). DOJ Tax Division attorneys said they are familiar with the information-sharing framework under this IRC provision, which allows prosecutors access to the information they need to build a case.

However, state regulators and other subject matter specialists said statutory requirements for safeguarding taxpayer information, and uncertainty about how these safeguards must be implemented, limit state regulators' ability to use relevant information shared by IRS. They add that this may reduce regulators' ability to build cases against charitable organizations engaged in fraudulent or other criminal activity.

Barriers to information sharing between IRS and state charity regulators have been a long-standing challenge. Before legislation passed in 2006, IRS was permitted to disclose to state charity regulators only information concerning final denials of applications for tax-exempt status, revocations of tax-exempt status, and final notices of deficiencies. In 2002 and 2005, we reported that this limited data sharing hampered state charity officials' efforts to identify charities that were defrauding the public or otherwise operating improperly.
At the time, state charity regulators told us that the lack of details impeded their efforts to track individuals who tried to re-establish similar, suspicious operations in other states. We recommended IRS propose revised legislation that would allow IRS to share more data—such as information about ongoing and closed examinations of charities—as a way to help IRS and the states better use limited resources and to allow the states to respond more quickly to noncompliance. In 2006, the PPA was enacted with provisions to facilitate information sharing between IRS and state charity regulators. The PPA expanded the types of information state charity regulators can receive to include sensitive, confidential information, such as revenue agents’ reports regarding proposed revocations and notices of deficiencies. IRS can now share information about certain proposed revocations and proposed denials before an administrative appeal is made and a final revocation or denial is issued. In addition to the information IRS is now allowed to share with state charity regulators, IRS also makes revocations publicly available: IRS lists revocations in the Internal Revenue Bulletin, although the reasons for revocations resulting from exams are not given or made public. While the PPA expanded the types of information IRS could share with state charity regulators, the law also placed safeguards on that information. For the first time, the PPA subjected state charity regulators to the same criminal penalty provisions of the Internal Revenue Code that apply to all other recipients of tax information, making it a criminal offense for any state official to willfully disclose information shared by IRS under section 6104(c) in a manner unauthorized by the Internal Revenue Code. 
To have access to the increased types of information now available, state charity regulators must sign a disclosure agreement with IRS in which they agree to certain safeguarding procedures for receiving and handling taxpayer data. State charity regulators and other subject matter specialists we spoke with believe these safeguard requirements are unclear and difficult to implement. Although these requirements are the same as those that IRS and any state tax agencies receiving federal tax data must follow, most states do not have the resources, capacity, or infrastructure within their charitable oversight function to fulfill the requirements, according to subject matter experts with whom we spoke. For example, all disclosures provided by IRS must be reviewed by a state charity official, logged to record the receipt of the information, and stored behind at least two secured barriers, such as locked doors or cabinets. State charity regulators are also prohibited from entering the data shared by IRS into a word processing program on a networked computer unless lengthy security requirements are met. Since the passage of the PPA, charity regulators in three states—California, New York, and Hawaii—have signed a memorandum of understanding (MOU) with IRS to share information. However, despite the MOU, these state charity regulators still report challenges in receiving and storing data from IRS. A lack of clarity surrounding how they can use the data from IRS to build their own cases, and the criminal penalties attached to improper disclosure of the data, have prevented state charity regulators from incorporating IRS data into their investigations. For example, state charity regulators said that when they learn IRS is examining a charity located in their state for violating federal tax law, they must first contact the charity and request the documents it has already turned over to IRS. 
However, if the charity refuses or denies it has the requested documents, the state charity regulators do not believe they can enforce their request by citing the information provided to them by IRS. As a result, they seek to independently verify the information from IRS (to the extent it is available) through other sources (such as the internet or the state registration database) before contacting the charity. Also, according to subject matter experts, IRS’s interpretation of these rules has been inconsistent. As a result, regulators are unsure whether they are in violation of the safeguard requirements. According to state charity regulators, other states have not entered into information-sharing agreements with IRS because they view the safeguard requirements as overly burdensome, given their limited resources. NASCO and National Association of Attorneys General (NAAG) representatives credit IRS with trying to educate state regulators about the PPA requirements. IRS staff have given presentations at joint NAAG and NASCO conferences. According to the ACT 2013 report on exempt organizations, IRS’s designated EO federal/state liaison has worked to educate participating state agencies in the mechanics of PPA participation, has assisted with the safeguard procedures, and has linked state officials with appropriate IRS officials conversant in necessary information technology and security issues. The report also credited IRS with initiating discussions with state charity regulators through a task force to develop a pragmatic approach for taking advantage of what is presently available under the PPA. IRS officials are also working on a new MOU that will clarify how state charity regulators can communicate to charities that they have received information about them from IRS. 
In addition, to facilitate information sharing between IRS and state charity regulators, Treasury officials are in the process of reviewing the PPA and Congress’s intent in drafting this legislation to determine whether additional flexibilities exist. Despite these education and outreach efforts, state charity regulators are still unclear about how they are permitted to use IRS information to identify organizations violating state law and to build cases against them. The types of information the PPA makes available would bolster state oversight efforts in a variety of ways. For example, according to the ACT report, state receipt of the names of organizations applying for exempt status would help states monitor startup entities that cease operations before IRS responds to their Form 1023 applications. EO officials told us that efforts are underway to reduce processing times, which should address this particular concern. Also according to the ACT report, state receipt of information about tax-exempt organizations receiving a proposed revocation of exemption would raise immediate questions about whether those organizations’ assets are being properly applied to charitable purposes as required by state law. The challenges to information sharing between IRS and state charity regulators are related to uncertainty about what is permissible under the PPA. With limited access to IRS information, state charity regulators do not always know why a charity has had its tax-exempt status revoked, whether it is under examination, or whether it has been fined but maintains its tax-exempt status. This lack of information impedes state charity regulators’ ability to identify and prosecute bad actors for violating state laws and hinders states’ ability to warn donors about scam charities. 
EO oversight of charitable organizations helps ensure that these entities abide by the purposes that justify their tax exemption and protects the sector from potential abuses and loss of confidence by the donor community. Over the past several years, reviewers have found that various units within the EO division could not fully assess or communicate their effectiveness because they lacked meaningful performance measures. EO managers have taken actions to address this deficiency by adding performance measures to help them track their inventory of applications, referrals, and exam cases and to provide a level of quality assurance. EO has also developed its data analytics capacity to assist in selecting organizations with greater audit potential for examination. It has used these techniques and other information sources to select returns for examination and, in some cases, has used the results of these exams to review certain tax issues more systematically. However, these actions have not addressed measuring the outcomes of EO activities (such as the effect of EO’s actions on the compliance rate) for the charitable sector as a whole, for specific segments of the sector (such as universities and hospitals), or for particular aspects of noncompliance (such as personal inurement or political activity). EO does not have the compliance measures or the quantitative, results-oriented compliance goals needed to assess its effect on the compliance of charitable organizations in any of these areas. Because EO does not measure the current level of compliance, it cannot set goals for increasing compliance or know to what extent its actions are affecting compliance. The Exempt Organizations Business Division is grappling with several other challenges that complicate oversight efforts. The e-filing rate for tax-exempt organizations is significantly lower than for other taxpayers. 
This lower rate means that there is less digitized data available for data mining and analytics and higher labor costs for IRS. Expanded e-filing may result in more accurate and complete data becoming available in a timelier manner; in turn, this would allow IRS to more easily identify areas of noncompliance. This legislative reform would also be useful to state and local regulators, charity watchdog groups, charitable beneficiaries, and the press as a strategy for improving transparency and accountability. A lack of clarity about how state charity regulators can use IRS data to build cases against suspect charitable organizations impedes regulators’ ability to leverage IRS’s examination work. IRS and Treasury officials are reviewing the statutory protections of taxpayer data and whether there is flexibility in how state regulators must protect and can use federal tax data. IRS officials are also working on a new MOU that will clarify how state charity regulators can communicate to charities about information they have received from IRS. Once completed, these actions have the potential to enable greater collaboration between IRS and state charity regulators. IRS budget and staffing levels have declined significantly over recent years. Officials and stakeholders we spoke to noted that IRS resources dedicated to EO oversight have not kept pace with growth in the sector and with the complexity of issues related to tax-exempt organizations. IRS faces difficult decisions about how to allocate resources dedicated to tax-exempt sector oversight and about which specific compliance issues to audit. IRS has already made trade-offs—such as examining fewer organizations and streamlining the application process for organizations seeking tax-exempt status—which may lead to some efficiencies but will also result in less available information about these organizations. 
If IRS does not collect and use performance data to make sound decisions—especially given the likelihood of constrained budgets for the foreseeable future—the agency risks missing noncompliance, burdening tax-exempt organizations, and wasting scarce resources. Furthermore, it will be difficult for IRS to communicate agency progress to Congress and the public and thus be held accountable. Congress should consider expanding the mandate for 501(c)(3) organizations to electronically file their tax returns to cover a greater share of filed returns. To improve oversight of charitable organizations, we recommend that the Commissioner of Internal Revenue take the following steps: 1. Direct EO to develop quantitative, results-oriented compliance goals and additional performance measures and indicators that can be used to assess the impact of exams and other enforcement activities on compliance. 2. Continue to work with Treasury officials to review the flexibility afforded under the PPA consistent with statutory protections of taxpayer data, clarify what flexibility state regulators have in how they protect and use federal tax data, make modifications to guidance, policies, or regulations as warranted, and clearly communicate this information to state charity regulators. We sent a draft of this report to the Commissioner of Internal Revenue and the Assistant Attorney General for Administration, Department of Justice, for comment. DOJ had no comments on our report. We received written comments from IRS’s Deputy Commissioner for Services and Enforcement on December 4, 2014 (for the full text of the comments, see appendix IV). 
In its comments, IRS concurred with our recommendations and described ongoing and planned steps to 1) improve the application process for organizations seeking tax-exempt status and reduce the backlog of applications that had accrued in recent years, 2) refine its strategy and approach to better determine the effect of enforcement actions on compliance by tax-exempt organizations, including charitable organizations, and 3) improve the efficiency with which taxpayer information may be shared with state charity regulators through education efforts and outreach. IRS also noted in its written comments that IRS’s National Research Program (NRP) may not be well suited for the tax-exempt sector, given the diversity that exists across the sector in characteristics and compliance issues. Although we made no specific recommendation that EO be part of an NRP study, we note that the NRP has helped other IRS divisions determine compliance baselines and rates for types of taxpayers, such as corporations, where considerable diversity exists. While an NRP study could be a source for baseline data, we acknowledge that because of the high cost of such a study, it may not be practical at this time. Whatever approach to measuring compliance EO adopts, it should be consistent with our recommendation that EO develop quantitative, results-oriented compliance goals and additional performance measures that can be used to assess the impact of its activities on compliance. We also received technical comments from IRS, which we incorporated into the final report where appropriate. We plan to send copies of this report to the Secretary of the Treasury, the Commissioner of Internal Revenue, the Assistant Attorney General for Administration, DOJ, and other interested parties. The report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact us at (202) 512-9110 or mctiguej@gao.gov. 
Contact points for our offices of Congressional Relations and Public Affairs are on the last page of this report. GAO staff members who made major contributions to this report are listed in appendix V. This report (1) describes what is known about the number, type, size, and other characteristics of 501(c)(3) charitable organizations; (2) describes IRS oversight activities for charitable organizations; (3) determines how IRS assesses its oversight efforts of charitable organizations to ensure they are meeting their charitable purposes; and (4) determines how IRS collaborates with state charity regulators and U.S. Attorneys to identify and prosecute organizations suspected of engaging in fraudulent or other criminal activity. To address the first objective, we reviewed Internal Revenue Service (IRS) forms and publications. We also analyzed data from IRS’s Statistics of Income (SOI) files for Form 990, Return of Organization Exempt From Income Tax, and Form 990-EZ, Short Form Return of Organization Exempt From Income Tax, for tax year 2011 (the most recent year available). Although private foundations are considered charitable organizations, we did not analyze data from Form 990-PF, Return of Private Foundation, or Form 990-N, e-Postcard, because the data for these organizations are less complete and have other limitations. These limitations included total expenses not being broken out into program service, management and general, and fundraising expenses, and private foundations not being identified by mission category. In addition, our data exclude certain religious organizations, which qualify as 501(c)(3) organizations but are not required to file a return. We also did not report data from prior years for Form 990 and 990-EZ filers because the reporting threshold for organizations filing these returns varied from year to year, which makes year-to-year comparisons of the data difficult. These SOI samples were based on returns as filed and did not reflect IRS audit results. 
Using SOI sampling weights, we estimated sampling errors for our estimates, which are reported in appendix III. Statements of difference are statistically significant. SOI is a data set widely used for research purposes. SOI data on tax-exempt organizations are available to the public on the IRS website (http://www.irs.gov). IRS performs a number of quality control steps to verify the internal consistency of SOI sample data. For example, it performs computerized tests to verify the relationships between values on the returns selected as part of the SOI sample, and manually edits data items to correct for problems, such as missing items. We conducted several reliability tests to ensure that the data excerpts we used for this report were complete and accurate. For example, we electronically tested the data and used published data as a comparison to ensure that the data set was complete. To ensure accuracy, we reviewed related documentation and electronically tested for obvious errors. We concluded that the data were sufficiently reliable for the purposes of this report. For the second objective, we reviewed IRS documentation and interviewed Exempt Organization (EO) officials on IRS oversight activities, such as the examination and revocation of charitable organizations. We also obtained data from IRS’s Return Inventory Classification System (RICS) on examinations and results for fiscal years 2011 through 2013. Based on our review of documentation and interviews, we determined that this data was reliable for the purposes of this report. We also obtained data from IRS’s Referral database on referrals received on charitable organizations and all tax-exempt organizations for fiscal years 2011 through 2013. Although we received referral data on charitable organizations, we did not use the data because we found it was not reliable for the purposes of this report. 
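The weighted-estimation approach used with the SOI samples above can be sketched as follows. This is a minimal illustration with made-up strata, weights, and values (not actual SOI data), and the standard-error calculation is a deliberately crude simplification; SOI's actual sampling errors are computed from the full survey design.

```python
# Minimal sketch of estimating a population total from a weighted sample,
# in the spirit of the SOI approach described above. Weights and values
# are hypothetical; real SOI error estimation accounts for the stratified
# design and is more involved than this sketch.
import math

# (sampling weight, sampled value) pairs: each sampled return "stands in"
# for `weight` returns in the filing population.
sample = [(1.0, 500.0), (1.0, 450.0),                 # large filers, sampled with certainty
          (10.0, 20.0), (10.0, 25.0), (10.0, 15.0)]   # small filers, sampled 1-in-10

# Weighted estimate of the population total (Horvitz-Thompson style).
estimated_total = sum(w * y for w, y in sample)

# Crude standard-error sketch that treats the weighted contributions as
# independent draws; a 95 percent interval is roughly +/- 1.96 standard errors.
n = len(sample)
mean_contrib = estimated_total / n
var = sum((w * y - mean_contrib) ** 2 for w, y in sample) / (n - 1)
standard_error = math.sqrt(n * var)  # SE of the estimated total
ci_low = estimated_total - 1.96 * standard_error
ci_high = estimated_total + 1.96 * standard_error
```

Under this simplified design, the two certainty-sampled large filers contribute their own values, while each 1-in-10 sampled small filer represents ten similar filers, which is why its value is multiplied by 10 in the total.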
According to EO officials, reviewers were encouraged but not required to enter the organization’s subsection category (i.e., that it was a 501(c)(3) charitable organization) into the referrals database. The EO officials said that if the referral was found to have “no issue” by the reviewer, it may not have been assigned a subcategory because the subcategory was not required and would be viewed (in this case) as not useful. Therefore, data on the number of referrals received, the source of referrals, and the type of referral violations could have been undercounted had we used the data. Although we did not use charitable organization referral data, we did use referral data on all tax-exempt organizations, as we found that data was reliable for the purposes of this report. For the third objective, we reviewed relevant strategic and performance documents such as annual reports and work plans, quarterly performance reports, and project summary reports. We interviewed IRS planning officials and division managers to determine the extent to which managers overseeing the tax-exempt sector set performance goals; develop measures to monitor their progress toward meeting goals; and use data to identify challenges and their causes and develop strategies to address them. We reviewed past recommendations made by the Treasury Inspector General for Tax Administration (TIGTA) and the National Taxpayer Advocate related to performance management and interviewed IRS managers about how they addressed the issues discussed in those reports. For criteria, we compared IRS information on performance measures and decision making to Standards for Internal Control in the Federal Government and federal guidance on performance management. We also applied the criteria concerning the administration, compliance burden, and transparency that characterize a good tax system, as developed in our guide for evaluating tax reform proposals. 
We also reviewed IRS plans to streamline the application process through the introduction of Form 1023-EZ. We interviewed EO officials on how they plan to assess the impact of the new streamlined process on oversight efforts and reviewed available evaluation plans. We applied criteria from our 2012 guide on designing evaluations. To determine how IRS collaborates with state charity regulators and U.S. Attorneys, we reviewed various IRS documents, such as policy manuals, guidance, and memoranda of understanding. We also interviewed officials from IRS, the U.S. Department of Justice (DOJ), and the National Association of State Charity Officials. For criteria, we identified different approaches for sharing information and collaboration based on our audit findings, as well as our past recommendations and recommendations made by the Advisory Committee on Tax Exempt and Government Entities. To provide additional context for all four objectives, we interviewed IRS officials, DOJ officials, state charity regulators, subject matter specialists, and stakeholder groups representing different types of exempt organizations and private watchdog organizations that oversee charities about the adequacy of IRS oversight of charitable organizations, the challenges IRS faces in providing effective oversight, and strategies to address those challenges. The federal tax code provides a variety of tax benefits to organizations often referred to as “tax exempt.” The exact nature of those benefits varies depending on the nature of the organization. Section 501 provides an exemption from federal income tax for the broadest range of organizations. In addition to section 501, various other scattered provisions give a full or partial tax exemption to certain specific types of entities and income. This appendix focuses on organizations qualifying for a tax exemption under section 501, which will be referred to as tax-exempt organizations. 
Within section 501, there is a division between charitable organizations, also known as 501(c)(3) organizations (after the subsection in which they are defined), and all other organizations qualifying for an exemption under section 501. The organizations that qualify for an exemption under section 501 but are not charitable organizations have been referred to as mutual benefit organizations or non-charitable nonprofits. Charitable organizations are further divided between those that are private foundations and all other charitable organizations, and private foundations are divided between operating and non-operating foundations. Each of these types of tax-exempt organization—(1) mutual benefit organization, (2) charitable organization, (3) operating private foundation, and (4) non-operating private foundation—is subject to different requirements and receives different tax benefits. Tax-exempt organizations of all types are prohibited from engaging in certain transactions with the creator of an organization, a person who made a substantial contribution to the organization, a member of the family of the creator or substantial contributor, or a corporation controlled by a creator or substantial contributor. Additionally, an organization operated for the primary purpose of carrying on a trade or business for profit is not exempt merely because all of its profits are payable to an exempt organization. These restrictions apply to all types of tax-exempt organizations. Certain types of mutual benefit organizations and all charitable organizations are also subject to the restriction that no part of their net earnings may inure to the benefit of any private shareholder or individual. Additionally, a tax is imposed on income from any trade or business unrelated to the exercise or performance of the organization’s exempt purpose. 
“Corporations, and any community chest, fund, or foundation, organized and operated exclusively for religious, charitable, scientific, testing for public safety, literary, or educational purposes, or to foster national or international amateur sports competition (but only if no part of its activities involve the provision of athletic facilities or equipment), or for the prevention of cruelty to children or animals, no part of the net earnings of which inures to the benefit of any private shareholder or individual, no substantial part of the activities of which is carrying on propaganda, or otherwise attempting, to influence legislation (except as otherwise provided), and which does not participate in, or intervene in (including the publishing or distributing of statements), any political campaign on behalf of (or in opposition to) any candidate for public office.” As stated above, no part of a charitable organization’s net earnings can inure to the benefit of any private shareholder or individual. Additionally, they must satisfy two tests: the organizational test and the operational test. To pass the organizational test, an entity must be organized exclusively for one or more exempt purposes, meaning that the governing instrument (such as a trust instrument, articles of incorporation, or association charter) must limit the purposes of the organization to one or more exempt purposes and must not expressly empower the organization to engage, except to an insubstantial degree, in activities that are not in furtherance of an exempt purpose. The operational test requires that organizations engage primarily in activities that accomplish one or more of the exempt purposes listed in statute. In general, new organizations seeking charitable organization status must apply for a determination of exempt status. 
Churches, their integrated auxiliaries, and conventions or associations of churches, as well as organizations that have gross receipts of $5,000 or less and are not private foundations, do not need to apply. Organizations that meet all the requirements to be charitable organizations receive tax benefits beyond those available to mutual benefit organizations. Contributions of cash or property to charitable organizations are deductible by individuals and corporations for federal income tax purposes up to certain percentages of adjusted gross income. Such contributions are also deductible for estate and gift tax purposes. Charitable organizations are exempt from the federal unemployment tax and the federal gambling tax on lotteries. Additionally, charitable organizations are exempt from certain requirements related to establishing and maintaining retirement plans for employees. Within the category of charitable organizations are private foundations. When applying for charitable organization status, an applicant is presumed to be a private foundation unless it can demonstrate that it is not. A charitable organization is not a private foundation if it is a church or a convention or association of churches, an educational institution, a medical or hospital care provider, a medical education or research provider, or a governmental unit. A charitable organization is also not a private foundation if it is broadly publicly supported, as defined in section 509, or is a supporting organization of a broadly publicly supported organization. Organizations organized and operated exclusively for testing for public safety are also not private foundations. Charitable organizations not meeting one of these definitions are private foundations. Private foundations are subject to certain tax consequences that do not apply to other charitable organizations. A 2 percent excise tax is imposed on the net investment income of private foundations. 
A private foundation is also subject to additional taxes if it engages in self-dealing, has excess business holdings, makes investments that jeopardize its charitable purpose, or makes certain taxable expenditures. Finally, private foundations are subject to additional reporting requirements. Additional taxes and restrictions are imposed on foundations that do not meet the definition of an operating foundation. A private foundation is an operating foundation if it uses at least 85 percent of its income directly for the active conduct of charitable activities rather than for grantmaking and meets an assets test, an endowment test, or a support test. Private foundations that are not operating foundations are generally subject to a tax of 30 percent of the amount of undistributed income. The percentage limits on the deductibility of contributions to non-operating private foundations are lower than for other charitable organizations. Aside from charitable organizations, section 501 lists 28 other types of nonprofits, often referred to as mutual benefit organizations, which include unions, civic leagues, chambers of commerce, credit unions, and veterans organizations, among many others. For a complete list of the organizations listed in section 501, including charitable organizations, see below. Qualified pension, profit-sharing, and stock bonus plans are also exempt under section 501. Unlike contributions to charitable organizations, gifts to these mutual benefit organizations are not deductible. Mutual benefit organizations are not generally exempt from the federal unemployment tax or the gambling tax and do not have the additional flexibility in establishing employee retirement plans that is allowed charitable organizations. 
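The private foundation rules described above can be illustrated with a minimal sketch. The figures and the foundation itself are hypothetical, the assets/endowment/support tests are simplified to a single boolean flag, and the rates are those stated in the text (the 85 percent operating-foundation income test, the 2 percent excise tax on net investment income, and the general 30 percent tax on a non-operating foundation's undistributed income); actual application of these rules involves many details not captured here.

```python
# Illustrative sketch of the private foundation rules described above.
# All figures are hypothetical; the asset, endowment, and support tests
# are reduced to a single boolean flag for simplicity.

def is_operating_foundation(active_conduct_spending, income, meets_alternate_test):
    """An operating foundation uses at least 85 percent of its income
    directly for the active conduct of charitable activities (rather than
    grantmaking) and meets an assets, endowment, or support test."""
    return active_conduct_spending >= 0.85 * income and meets_alternate_test

def net_investment_excise_tax(net_investment_income):
    """2 percent excise tax on a private foundation's net investment income."""
    return 0.02 * net_investment_income

def undistributed_income_tax(undistributed_income, operating):
    """Non-operating foundations are generally subject to a 30 percent tax
    on undistributed income; operating foundations are not."""
    return 0.0 if operating else 0.30 * undistributed_income

# Hypothetical grantmaking foundation: $1,000,000 of income, only $100,000
# spent directly on charitable activities, so it fails the 85 percent test
# and is treated as non-operating.
operating = is_operating_foundation(100_000, 1_000_000, True)
excise = net_investment_excise_tax(250_000)      # on net investment income
extra = undistributed_income_tax(40_000, operating)
```

In this hypothetical, the foundation owes the 2 percent excise tax on its net investment income and, because it is non-operating, the 30 percent tax on its undistributed income as well.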
We are confident the true estimates would be within these percentage points in 95 out of every 100 samples. The tables are for charitable organizations that filed Forms 990 or 990-EZ. Private foundations that file Form 990-PF, small charitable organizations that file Form 990-N, and charitable organizations that do not file returns have been excluded from this analysis. In addition to the contact named above, Kevin Daly, Assistant Director, Jeff Arkin, Sara Daleski, Jillian E. Feirson, Laurie C. King, Lawrence M. Korb, Donna L. Miller, Ed Nannenhorn, Jessica Nierenberg, Karen O’Conor, Dae Park, Amy Radovich, Cynthia Saunders, Albert Sim, Stewart Small, Andrew J. Stephens, Lindsay W. Swenson, Meredith Trauner Moles, Sonya Vartivarian, James R. White, and John Zombro made major contributions to this report.
IRS oversight of charitable organizations helps to ensure they abide by the purposes that justify their tax exemption and protects the sector from potential abuses and loss of confidence by the donor community. In recent years, reductions in IRS's budget have raised concerns about the adequacy of IRS oversight. GAO was asked to review IRS oversight of charitable organizations. In this report, GAO (1) describes the charitable organization sector, (2) describes IRS oversight activities, (3) determines how IRS assesses its oversight efforts, and (4) determines how IRS collaborates with state charity regulators and U.S. Attorneys to identify and prosecute organizations suspected of engaging in fraudulent (or other criminal) activity. GAO reviewed and analyzed IRS data, strategic planning and performance documents, and documented improvement efforts. We also interviewed IRS and Department of Justice officials, state charity regulators, and subject matter specialists. GAO compared IRS's practices to federal guidance on performance management. Charitable organizations play a major role in our economy and provide critical services and resources to families and individuals in need. Although charitable organizations vary considerably in size and purpose, in 2011 the largest number of organizations was in the human services sector, providing services such as employment and housing assistance. The highest concentration of assets was in the health and education sectors, which include hospitals and universities. In addition to being concentrated in a few sectors, a large proportion of all assets were controlled by a relatively small number of charitable organizations—less than 3 percent hold more than 80 percent of the assets. 
Over the past several years, as the Internal Revenue Service (IRS) budget has declined, the number of full-time equivalents (FTEs) within its Exempt Organizations (EO) division has fallen, leading to a steady decrease in the number of charitable organizations examined. In 2011, the examination rate was 0.81 percent; in 2013, it fell to 0.71 percent. This rate is lower than the exam rate for other types of taxpayers, such as individuals (1.0 percent) and corporations (1.4 percent). EO is grappling with several challenges that complicate oversight efforts. While EO has some compliance information, such as how often exams result in a change in tax-exempt status, it does not have quantitative measures of compliance for the charitable sector as a whole, for specific segments of the sector (such as universities and hospitals), or for particular aspects of noncompliance (such as personal inurement or political activity). Because EO does not have these measures and does not know the current level of compliance, it cannot set quantitative, results-oriented goals for increasing compliance or assess to what extent its actions are affecting compliance. Statutory requirements for safeguarding taxpayer data limit both IRS's ability to share data and state regulators' ability to use it. A lack of clarity about how state regulators are allowed to use IRS data to build cases against suspect charitable organizations further impedes regulators' ability to leverage IRS's examination work. The e-filing rate for tax-exempt organizations is significantly lower than for other taxpayers. This lower rate means there is less digitized data available for data analytics and higher labor costs for IRS. Expanded e-filing may result in more accurate and complete data becoming available in a more timely manner, which, in turn, would allow IRS to more easily identify areas of noncompliance. 
GAO recommends that IRS (1) develop compliance goals and additional performance measures that can be used to assess the impact of enforcement activities on compliance and (2) clearly communicate with state charity regulators how they are allowed to use IRS information related to examinations of charitable organizations. GAO also recommends that Congress consider expanding the mandate for 501(c)(3) organizations to electronically file their tax returns to cover a greater share of filed returns. In written comments, IRS agreed with GAO's recommendations.
Now that the Census Bureau has congressional approval to begin the full ACS, data collection will begin in November 2004. The ACS test survey of a sample of 800,000 housing units, which has been conducted since 2000, will end in December 2004. The Bureau has been using this survey, known as the ACS Supplementary Survey, to test procedures and to produce annual data for geographic areas with populations of 250,000 or more. As one part of the test program, the supplementary survey data for 2000 have been compared with corresponding data from the 2000 Census long form to evaluate the quality of the ACS data and to provide users with information to make the transition from the long-form data to the full ACS data. According to the plan the Congress approved, the first annual ACS data for geographic areas with populations larger than 65,000 will be published beginning in 2006 with data for 2005; 3-year averages for geographic areas with populations between 20,000 and 65,000 will begin in 2008; and 5-year averages for geographic areas with populations smaller than 20,000, including Census tracts and block groups, will begin in 2010. The 5-year averages for 2008–12 will replace the 2010 Decennial Census long form for small geographic areas; they will be published in 2013 and will incorporate population and housing characteristics data from the 2010 Decennial Census short form. In replacing the long form, the ACS will provide the same long-form data items at the same level of geographic area detail but in a more timely way. Whereas the long form provided small geographic detail once a decade, the ACS will provide annual estimates for large geographic areas and estimates for smaller areas in terms of 3-year or 5-year averages; the 5-year averages will provide data at the same geographic area level as the long form. 
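The publication schedule above implies a simple decision rule for which kind of estimate an area receives. The sketch below is an illustrative simplification: the function names and figures are our own, and actual ACS estimation pools weighted monthly responses rather than averaging annual point estimates.

```python
# Illustrative sketch of the ACS publication rule: the number of years of
# data pooled for an area depends on its population size. This is a
# simplification, not the Census Bureau's actual estimation methodology.

def averaging_period(population: int) -> int:
    """Years of ACS data pooled for an area of the given population."""
    if population > 65_000:
        return 1  # single-year estimates, first published in 2006
    elif population >= 20_000:
        return 3  # 3-year averages, first published in 2008
    else:
        return 5  # 5-year averages, first published in 2010

def multiyear_average(annual_estimates: list[float], population: int) -> float:
    """Average the most recent N years of estimates, N set by population size."""
    n = averaging_period(population)
    if len(annual_estimates) < n:
        raise ValueError(f"need at least {n} years of data")
    return sum(annual_estimates[-n:]) / n

# Hypothetical annual estimates for 2008-12 for an area of 12,000 people:
print(multiyear_average([14.0, 14.5, 15.0, 15.5, 16.0], population=12_000))  # 15.0
```

The rule makes plain why the 5-year averages for 2008-12 are the earliest possible replacement for the 2010 long form: five years of collection must accumulate before the smallest areas can be published.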
According to the Census Bureau, these 5-year averages will be about as accurate as the long-form data; the annual and 3-year averages will be significantly less reliable than the long-form data but more reliable than existing annual household surveys the Census Bureau conducts. In the remainder of the Background section of this report, we briefly describe the major differences between the ACS and the Decennial Census long form. We also discuss the Census Bureau’s outreach program, designed to involve stakeholders and users in shaping the ACS. Appendix III provides additional background information on the evolution of the ACS plan, appendix IV on the ACS testing and measurement program. Appendix II describes recent NAS findings on Continuous Measurement (CM) and the ACS. The 2000 Census long form used a decennial sample of about 19 million housing units; the full ACS will use an annual sample of 3 million housing units. In order to provide reliable estimates for geographic areas with populations of 65,000 or less, monthly ACS responses will be cumulated over several years—3 years for places with populations of 20,000 to 65,000 and 5 years for places with populations smaller than 20,000. Because of the statistical properties of these averages and users’ unfamiliarity with them, the Census Bureau has long recognized the need to provide guidance on such topics as the reliability of the averages for areas with rapidly changing population and the use of multiple estimates for states and other, larger geographic areas. For the 2000 Decennial Census, the ACS test programs, and federal household surveys, including the Current Population Survey (CPS), seasonal residents are recorded in a geographic area according to a concept of usual residence. 
As we noted above, under this concept, people who spend their winter in Florida and the rest of the year in New Hampshire, for example, are recorded as residents of New Hampshire; college students living away from home in dormitories are recorded as residents of the college. For the full ACS, the Census Bureau has announced its decision to change the concept to current residence. According to the Census Bureau, although each concept requires that a person have only one residence at any point in time, current residence recognizes that the place of residence does not have to be the same throughout a year, allowing ACS data to more closely reflect the actual characteristics of each area. The Census Bureau plans to use current residence because the ACS is conducted every month and produces annual averages rather than point-in-time estimates, unlike the Decennial Census. Current residence is uniquely suited to the ACS, because it continuously collects information from independent monthly samples throughout all months of all years. Because the ACS is designed to produce a continuous measure of the characteristics of states, counties, and other places every year, the new residence rules were needed for seasonal and migratory individuals. The underlying population and housing characteristics data for the 2000 Census long form were for April 1, 2000. For the ACS test program, the underlying population and housing characteristics varied. For all years except 2000, they were for July 1; for 2000, they were for April 1. For the full ACS, because the data are collected monthly, the reference period will be the average for the year, and the Census Bureau will assume this average is equivalent to data for July 1. The ACS will use population characteristics (age, sex, race, and ethnicity) and housing characteristics (occupied and vacant units) derived from an independent source and not from the results collected in the survey. 
Using independent controls for these characteristics is standard practice to correct sample survey results for the effects of nonresponse and undercoverage. Population and housing characteristics from the 2000 Census short form were used as independent controls for the 2000 Census long form, down to the tract level. For the ACS supplementary surveys, independent controls were from ICPE, which uses Decennial Census short-form data as benchmarks and administrative record data to interpolate between and extrapolate from the census benchmarks. ICPE develops and disseminates annual estimates of the total population and the distribution by age, sex, race, and Hispanic origin for the nation, states, counties, and functioning government units. ICPE provides annual estimates of population and housing characteristics at the county level, and for some subcounty levels, as of July 1, using the usual residence concept for seasonal residents. According to current Census Bureau plans, annual estimates of dollar-denominated data items, such as income, rent, and housing-related expenses, will be presented after adjustment for inflation in order to facilitate comparisons over time. As in the ACS test programs, only annual estimates with this adjustment will be presented. The Census Bureau also has decided to continue to adjust annual data collected each month in the ACS to a calendar year basis. It will be using the Consumer Price Index (CPI) for the annual and monthly adjustments for all geographic areas. The long form and ACS will also differ in how operations are conducted, such as nonresponse follow-up and data capture. For the 2000 Census long form, nonresponse follow-up was conducted for all nonrespondents. For the ACS supplementary surveys and for the full ACS, nonresponse follow-up will be conducted for a sample of one-third of all nonrespondents. 
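The standard practice of correcting survey results to independent controls is typically carried out as a ratio (post-stratification) adjustment: weights are scaled so that weighted survey totals match the independent control totals within each demographic cell. The sketch below is a generic illustration of that practice with hypothetical cells, weights, and control totals; it is not the Census Bureau's actual weighting system.

```python
# Generic ratio (post-stratification) adjustment: scale survey weights so
# that weighted totals match independent control totals within each cell
# (e.g., age-sex groups). All data here are hypothetical.

def ratio_adjust(weights, cells, controls):
    """weights: initial weight per respondent;
    cells: cell label per respondent;
    controls: independent population total per cell."""
    weighted_totals = {}
    for w, c in zip(weights, cells):
        weighted_totals[c] = weighted_totals.get(c, 0.0) + w
    factors = {c: controls[c] / t for c, t in weighted_totals.items()}
    return [w * factors[c] for w, c in zip(weights, cells)]

weights = [100.0, 100.0, 100.0, 100.0]
cells = ["under65", "under65", "65plus", "65plus"]
controls = {"under65": 250.0, "65plus": 180.0}  # hypothetical ICPE-style controls
adjusted = ratio_adjust(weights, cells, controls)
print(adjusted)  # [125.0, 125.0, 90.0, 90.0]
```

The adjustment inflates weights in cells the survey undercovered and deflates them in cells it overcovered, which is why the quality of the independent controls directly drives the reliability of the published estimates.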
For the 2000 Census long form, all data items were entered using an automated optical character recognition procedure; data from the ACS will be manually keyed. The ACS supplementary surveys excluded persons living in group quarters. Group quarters—which include nursing homes, prisons, college dormitories, military barracks, institutions for juveniles, and emergency and transitional shelters for the homeless—accounted for roughly 2.8 percent of the population in 2000. The Census Bureau decided not to cover these persons in the supplementary surveys, to avoid duplication with the 2000 Census and because it lacked funding to cover them in subsequent years. Procedures for including persons living in group quarters in the ACS beginning with 2005 are discussed in the Census Bureau's ACS Operations Plan, issued in March 2003. In addition, the Census Bureau has announced that it intends to continue testing procedures to improve the mailing list for group quarters to be used for the 2010 Decennial Census. The Census Bureau has long recognized the need to seek input from stakeholders and users in making decisions for all its programs. The Census Bureau sponsors technical reports that NAS prepares. (In appendix II, we summarize recent NAS reports on the ACS and related decennial censuses.) The Census Bureau has also held conferences on the ACS and has contracted with Westat Inc. to organize two conferences of experts on specific aspects of the ACS. Additionally, the Census Advisory Committees, which are Census Bureau–appointed advisory committees whose members represent professional associations such as the American Statistical Association (ASA) and the American Marketing Association, meet twice a year. The Census Bureau and other federal statistical agencies also participate in the quarterly meetings of the Council of Professional Associations on Federal Statistics, whose members include professional associations, businesses, research institutes, and others interested in federal statistics. 
To obtain input from other federal agencies, the Office of Management and Budget (OMB) established an interagency advisory committee for the ACS in 2000. The committee's major purpose was to coordinate the review of questions to be included in the ACS. Because of the committee's limited focus, the Census Bureau established the ACS Federal Agency Information Program in 2003, responding to a recommendation we made. This program is designed to assist each federal agency that has a current or potential use for ACS data to achieve a smooth transition to using the ACS. From its beginnings in the mid-1990s, the Census Bureau's development plan for the ACS was designed to ensure that the ACS would satisfactorily replace the Decennial Census long form as the major source of small geographic area data. In our review of the plan, we found that the Census Bureau, as well as key ACS stakeholders, had for many years identified the key issues that needed to be resolved if the ACS were to reach this goal. We have identified the following unresolved issues from our research (described in appendix I): the methodology to be used for deriving independent controls for population and housing characteristics with ACS definitions of place of residence and reference date, improvements needed to operational procedures, methods for valuation and presentation of dollar-denominated data items, comprehensive analysis of the comparability between new ACS data and corresponding data from the 2000 Census long form and the 2004 supplementary survey, and the provision of user guidance on multiyear averages. Despite the Census Bureau's early identification of issues critical to the successful replacement of the 2010 Decennial Census long form as the new source of small geographic area data, we found that its plans to resolve these issues have been only partially carried out. 
Furthermore, we found that despite recent changes to the ACS implementation schedule, the schedule is not fully synchronized with the Census Bureau's time schedule for implementing the testing program for the 2010 Decennial Census. Consequently, if these issues are not resolved in a timely manner, the Census Bureau's plan to replace the 2010 Decennial Census long form with the 2008–12 ACS averages for detailed geographic areas will be jeopardized. It is standard practice to use independent controls for population and housing characteristics to correct the results of sample surveys for the effects of nonresponse and undercoverage. For the 2000 Census long form, characteristics from the 2000 Census short form were used as independent controls down to the tract level. For the annual ACS supplementary surveys, characteristics from ICPE were used as the independent controls. Independent controls for the full ACS will require a new methodology: short-form data are available only once every 10 years, and the annual ICPE estimates do not provide data for the detailed geographic areas needed to prepare long-form detail and do not use the ACS residence concept or reference period. The new methodology is critical to the reliability of the ACS estimates for small geographic areas that ICPE does not cover and for areas that have large numbers of seasonal residents. Census Bureau staff have long recognized the need for the new methodology. For example, a 1995 paper by Love, Dalzell, and Alexander expressed concern about population controls and residence rules, as well as the need for consultation with users on these topics. They reported that the Census Bureau was planning to conduct research using data from the 1996 test sites to produce controls at the census tract and block group levels. They also noted that the Census Bureau would need to conduct research on the residence rule. A 2000 paper by Alexander and Wetrogan also discussed the issue of population controls. 
They reviewed possible methods for using ICPE to develop controls for the ACS and noted the need to consult with users on how to present information on the differences in ACS controls and ICPE in ACS publications. Key stakeholders, including experts on the ACS we interviewed in August 2003 (listed in appendix I), expressed similar concerns about the methodology. It appears that no progress had been made on a new methodology until the Census Bureau reported in October 2003 to its advisory committees on the status of a new methodology to derive controls. It announced that when full ACS collection starts in November 2004, (1) interim procedures would be used and (2) a final methodology would not be determined until after the necessary research was completed. The Census Bureau did not provide a date when the methodology would be incorporated. In our review of Census Bureau presentations about the new methodology (described in detail in appendix V), we found that it had no plans to maintain time-series consistency of the population and housing controls by routinely incorporating the regular revisions to ICPE estimates into the ACS. Without such revisions, there could be a significant lack of comparability in the ACS data being averaged, and the reliability of multiyear estimates would be reduced. For example, without such revisions, the 2008–12 averages that are to replace the 2010 Decennial Census long form would be based on controls extrapolated from the 2000 Census for 2008–09 and controls from the 2010 Census for 2010–12. In addition, time-series consistency in the annual ACS data would be reduced, especially in the data for 2010 and previous years. Census Bureau officials told us that they were not planning any such revisions, unless the inconsistencies between 2010 ICPE and 2010 Census characteristics were significant, even though there were significant inconsistencies between the 2000 ICPE estimates and the 2000 Census data, especially for small geographic areas. 
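The comparability concern can be illustrated with invented figures: if controls for 2008-09 are extrapolated from the 2000 Census while controls for 2010-12 are benchmarked to the 2010 Census, and the earlier years are never revised, the benchmark switch shows up as an artificial jump inside the very 5-year window being averaged. Every number below is hypothetical, chosen only to make the mechanism visible.

```python
# Hypothetical control totals for one small area. Without revision, the
# switch from 2000-based to 2010-based controls creates an artificial
# break between 2009 and 2010. All figures are invented for illustration.

controls_2000_based = {2008: 10_200, 2009: 10_300, 2010: 10_400}  # extrapolated
controls_2010_based = {2010: 11_000, 2011: 11_100, 2012: 11_200}  # benchmarked

unrevised = {2008: 10_200, 2009: 10_300, **controls_2010_based}
print(unrevised[2010] - unrevised[2009])  # artificial jump of 700

# Revising the 2008-09 controls toward the 2010 benchmark (here, a simple
# proportional scaling of the 2000-based path) removes most of the break:
scale = controls_2010_based[2010] / controls_2000_based[2010]
revised = {y: round(controls_2000_based[y] * scale) for y in (2008, 2009)}
print(controls_2010_based[2010] - revised[2009])  # much smaller jump
```

A routine revision policy of this general kind, announced in advance, is what would keep the years inside a 2008-12 average on a consistent footing.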
We found that regularly incorporating all revisions to ICPE into the ACS would improve ACS reliability and that planning would give users advance notice on the Census Bureau’s revision practice. The need for such planning is critical, as evidenced by the failure that occurred in January 2004, when a revised set of ICPE data was incorporated into the calculation of monthly CPS data on employment. Initially, the revised employment estimates were released without a revision of the pre-2004 data, resulting in a significant discontinuity between December 2003 and January 2004. As a result of users’ dissatisfaction about the discontinuity, a consistent set of employment estimates was released. Finally, failure to adequately involve stakeholders in the decision process may contribute to significant misunderstanding about the use of the ACS estimates and corresponding estimates from the Decennial Census. In past decennial censuses, except for the very smallest geographic areas, the population and housing characteristics data published as part of the long-form detail were the same as the official data based on data collected on the short form. Because of differences in the residence and reference period concepts and the use of multiyear averages for small geographic areas, there will be less consistency between the ACS averages for 2008–12 and the 2010 Census data. The Census Bureau has identified operational issues with the ACS test programs, primarily from its evaluation studies on the 2000 Decennial Census and Census Bureau staff research papers on comparisons between data collected in the ACS 2000 Supplementary Survey and the 2000 Decennial Census long form. These issues (described in detail in appendix V) include problems with questionnaire design, nonresponse follow-up, and data capture, as well as coverage of persons living in group quarters. 
For example, the Census Bureau conducted a study to evaluate the design of the ACS questions that are needed to implement the residence concept and reference period for the ACS. The study suggested that additional testing was needed for the questions about multiple residences and noted “that asking these questions on a person basis may produce different and probably better data than asking them on a household basis.” Similarly, the authors found potential problems with the identification of seasonal residents. We were not able to identify in the Census Bureau’s plans whether these issues would be addressed before implementation of the full ACS. We also found, for the implementation of the full ACS for 2005, that the Census Bureau had addressed only the inclusion of group quarters and that it may not resolve the issue of questionnaire design until 2010. In addition, even for group quarters, it is planning for improvements that may not be included until 2010. Furthermore, not all problems have been identified because of the delays in the Census Bureau’s completing the evaluation studies of comparisons of long-form and ACS data items. Moreover, the Census Bureau’s plans do not provide for external consultations on key decisions about resolving issues. Although the Census Bureau has acknowledged the importance of the timing of incorporating changes to resolve the various issues, any delay in implementing solutions to 2010 would not meet the needs of the ACS collection and production schedule. For example, in its March 2003 ACS operations plan, the Census Bureau recognized the need for maintaining questionnaire continuity to calculate consistent multiyear averages. It also has reported that it needs to incorporate changes in the ACS questionnaire no later than 2008 because changes introduced after 2008 and before 2013 would create inconsistencies in calculating the 5-year averages that are to replace the 2010 Decennial Census long form. 
Nevertheless, we found that the Census Bureau’s current time schedule does not call for resolving issues such as questionnaire design before 2008. Incorporating changes into the ACS beginning with 2008 will help maintain the reliability of the 5-year averages for small geographic areas; failing to incorporate them beginning with 2005 will reduce the reliability of the annual changes in the ACS data. With regard to external consultation, we found that the Census Bureau’s plans do not include time for consulting with stakeholders and users, despite NAS, BLS, and Census Advisory Committee suggestions and recommendations. For example, in a February 15, 2001, report to the Census Bureau, the NAS Panel on Research on Future Census Methods recommended that it conduct evaluation studies on “the effectiveness of operations used to designate special places and enumerate the group quarters and homeless populations.” Members of the Census Advisory Committee had raised similar concerns. In a 2003 report prepared for BLS, its consultant had made a number of recommendations about the questions on employment. We found that the Census Bureau needs to develop a time schedule so that changes can be introduced to minimize inconsistencies between the 2005 and subsequent ACS data and to ensure that all necessary changes are made so that the ACS data for 2008–12 that will replace the 2010 Decennial Census long form will be collected consistently. In addition, the prompt completion of the ACS and long-form comparison studies and related evaluations will provide sufficient time for the Census Bureau to consult with stakeholders and to provide users with the information they need to understand the effect of making changes to the ACS questionnaires or procedures between 2005 and 2008. When the Census Bureau began releasing data from the ACS test programs, all dollar-denominated items such as incomes, housing values, rents, and housing-related expenditures were adjusted for inflation. 
As in the ACS test programs, only annual estimates with this adjustment will be presented, and when the Census Bureau releases ACS data for each new year, it revises all dollar-denominated data for prior years. It makes a similar inflation adjustment for the annual income data collected in the CPS, but it releases the unadjusted estimates. The Census Bureau also has decided to continue to adjust annual data collected each month in the ACS to a calendar year basis. It will be using the CPI for the annual and monthly adjustments for all geographic areas. The treatment of dollar-denominated data items is critical to all users of these data. It is particularly critical for federal agencies that will be using the ACS instead of the long form for many government programs to determine the allocation of funds or program eligibility. It is also critical to users of dollar-denominated items for small geographic areas because the inflation adjustments under the current procedure are based on a national average index. In our review of the development and implementation of the ACS, we identified questions on the appropriateness of the methodology for the adjustment and the suppression of the unadjusted annual values. A report prepared for HUD found problems with the calculation of the adjustment and the use of the adjustment for income measures used for HUD programs. The report also noted that the lack of the unadjusted annual data would severely limit HUD’s use of calculations appropriate to its program needs. Research by Census Bureau staff questioned the adjustment for incomes when they found that it was a probable source of difference between income data from the supplementary survey and corresponding data from the CPS and the 2000 Census long form. (We discuss these findings in detail in appendix V.) Our statisticians reviewed these findings and found a similar problem with the calculation of the adjustment because of the lack of a trending adjustment. 
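The basic mechanics of a CPI-based inflation adjustment can be sketched briefly. The example below illustrates the general technique of expressing earlier-year dollars in a target year's dollars using the ratio of annual average CPI values; the CPI figures are hypothetical, and the actual ACS procedure (which also adjusts the rolling 12-month reference periods collected within a year) is more involved.

```python
# Generic CPI inflation adjustment: express a dollar amount collected in an
# earlier year in the dollars of a target year using the ratio of annual
# average CPI values. The CPI figures below are hypothetical.

cpi = {2000: 172.2, 2001: 177.1, 2002: 179.9}  # hypothetical annual averages

def to_target_year_dollars(amount: float, from_year: int, target_year: int) -> float:
    """Rescale an amount by the ratio of target-year to collection-year CPI."""
    return amount * cpi[target_year] / cpi[from_year]

# A $30,000 income reported for 2000, expressed in 2002 dollars:
print(round(to_target_year_dollars(30_000, 2000, 2002), 2))
```

Because the CPI is a single national index, the same scaling factor is applied to every geographic area, which is the core of the concern about using it for small-area income and poverty data.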
We found that the Census Bureau could estimate calendar year values using a combination of past trends in related series, information from other ACS respondents, or known information such as changes in cost-of-living adjustments for various transfer payment programs and changes in wage rates. We also found that converting ACS data from a monthly to a calendar year basis is similar to conversion issues faced by other agencies that collect annual statistics compiled on a fiscal-year basis and that the procedures these agencies use could be adapted for the ACS. With regard to the use of a national cost-of-living adjustment, we have previously reported that for purposes such as allocating federal funds to states using income and poverty data, the CPI, a national measure of inflation, does not reflect variations across geographic areas. Census Bureau staff have reported similar findings. The HUD and Census Bureau findings and our review raise serious questions about the inflation adjustments. We found no documentation explaining the rationale for the adjustment for either the ACS or the CPS, where its use is limited to income data. Bureau officials informed us that alternative procedures had not been examined and that stakeholders or users had not been consulted on the adjustment. We noted above that one of the Census Bureau’s major justifications for the ACS test programs has been the comparison of data collected in these programs with corresponding data from the 2000 Decennial Census short and long forms to identify operational problems. Another major justification for the ACS test programs has been the use of these comparisons, and comparisons with corresponding data from the CPS, to inform users in making the transition from the 2000 long form to the ACS. “These data will also contribute to a comparison with data from Census 2000 that is necessary because there are differences in methods and definitions between the census and the ACS. 
Moreover, decision makers will want to compare an area’s data to those from Census 2000. Comparisons using data from the operational test and from the 31 sites are essential to determine how much measured change between Census 2000 and future years of the ACS is real and how much is due to operational differences between the ACS and the census.” Despite acknowledging the importance of these comparisons, the Census Bureau’s publication of evaluations of the comparisons has been delayed, and their scope has been reduced in terms of levels, data items, and time period. The lack of information will create problems for ACS users who will be comparing the annual ACS data for 2005 (to be released in mid-2006) with 2000 Decennial Census data or comparing annual ACS supplementary survey data beginning with 2000. In addition to delaying the release of the evaluation studies, the Census Bureau has reduced their scope. For the evaluations of ACS test site data, local experts did not participate in the evaluation of the comparisons for 27 of the 31 test sites. For the 4 test sites that were studied by local experts, the analyses did not cover subcounty local government units. For evaluations of ACS supplementary survey data, the Census Bureau has eliminated the analyses of comparisons of (1) the 2000 supplementary survey with the 2000 long form for geographic areas with populations of 250,000 or more and (2) the supplementary surveys for 2000–02 with corresponding data from the CPS. It has further reduced the scope of its evaluation studies by eliminating comparisons of single-year estimates for most subnational areas and comparisons of data items such as financial characteristics of housing. NAS found that the Census Bureau has not placed sufficient priority on completing the necessary evaluation studies. Furthermore, we found that the Census Bureau does not have a plan that includes the timely completion of all the studies. 
Once the studies are complete, it will need to incorporate the findings into ACS operations, consult with stakeholders, and provide users with the information they need to make the transition from the long form to the ACS. The plan will be needed to ensure that as many changes as possible can be introduced before the first annual ACS estimates are published in 2006 and that all necessary changes are implemented before 2008. We found that the delays in completing the evaluations and their reduction in scope are likely to affect the use of the ACS in improving the small geographic area estimates of unemployment and poverty. For example, Labor uses the unemployment data extensively to administer a variety of federal programs. Several other departments use the poverty rates for similar purposes. One of the major differences between the ACS and the long form is that the ACS will provide data for geographic areas with populations smaller than 65,000 in terms of multiyear averages. Experts outside and inside the Census Bureau have identified serious issues regarding the statistical properties of multiyear averages and have recommended that the Census Bureau provide guidance to federal agencies and others on their use. We found that stakeholders have urged the Census Bureau for many years to provide guidance on the strengths and weaknesses of these averages. 
The most recent request for guidance on using multiyear averages came in the July 2003 report by the NAS Panel on Research on Future Census Methods: “The Census Bureau should issue a user’s guide that details the statistical implications of the difference between point-in-time and moving average estimates for various uses.” In the report’s executive summary, the panel also stated that “The Census Bureau must do significant work in informing data users and stakeholders of the features and the problems of working with moving average-based estimates.” It also expressed particular concern about the use of the multiyear (or moving) averages in fund allocation formulas. Stakeholders have requested guidance on topics such as (1) the reliability of multiyear averages for areas with rapidly changing populations, (2) the reliability of trends calculated from annual changes in multiyear averages, and (3) the selection of ACS data for geographic areas with populations larger than 20,000 for which there will be multiple estimates. The Census Bureau has recognized the need for such guidance but has not announced any information about its contents or when it might be available, even though the guidance is needed well in advance of the release of the first multiyear averages in 2008. We also found that plans for research to evaluate the statistical properties of multiyear averages are limited. The contracts to evaluate 3-year averages for the ACS test sites cover only averages for 1999–2001, with no comparisons with averages for 2000–02, 2001–03, or 1999–2003. In addition, the evaluation studies discussed earlier lack any time-series dimension, such as comparisons of the supplementary surveys with annual data from the CPS. 
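One of the guidance topics stakeholders requested, the behavior of multiyear averages in areas with rapidly changing populations, can be shown with a small numerical example. The figures below are hypothetical; the point is only that a 5-year moving average absorbs an abrupt change gradually, spreading a one-time shift across five published averages.

```python
# Illustration of moving-average lag: a level shift in year 6 of a series
# takes five successive 5-year averages to be fully reflected. All figures
# are hypothetical.

values = [100, 100, 100, 100, 100, 150, 150, 150, 150, 150]

def moving_averages(series, n=5):
    """Trailing n-year averages of an annual series."""
    return [sum(series[i - n + 1 : i + 1]) / n for i in range(n - 1, len(series))]

print(moving_averages(values))
# -> [100.0, 110.0, 120.0, 130.0, 140.0, 150.0]
```

This lag is precisely why trends computed from annual changes in overlapping multiyear averages, and fund allocation formulas that consume them, need the kind of user guidance the NAS panel recommended.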
Thus, it appears that the Census Bureau has missed the opportunity to test (1) distortion and stability in multiyear averages, (2) differences between multiple estimates for the same geographic areas, and (3) the use of annual ACS data for small geographic areas. We found that in recent years, the Census Bureau has used its outreach efforts with stakeholders and users primarily to gain support for the ACS. Although it also has solicited advice from NAS panels, advisory committee members, and experts at workshops and conferences on some of the issues we have identified in this report, there is no indication that the Census Bureau will be following this advice. (For additional information, see appendix V.) Likewise, it has not yet followed similar advice from us, other government agencies, or even its own staff. It has been more than a year since the Census Bureau announced, in March 2003, that it was looking into establishing an ACS partnership program that would involve advisory groups and expert panels to help it improve the program. We found that no such program has been established yet. Given that many key issues remain unresolved and that the Census Bureau has no plans to seek advice on resolving them, key aspects of the ACS will receive little or no input unless the Census Bureau revises its plans. In 1994, the Congress began to fund testing of a survey intended to replace the long form, beginning with the 2000 Decennial Census. In reviewing the development of the ACS, we found that the Census Bureau had planned to replace the 2000 long form by starting the ACS program with an annual sample of 4.8 million housing units for 1999, 2000, and 2001 and reducing the sample for subsequent years to 3 million. The larger sample would have provided 3-year averages for all small geographic areas for 2000, with data for the smallest geographic areas of the same quality as the traditional long form.
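The kind of distortion at issue in multiyear averages can be illustrated with a simple hypothetical sketch. All figures below are invented for illustration and are not Census Bureau data; the point is only that a centered multiyear average can diverge noticeably from the point-in-time estimate for an area whose characteristics are changing rapidly.

```python
def centered_average(series, year, window):
    """Centered moving average of `window` consecutive annual values."""
    half = window // 2
    return sum(series[y] for y in range(year - half, year + half + 1)) / window

# Hypothetical poverty rates (percent) for a rapidly improving area.
poverty = {2000: 18.0, 2001: 17.0, 2002: 15.0, 2003: 12.0, 2004: 8.0}

point_in_time = poverty[2002]                     # 15.0
three_year = centered_average(poverty, 2002, 3)   # (17.0 + 15.0 + 12.0) / 3

# Because the decline is accelerating rather than linear, the 3-year
# average (about 14.67) diverges from the single-year estimate (15.0).
```

For an area with a stable population, the two figures would be nearly identical; the divergence grows with the speed of change, which is one reason stakeholders have asked for guidance on using the averages for rapidly changing areas.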
In fiscal year 1998, plans to introduce the ACS to replace the census long form were delayed until after the 2000 Census was completed. When the Census Bureau submitted its plans in 1998 to replace the long form for the 2010 Decennial Census, a similar increase in sample size for 2009–11 was not proposed. Thus, compared with the plans for 2000, data for small geographic areas for 2010 would be delayed by a year and would be based on 5-year averages. When we reviewed the previous plan and other alternatives to the proposed ACS that would provide more timely and reliable data for small geographic areas, we determined that the only viable alternative to the current plans would be to expand the sample size for 2009–11, as proposed earlier. This expansion would allow the Census Bureau to publish data for geographic areas with populations smaller than 20,000 a year earlier, and it would provide more reliable small-area data than under the currently planned 5-year averages. In addition, if the Congress were to provide the additional funds for this alternative, the Census Bureau would have an additional year, until the collection of data for 2009 rather than for 2008, to resolve the issues we have identified in this report. According to Census Bureau estimates, increasing the sample size for the 3 years would add about $250 million to the estimated $500 million cost of using the smaller sample for those years. The most recent Census Bureau schedule for implementing the ACS over the complete cycle of the 2010 Decennial Census was prepared in December 2003. Except for the completion of the questionnaire for the 2008 ACS, the milestones do not cover the resolution of issues that the Census Bureau has already identified or issues we identify in this report. (See table 1.) Ideally, all these issues should be resolved before the first annual results of the full ACS sample are released.
However, the Census Bureau has already announced that final plans for calculating independent population and housing controls with ACS residence and reference concepts will not be available for several years, that the 2004 test plans for the 2010 Decennial Census will cover group quarters and residence rules, that reports from the 2004 tests will not be completed until 2005, and that the 2006 test plans for 2010 also cover group quarters. In addition, the Census Bureau has announced that comparisons of 2000 ACS and 2000 Census long-form data critical to the transition to the full ACS will be limited. Nevertheless, users who need the evaluation of these comparisons to compare data from the 2000 Decennial Census long form with data from the new ACS or from the ACS supplementary surveys would benefit from the early resolution of other issues. For example, resolving issues before the release of the first 3-year averages (2005–07) would improve the consistency between these averages and the subsequent ACS data. Resolving all issues for the 2008 ACS is critical if these data are to be fully consistent with the ACS data for 2009–12 and if the 2008–12 averages are to be fully consistent with the 2010 Decennial Census short-form data. As we noted above, the Census Bureau’s schedule does call for timely completion of the 2008 questionnaire. However, if questions to be included in the 2010 Census short form are changed during the congressional and OMB approval processes, currently scheduled for 2008 and 2009, data collected on the 2010 Census short form will be inconsistent with the ACS data. The Census Bureau’s development of the ACS goes back several decades and has included intensive research and field testing programs, as well as substantial outreach efforts, in particular through the reports and workshops at NAS. However, its current plan to begin full implementation of the ACS for 2005 has several critical deficiencies.
The Census Bureau has not completed its testing program, and it has not acted to resolve key issues already identified by the ACS test program, by evaluation studies of the 2000 Decennial Census, by Census Bureau research studies, and by stakeholders and users, including us, NAS, and other federal agencies. Furthermore, the ACS implementation plan and the 2010 Decennial Census test programs are not synchronized, and there is no comprehensive program for external consultation on the resolution of these issues. Without prompt resolution of issues such as those relating to the calculation of independent controls for small geographic areas and the consistency of data used to calculate multiyear averages, the ACS will not be an adequate replacement for the long form in the 2010 Decennial Census. If the Census Bureau is not able to use the ACS to replace the long form, the Congress and other stakeholders need to be advised in 2005 in order to allow the Census Bureau time to reinstate the long form for the 2010 Census. To ensure that the ACS is an adequate replacement for the Decennial Census long form, we recommend that the Secretary of Commerce direct the Census Bureau to (1) revise the ACS evaluation and testing plan to focus on the issues we have identified in this report; (2) provide key stakeholders, such as the National Academy of Sciences, with opportunities for meaningful and timely input on decisions relating to these issues; and (3) make public the information underlying the Census Bureau’s decisions on these issues when it makes the decisions. We also recommend that the Secretary direct the Census Bureau to prepare a time schedule for the 2010 Decennial Census that provides for resolving these issues by incorporating all operational and programmatic changes into the 2008 ACS so that the 5-year averages for 2008–12 will adequately replace the 2010 Decennial Census long-form data for small geographic areas.
These revisions should be reflected in the single, comprehensive project plan for the 2010 Census, as we have previously recommended. In written comments on a draft of this report, the Secretary of Commerce responded to our recommendations. (The Secretary’s comments are reprinted in appendix VI.) He disagreed with our recommendation that the ACS evaluation and testing plan be revised to focus on issues we have identified in this report, stating that the current ACS testing and evaluation plan already included these issues. In following up on the Secretary’s response, we learned that there is not yet a written plan, but only a rough outline of the types of work planned. Therefore, we believe our recommendation remains valid. The Secretary did not accept our recommendation to provide key stakeholders more direct and timely input into decisions on these issues because he believes that the present consultation process is adequate. We disagree because, as noted in appendix II of our report, the Census Bureau has not been responsive to recommendations from several National Academy of Sciences reports relating to the ACS. The Secretary agreed with the recommendation that the Census Bureau provide public documentation for key decisions on issues we have identified in this report. The Secretary did not respond directly to our recommendation that he direct the Census Bureau to prepare a schedule for the 2010 Census that ensures that all necessary changes are made in time for the 2008 ACS so that the 5-year ACS averages for 2008–12 will be an adequate replacement for the 2010 long form for small geographic areas.
The Secretary provided comments on the five major outstanding issues that, in our view, jeopardize the ACS as a replacement for the long form: lack of a methodology for independent controls, unaddressed operational issues, questionable plans for dollar-denominated items, incomplete evaluations and lack of information on ACS time-series consistency, and lack of information about multiyear averages. The Secretary disagreed with our findings about the lack of a methodology for independent population and housing controls. He stated that a methodology for the ACS was already in place. On the need for changes to that methodology to account for the difference in the ACS residence concept, the Secretary agreed that a change was needed but stated that it could be delayed for several more years. On the issue of independent controls for subcounty areas, he stated that the Census Bureau had no plans to develop such controls, which we found were used for the 2000 Census long form, but that it might develop such controls using data from the ACS or administrative records. However, he did not respond to our findings about the use of existing subcounty area data from the ICPE or from the 2010 Census short form. The Secretary stated that the Census Bureau also had no plans to revise the ICPE. On the issue of the ACS reference period, the Secretary reported that the Census Bureau had recently decided to assume that July 1 would be used as the reference period. The Secretary did not comment on our findings about the lack of plans to incorporate 2010 Census data, and the related revisions to the ICPE estimates for previous years, into the ACS. We disagree with the Secretary’s comments about the independent subcounty population and housing controls and believe that their use in the ACS is needed for the ACS to be an adequate replacement for the 2010 Census long form for small geographic areas.
We found that independent controls from the 2000 Census short form were used for detailed geographic areas for the 2000 Census long form and that differences in counts of population and housing (occupied and vacant) between the long form and the short form were limited to the smallest geographic areas. The similar use of 2010 Census short-form counts in the ACS also would minimize differences in these counts between the ACS and the 2010 Census. Consequently, we disagree with the Bureau’s plan not to commit to the development of subcounty controls and its plans not to base these controls on ICPE total population and housing estimates, which are prepared annually for all general government units, and on the more detailed and reliable data from the 2010 Census short form. We also disagree with the Secretary that the implementation of a new methodology for independent controls with subcounty controls and the new residence concept can wait until 2008. As we noted in our report, we found that controls for subcounty areas with populations of more than 65,000 will be needed before the 2005 ACS estimates are released for these areas in 2006 and that controls for subcounty areas with populations of more than 20,000 will be needed before the first multiyear averages are released in 2008. (For the 2000 Census long form, controls for most areas of this size were from the 2000 Census short form.) With regard to the new residence concept, a decision to delay introducing a new methodology until 2008 would create time-series inconsistencies between the estimates for 2000–2007 and those for 2008 and subsequent years. These inconsistencies could be very significant for geographic areas with large populations of seasonal residents. The Secretary also did not comment on our findings about the need for a methodology to incorporate revisions relating to the ICPE into the ACS.
We found that this methodology, which is important both to the time-series consistency of the annual ACS estimates and to the multiyear averages, is not covered by the current ACS methodology but will be needed when the 2010 Decennial Census short-form data become available. We found that it has been the Census Bureau’s practice to benchmark the ICPE, whose estimates are used as the independent controls for the ACS, to the decennial census short-form data and that the Bureau uses similar practices for many of its other programs. For the ICPE, the Bureau will replace the 2010 ICPE estimates with the 2010 Census data and use the differences in these estimates to revise the ICPE estimates back to the previous benchmark year, which for 2010 will be 2001. (Table 4 of our report shows the impact of benchmarking on county population data for 2000.) It should be noted that we found that this practice is not followed in all Census Bureau programs. For example, for the Annual Economic and Social Supplement to the CPS, the Census Bureau introduced the benchmark information from the 2000 Decennial Census into the 2001 estimates and presented the data on both the old and the revised basis. This approach of presenting estimates on both an old and a new basis for a single year may be appropriate for an annual survey. However, we found that because of the use of multiyear averages in the ACS, it is imperative that the ACS estimates for all years beginning with 2001 be revised. Without such a revision program, ACS estimates for 2010, which we assume will not be released until the 2010 Census short-form data have been incorporated, will be inconsistent with the 2009 estimates. In addition, the ACS estimates for 2008 and 2009 used to calculate the 5-year averages that will replace the 2010 Census long form will be based on controls that are inconsistent with those for 2010–12.
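The benchmarking practice described above can be sketched in simplified form. The code below is a hypothetical illustration, not the Census Bureau’s documented procedure: it assumes the census-minus-estimate difference is distributed linearly ("wedged") back to the previous benchmark year, and all figures are invented.

```python
def wedge_revision(estimates, benchmark_year, census_count, base_year):
    """Replace the benchmark-year estimate with the census count and
    distribute the difference linearly back to the previous benchmark year.
    This linear 'wedge' is an assumed simplification for illustration."""
    diff = census_count - estimates[benchmark_year]
    span = benchmark_year - base_year
    revised = {}
    for year, value in estimates.items():
        if year <= base_year:
            revised[year] = value  # years at or before the base are unchanged
        else:
            fraction = (year - base_year) / span
            revised[year] = value + diff * fraction
    return revised

# Hypothetical intercensal population estimates for one county.
icpe = {2001: 10000, 2005: 10400, 2009: 10800, 2010: 10900}
revised = wedge_revision(icpe, benchmark_year=2010, census_count=11300,
                         base_year=2001)
# 2010 is replaced by the census count (11300); intermediate years
# absorb a proportional share of the 400-person gap.
```

Under any procedure of this general shape, every published year after the base is revised once the census count arrives, which is why estimates controlled to pre-census figures would be inconsistent with those controlled to the census itself.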
Based on the revisions for 2000 shown in our report, there could be many significant inconsistencies, especially for small geographic areas. Although the Secretary did not comment on the issue of revision, in its technical comments on our draft report, the Census Bureau reported (comment 22) that with regard to incorporating 2010 Census data, it has decided “to make appropriate changes to the population controls when necessary, including the possibility of reweighting the data around the 2010 time period and for all multiyear estimates.” We disagree with the Census Bureau’s approach primarily because it is not consistent with the practices used by the Census Bureau to incorporate census data into surveys and programs such as the ICPE and monthly retail sales that are controlled or benchmarked to a census or similar data set. For these surveys, it revises all previously published data on a predetermined schedule using a transparent statistical procedure. Most important, these procedures do not depend on the size of revisions, which can only be determined after a benchmark is completed. Regardless of the benchmarking procedures adopted for the ACS, we believe that the Census Bureau needs to have extensive consultation with external stakeholders to make its decision. In addition, because of the complexity of most benchmarking procedures, the Census Bureau needs to begin this consultation as soon as possible. With regard to the recent Census Bureau decision about the reference period for the ACS, we are pleased that a decision has been made because any delay in this decision would have resulted in additional time-series inconsistencies in the ACS. We have changed our report to reflect this decision. Unfortunately, we have no documentation on the research underlying the decision and, as has been the case in other key decisions, we do not believe that there was any public discussion of this decision. 
The second issue identified in our report related to the operational aspects of the ACS, including questionnaire design and the collection of data for persons living in group quarters. On these issues, the Secretary limited his comments to the questionnaires and addressed our findings that improvements identified as part of the 2000 Census cognitive testing research and research based on comparisons of ACS and 2000 Census long-form data would not be completed until 2008. The Secretary noted that the Census Bureau has resolved the issue of finalizing the ACS questions, including the questions to be asked on the 2010 Census short form, before 2008. Although this recent decision appears to have resolved the scheduling issue, we believe that uncertainties remain as to whether this schedule can be met. For example, the ACS milestones in the latest available schedule call for final approval of the questions by the Congress and by OMB in 2008 and 2009, respectively, meaning that any changes made as a result of these steps would not be incorporated into the 2008 questions. As the Census Bureau has recognized, failure to maintain consistency in the questions for the 2008–12 ACS will result in inconsistencies in the 5-year averages centered on 2010, which are the averages designed to provide the small geographic area data that would have been collected on the 2010 Census long form. In addition, the recently released ACS evaluation reports identify issues on which new research is necessary, including the issues with the questions on disability identified in our report, but the Census Bureau has not indicated how it plans to complete this additional research or to consult with stakeholders about decisions related to the research.
Although the Secretary did not comment on our findings with regard to group quarters, we remain concerned that the work on group quarters being conducted as part of the 2004, 2006, and 2008 tests for the 2010 Census will not be reflected in the ACS beginning with 2008. Our report also identified as unresolved issues the two inflation adjustments that the Census Bureau is applying to all dollar-denominated ACS items. The first adjustment is used to convert annual data collected each month in the ACS to a calendar-year basis. This adjustment recognizes that the annual data collected in the ACS are for different periods because the data are collected monthly and cover the previous 12 months. The second adjustment is used to present dollar-denominated items in dollars of the most recent calendar year. This adjustment eliminates the impact of inflation when comparing data across years. The index used for both adjustments is the national-level CPI. The Secretary correctly observed that the CPI is a generally accepted measure of inflation and that most federal programs that allocate funds do not use regional measures of inflation. However, these observations did not directly address our findings about the adjustments or the concerns raised by HUD in its report on future use of the ACS, which are discussed in appendix V of our report. For example, the Secretary did not address our finding about the lack of a rationale for adjusting items other than incomes for changes in overall inflation rather than adjusting them with indexes, such as wage rates or rent, that are directly related to the item being adjusted. He did indicate that the Census Bureau would reconsider its present policy of showing only the inflation-adjusted annual estimates and multiyear averages. We believe our findings about the need for the Census Bureau to provide a comprehensive rationale for the two adjustments still apply.
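The mechanics of the two adjustments can be sketched with invented numbers. The CPI values, incomes, and formulas below are hypothetical simplifications for illustration; the Census Bureau’s exact adjustment factors may differ.

```python
# Invented monthly national CPI levels for two years (not actual CPI data).
monthly_cpi = {(2003, m): 183.0 + 0.4 * m for m in range(1, 13)}
monthly_cpi.update({(2004, m): 188.0 + 0.4 * m for m in range(1, 13)})

def annual_cpi(year):
    """Calendar-year average of the monthly index."""
    return sum(monthly_cpi[(year, m)] for m in range(1, 13)) / 12

def trailing_window(year, month):
    """The 12 calendar months preceding an interview in (year, month)."""
    months, y, m = [], year, month - 1
    for _ in range(12):
        if m == 0:
            y, m = y - 1, 12
        months.append((y, m))
        m -= 1
    return months

def to_calendar_year(amount, interview_year, interview_month):
    """Adjustment 1 (sketch): a respondent reports income for the 12 months
    ending before the interview; scale it to the interview calendar year's
    price level so all monthly panels are on a common basis."""
    window_cpi = sum(monthly_cpi[k]
                     for k in trailing_window(interview_year,
                                              interview_month)) / 12
    return amount * annual_cpi(interview_year) / window_cpi

def to_latest_dollars(amount, data_year, latest_year):
    """Adjustment 2 (sketch): restate a calendar-year amount in the most
    recent year's dollars to remove inflation from cross-year comparisons."""
    return amount * annual_cpi(latest_year) / annual_cpi(data_year)

restated = to_latest_dollars(50000.0, 2003, 2004)  # 2003 income in 2004 dollars
```

Note that both adjustments use only the national-level index; the finding discussed above is that no rationale has been given for applying an overall inflation measure to items, such as rent, that have their own directly related price indexes.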
The Secretary disagreed with the issue we identified on the completeness of the Census Bureau’s comparison and evaluation reports. He noted that after our draft report was completed, the Census Bureau released seven additional comparison reports and that it planned to prepare additional reports to evaluate issues we identified on the time-series consistency of the annual ACS estimates. However, despite the Census Bureau’s earlier statements that it would compare and evaluate differences between state-level estimates from the Census 2000 Supplementary Survey (C2SS) and the 2000 Census long form, these reports did not include any reference to the preparation of such comparisons, and the Secretary did not indicate that they would be prepared. Because the focus of the long form and the ACS is on data for small geographic areas, we believe that reports on states and on other areas with populations of 250,000 or more should be prepared. The last issue we identified was the need to provide users with guidance on the interpretation of key properties of multiyear averages. The Secretary agreed that such guidance is needed but noted that it is not needed in 2005. He reported on a newly created NAS panel that will be studying many of the key issues identified in our report. However, we believe that the Census Bureau should begin to release guidance on the averages before the first multiyear averages are released in 2008. One area in which such guidance will be needed is the interpretation and use of the multiple ACS estimates. When the 2005–07 averages are released in 2008, users will have annual estimates for some of these areas for 2006 as well as the 3-year averages, which will be centered on 2006. In 2010, when the first 5-year averages are released (2005–09), users will have three sets of ACS estimates for places with populations larger than 20,000. For example, for each state, there will be an annual estimate for 2007 as well as 3-year and 5-year averages centered on 2007.
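The overlap among the three products can be made concrete with invented numbers. The sketch below assumes a hypothetical annual state estimate series and shows the three ACS figures a user would hold in 2010 that all refer to 2007: the annual estimate, the 3-year average for 2006–08, and the 5-year average for 2005–09.

```python
# Hypothetical annual ACS estimates (e.g., a poverty rate) for one state.
annual = {2005: 12.1, 2006: 12.4, 2007: 12.9, 2008: 13.6, 2009: 14.5}

est_annual = annual[2007]                                # single-year estimate
est_3yr = sum(annual[y] for y in range(2006, 2009)) / 3  # 2006–08, centered on 2007
est_5yr = sum(annual[y] for y in range(2005, 2010)) / 5  # 2005–09, centered on 2007

# With a rising trend, the three figures for "2007" differ
# (12.9, about 12.97, and 13.1), which is precisely the kind of
# divergence for which users will need interpretive guidance.
```

The wider the averaging window and the faster the underlying trend, the larger the gap between the three estimates, so guidance on which product to use for which purpose becomes most important for rapidly changing areas.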
The comments from the Secretary also include a list of detailed technical comments from the Census Bureau. We reviewed each of these comments and revised the report where appropriate. As agreed with your offices, unless you release the report’s contents earlier, we plan no further distribution of it until 30 days from its issue date. We will then send copies to the Secretary of Commerce, the Director of the U.S. Census Bureau, and others who are interested. Copies will be made available to others on request. This report will also be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-9750. Other staff who made major contributions to this report are listed in appendix VII. We used a combination of approaches and methods to examine the Census Bureau’s plans to develop, test, and implement the American Community Survey (ACS). We reviewed published and unpublished ACS-related Census Bureau reports, papers, presentations, budget documents, and congressional testimony; National Academy of Sciences (NAS) reports; congressional testimony delivered by outside experts; and consultants’ reports prepared for the Census Bureau, the Bureau of Labor Statistics (BLS), and the Department of Housing and Urban Development (HUD). We reviewed an extensive set of internal planning documents prepared between 1992 and 1995 that the Census Bureau provided, relevant papers Census Bureau staff presented at professional association meetings and similar symposiums from 1995 on, and evaluation reports based on the 2000 Census. We also reviewed official Census Bureau presentations in special reports, congressional testimony, and recent advisory committee meetings. We reviewed similar materials NAS and consultants prepared for the Census Bureau and other federal agencies, as well as materials we prepared. 
The most important documents we reviewed are listed in the bibliography, organized by document type, at the end of this report. In addition, we conducted independent research and analysis. To assess the evaluations the Census Bureau conducted to assist users in making the transition from the 2000 Census long form to the ACS, we obtained data from the 2000 Census and the 2000 ACS (the Census 2000 Supplementary Survey) and prepared comparisons of key detailed data items at the state level. To determine the potential effect of replacing independent population and housing characteristics controls from the 2000 Census with corresponding data from the 2010 Census, we compared county-level intercensal estimates for April 1, 2000, based on the 1990 Census, with 2000 Census counts. We also analyzed the Census Bureau’s use of independent controls for estimates of population and housing characteristics for previous decennial censuses and its plans for the ACS. “Although we believe that the proposed continuous measurement system deserves serious evaluation, we conclude that much work remains to develop credible estimates of its net costs and to answer many other fundamental questions about data quality, the use of small-area estimates based on cumulated data, how continuous measurement could be integrated with existing household surveys, and its advantages compared with other means of providing more frequent small-area estimates. In our judgment, it will not be possible to complete this work in time to consider the use of continuous measurement in place of the long form for the 2000 census.” “With regard to proposals to drop the long form in the next decennial census and substitute a continuous monthly survey to obtain relevant data, substantial further research and preparatory work are required to thoroughly evaluate the likely effect and costs of these proposals. 
Continuous measurement deserves serious consideration as a means of providing more frequent small-area data; however, the necessary research and evaluation cannot be completed in time for the 2000 census.” Although 1994 saw the first proposals to implement the continuous measurement methodology as a replacement for the 2000 Census long form, the Census Bureau changed its plans in 1998, shifting implementation to the replacement of the long form in 2010. Since 1995, NAS has produced several reports that relate totally or in part to the ACS, including a summary of a September 13, 1998, Committee on National Statistics workshop at NAS; two interim reports, a letter report, and a final report by the Panel on Research on Future Census Methods; and a report released in early 2004 by the Panel to Review the 2000 Census. (In this appendix, we do not discuss NAS reports after 1995 in which the ACS was discussed as a potential data source for federal programs.) With few exceptions, the members of these two NAS panels and the workshop participants reported findings that cover most of what we have identified as unresolved issues, which we summarize in this appendix. The NAS reports and ours differ somewhat in emphasis. We have focused on the production and use of ACS data, whereas NAS focused more on data collection and processing methodologies. These differences may reflect the fact that NAS panel members are very sophisticated users who are more likely to use ACS microdata files and make their own adjustments for methodological issues; they make little use of the regular ACS publications. NAS sponsored a 1-day workshop in September 1998 to discuss methodological issues related to the ACS. Experts prepared “thought pieces” on issues NAS staff selected, with input from Census Bureau staff.
The workshop’s specific discussion topics were combinations of information across areas and across time, funding formula, weighting and imputation, sample and questionnaire design, and calibration of the output from this survey with that from the long form. The thought pieces and comments on them prepared Census Bureau staff for the discussions at the workshop. “has been on refining data collection, leaving the final answers to the difficult analysis questions for later. Thus, procedures for nonresponse and undercoverage adjustment were modeled, to the extent possible, after current procedures used for the census long form. Now that data collection has matured as the ACS demonstration phase is well under way, the Census Bureau is developing a research plan and initiating research to address all issues relating to ACS methodology. Fall 1998 therefore seemed an opportune moment for a workshop to assist the Census Bureau in developing a research agenda to deal with many of these challenging issues.” The report contained no specific recommendations but identified areas where additional research was needed, including issues we have expressed concern about, such as the availability of multiple ACS estimates for geographic areas with populations larger than 20,000 and the likelihood of differences between ACS estimates and estimates from a Decennial Census short form. From our perspective, the most relevant of the workshop’s specific issues were (1) combining information across time, (2) weighting and imputation, and (3) calibrating the output from this survey with that from the long form. Technical papers in the workshop’s agenda book contained considerable discussion of time-series issues. The discussion in this section of the workshop focused on replacing moving averages with time-series modeling and using current household survey data to develop models. 
Speaking for the Census Bureau, Alexander stated that “Our current plan is to release annual data for even very small areas and let users perform their own time series analyses. We welcome ideas about what the Bureau’s role should be . . .” “to make comparisons between the long form and ACS for all states, large metropolitan areas, large substate areas, and population groups. “The objective of the 1999–2001 comparison is to understand the factors associated with the differences between the 1999–2001 ACS and the 2000 long form in the 31 areas, using the second comparison study to develop a calibration model to adjust the 2000 long-form estimates to roughly represent what the full ACS would have yielded in 2000.” Chapter 7 of the report was devoted to a discussion of calibration. The report stated that the model would “determine the effects that would be expected when switching from the long-form estimates to those from the ACS on various applications of long-form data.” Once adjusted, the calibrated long-form data for 2000 can be compared with ACS data that are collected following full field implementation in 2003, “in order to understand the dynamics over time of such characteristics as poverty and employment.” “We very much like the idea of viewing information from an ongoing comparison of ACS to CPS and other surveys as a way to help understand how the ACS ‘error profile’ might be changing over time and using this to help interpret ACS data in the context of the long-term time series of census estimates.” The use of independent controls for population and housing characteristics was also discussed at the workshop, but very generally, because the Census Bureau had not yet developed proposals for the controls. For example, the report’s chapter 5 discussed improving the existing population controls. 
The Census Bureau reported discomfort with the quality of the existing county-level controls (from ICPE) and agreed that the ACS could be used to improve these estimates. The Census Bureau also acknowledged that differences in residence rules and reference period would complicate the calculation of population weights. However, no discussion was reported of how the population counts from the 2010 Census would be used. “the development of estimates that (a) sum to estimates at higher levels of geographic aggregation and (b) more closely approximate direct estimates at higher levels of aggregation . . . in the event that aggregate estimates are not constrained to (approximately) equal direct estimates (and also the release of direct estimates at lower levels of aggregation for analysis purposes) . . . .” The Panel on Research on Future Census Methods, sponsored by the Census Bureau, was formed to examine alternative designs for the 2010 Census and to assist the Census Bureau in planning tests and analyses to help assess and compare their advantages and disadvantages. In addition to the first interim report, Designing the 2010 Census, released in 2000, a letter report was issued in 2001, and a second interim report was issued in 2003 (both discussed below). The panel issued a final report in 2004. “The Census Bureau should develop a detailed plan for each evaluation study on how to analyze the data collected and how to use the results in decision making concerning 2010 census design. The Census Bureau should then use these plans to identify the benefits and resources required for each evaluation study, set priorities among them, and allocate sufficient resources for the careful completion of all or, at least, the highest priority evaluations.” “The American Community Survey is a proposed national, continuous, mailout-mailback survey of 250,000 households per month, with field follow-up that makes use of techniques closely related to those used in the census. 
Therefore, rather than rely exclusively on the two or three large-scale census tests, which are always at least slightly limited in their generalizability by the specific locations selected, the Census Bureau could use the ACS as a platform for testing possible changes in the census. This work could serve as preliminary testing to the larger mid-decade tests for the census design.”

“The decennial census makes use of one residence rule definition, the ACS uses a second, and a third approach is being tested in the alternative questionnaire study. As the Census Bureau is well aware (based on the allocation of an experiment to this issue), confusion over residence rules is a source of possibly substantial error in the census. . . . The Census Bureau needs to find the residence rule (within the set of rules satisfying legal and other restrictions) that results in the most accurate estimates. To learn more about this issue, the panel proposes an ACS-short-form match study in 2000 to examine this and other short-form measurement error issues.”

“The development of the ACS raises a number of issues related to the quality of and planning for the 2010 census. There are also many other important technical issues raised by the introduction of the ACS into the federal statistical system. Formation of a technical working group could help to address many of these issues.”

The 2001 letter report—addressed from Benjamin King, Chair of the Panel on Research on Future Census Methods, to William Barron Jr., Acting Director of the Census Bureau—was prepared in response to a December 7, 2000, presentation by Census Bureau staff on the major elements of the Census Bureau’s strategy for the 2010 Census. The panel recommended that the Census Bureau produce a “business plan” for the 2010 Census that would provide an overall framework for development.
It recommended that this plan include (1) a statement of objectives, (2) a timeline for completing tasks, (3) a cost-benefit analysis, and (4) more complete information on coordinating tasks within the Census Bureau. The panel also recommended the preparation of specific types of evaluation studies. “The Bureau is currently conducting a wide array of evaluation studies and experiments designed to assess the quality of the 2000 census and inform approaches to the 2010 census. As noted above, the panel applauds the scope of these evaluation studies. However, the panel is concerned that the Bureau has not sufficiently focused its evaluation program and has instead labeled most of its evaluation categories as high priority.” “comparison of estimates from the ACS and 2000 census long-form data, in sites where both are available; coverage of the population, disaggregated by demographic and geographic subgroups; the effectiveness of major automated systems for data collection, capture, and processing; the quality and completeness of long-form data collection; and the effectiveness of operations used to designate special places and enumerate the group quarters and homeless populations.” “to broaden its justification for the ACS, detailing the need for and use of long-form data and how those data needs will be addressed through the ACS, perhaps in conjunction with the CPS and other demographic surveys. Accordingly, the Bureau should expedite ongoing evaluations that assess the quality of ACS data relative to the quality associated with the traditional census long form.” “The most basic question the panel faces regarding the ACS is whether it is a satisfactory replacement for the census long form. 
We recognize that significant estimation and weighting challenges must be addressed and that more research is needed on the relative quality of ACS and long-form estimates.”

“The Census Bureau should carry out more research to understand the differences between and relative quality of ACS estimates and long-form estimates, with particular attention to measurement error and error from nonresponse and imputation. The Census Bureau must work on ways to effectively communicate and articulate those findings to interested stakeholders, particularly potential end users of the data. The Census Bureau should make ACS data available (protecting confidentiality) to analysts in the 31 ACS test sites to facilitate the comparison of ACS and census long-form estimates as a means of assessing the quality of ACS data as a replacement for census long-form data. Again, with appropriate safeguards, the Census Bureau should release ACS data to the broader research community for evaluation purposes. The Census Bureau should issue a user’s guide that details the statistical implications of the difference between point-in-time and moving average estimates for various uses. The Census Bureau should identify the costs and benefits of various approaches to collecting characteristics information, should support for the full ACS not be forthcoming. These costs and benefits should be presented for review so that decisions on the ACS and its alternatives can be fully informed.”

“The fact that the Census Bureau has not done more in comparing the data collected from the 31 test sites, the C2SS, and the 2001 and 2002 Supplementary Surveys with the data collected by the 2000 census long form is disappointing. Such analyses would help assess the quality of ACS data and would be helpful in making the argument for transition from the long form to the ACS.
This deficiency is probably due to limited analytic resources at the Census Bureau and creates an argument for ‘farming out’ this analysis to outside researchers.” “The ramifications of this basic concept emerge when moving average estimates are entered into sensitive allocation formulas or compared against strict eligibility cutoffs. A smoothed estimate may mask or smooth over an individual year drop in level of need, thus keeping the locality eligible for benefits; conversely, it may also mask individual-year spikes in activity and thus disqualify an area from benefits. It is clear that the use of smoothed estimates is neither uniformly advantageous nor disadvantageous to a locality; what is not clear is how often major discrepancies may occur in practice.” “It is incorrect to use annual estimates based on moving averages over several years when assessing change since some of the data are from overlapping time periods and hence identical. At the least, the results will yield incorrect estimates of the variance of the estimates of change. Therefore, users should be cautioned about this aspect of the use of moving averages.” In both recommendations on evaluations and moving averages, the panel called for the Census Bureau to engage in a greatly expanded effort to inform users and stakeholders. It also suggested that the Census Bureau farm out some of the research efforts. “Eight years later, faced with the task of offering advice on making the vision of continuous measurement a reality in the 2010 census, the similarity between the arguments then and now is uncanny. Similar, too, are the points of concern; the current panel is hard-pressed to improve upon the basic summary of concerns outlined by our predecessors. 
We are, however, much more sanguine that a compelling case can be made for the ACS and that it is a viable long-form replacement in the 2010 census.” However, while the panel was identifying its concerns, it also supported full funding of the ACS, believing that existing “flaws” in the plan could be resolved. “the Census Bureau, the administration, and Congress agree on the basic design for the 2010 census no later than 2006 in order to permit an appropriate, well-planned dress rehearsal in 2008. In particular, this agreement should specify the role of the new American Community Survey (ACS). Further delay will undercut the ability of the ACS to provide, by 2010, small-area data of the type traditionally collected on the census long-form sample and will jeopardize 2010 planning, which currently assumes a short-form-only census.” “the Bureau should also study the effects of imputation on the distributions of characteristics and the relationships among them and conduct research on improved imputation methods for use in the American Community Survey (or the 2010 census if it includes a long-form sample).” “publish distributions of characteristics and item imputation rates, for the 2010 census and the American Community Survey (when it includes group quarters residents), that distinguish household residents from the group quarters population (at least the institutionalized component). Such separation would make it easier for data users to compare census and ACS estimates with household surveys and would facilitate comparative assessments of data quality for these two populations by the Census Bureau and others.” The panel’s findings were similar to our findings, with one major difference. The panel’s findings imply that some research on the ACS can be conducted after the results of the 2010 Census short form become available. 
In contrast, we see that such research is needed in order to improve the ACS by 2008, the first year in which ACS data will enter into the calculation of the 5-year average estimates (2008–12) that will replace the long form. In the decennial census for 1940 and for 1950, the Census Bureau used a single form to collect, from all households, population and key characteristics such as age and gender and, from a sample of households, detailed demographic, economic, and housing items. In the 1940 Census, the Census Bureau used a sample of 5 percent of the population to collect data on questions on income, internal migration, and Social Security status, as well as on more refined questions on unemployment. In addition, the Congress authorized a new set of questions about the types of plumbing, heating, and appliances in dwellings. Beginning with the 1960 Census, the first conducted by mail, it became necessary to use separate forms—a short form to collect population data from all households and a long form to collect the detailed items from a sample of households. In the 2000 Census, for example, the Census Bureau conducted a sample of 17 percent of the population and asked 45 questions on the long form. Since 1960, the long form has evolved into a cost-efficient way to collect data federal agencies need that minimizes respondent burden. For 2000, for example, the long form consisted of 45 questions that the Census Bureau developed working through OMB and with the consent of the Congress. Each question provided information required by statute. Thus, the 2000 long form provided all federal departments and agencies with critical data, and it was estimated that these data were used to allocate more than $200 billion in federal funds. In the 1950s, Census Bureau officials and users of Decennial Census data had begun to develop a program to provide intercensal data on population characteristics. 
The first major proposal to provide intercensal data called for a mid-decade census that would provide information every 5 years. In 1976, the Congress enacted legislation to require a mid-decade census beginning with 1985, but did not fully fund the program. In the late 1980s, the Census Bureau shifted efforts to provide intercensal estimates to a program based on Continuous Measurement (CM) methodology. This approach would provide for more timely population data as well as the detailed demographic, economic, and housing data collected every 10 years by the Decennial Census long form. The program would integrate a new sample survey, existing surveys, administrative records, and statistical modeling. After a thorough analysis of alternatives based on this methodology, the Census Bureau developed a plan similar to the current ACS to replace the 2000 Census long form. Initial $2.6 million funding for the CM program was included in the 2000 Decennial Census budget for fiscal year 1995. These funds were to develop, test, and evaluate a CM program to replace the Decennial Census long form and to provide more timely long-form type data. In the program description in the budget documents, the Census Bureau reported that it planned to develop a new program that would integrate a new sample survey, existing surveys, administrative records, and statistical modeling. Table 2 shows that about $330 million has been provided to fund the CM program since 1995, with funding provided separately until 2003 and additional funding from both the 2000 and 2010 Decennial Census programs. Beginning with 2003, all funding has been provided as part of the 2010 Census program. The Census Bureau requested $165 million for fiscal year 2005. In 1996 and 1997, funding was provided to field-test what became the ACS, to replace the 2000 Census long form.
The ACS was to begin in 1999 with an annual sample of 4.8 million housing units for 1999, 2000, and 2001 and 3 million housing units for subsequent years. Under this plan, a 3-year average of ACS data for 1999–2001 was to replace the 2000 Census long form. It would provide the same detailed items and same level of geographic detail as the traditional long form with about the same quality. Annual ACS data would subsequently be provided for geographic areas with populations of 65,000 or more, 3-year averages would provide ACS data for geographic areas with populations larger than 20,000, and 5-year averages would provide ACS data for small geographic areas, such as census tracts, small towns, and rural areas. The 5-year average for 2010, 2020, and beyond would replace future Decennial Census long forms. In the 1998 budget request, the Census Bureau shifted the timing for replacing the long form from the 2000 Census to the 2010 Census. As a result, it was funded to conduct annual supplementary surveys of 750,000 households beginning with 2000, in addition to the ACS testing at four test sites (or counties). The Census 2000 Supplementary Survey, known as C2SS, and the surveys for subsequent years were to be used to test the feasibility of collecting long-form data at the same time as, but in a separate process from, the Decennial Census. Data from C2SS and the supplementary surveys were also to be used to test ACS data usability and reliability and to evaluate operational and programmatic issues associated with implementing the ACS. Also, the number of test sites was increased to 31 by 1999. Funding to compare and evaluate differences between data collected from the 2000 Census long form and the ACS testing programs began in 1999, and funding to develop data to expand coverage to group quarters and Puerto Rico began in 2001. Plans to integrate existing surveys, administrative records, and statistical modeling into the new program were dropped in 2001.
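The tiered averaging scheme described above — annual data for areas with populations of 65,000 or more, 3-year averages above 20,000, and 5-year averages for small areas — can be sketched in a few lines. Only the two population thresholds come from the plan; the function names and the illustrative annual rates below are hypothetical.

```python
# Illustrative sketch (not Census Bureau code) of the ACS multiyear
# averaging scheme: annual estimates for large areas, 3-year averages
# for mid-sized areas, 5-year averages for small areas.

def publication_period(population: int) -> int:
    """Years of data pooled before an ACS estimate is published."""
    if population >= 65_000:
        return 1   # annual estimates
    if population >= 20_000:
        return 3   # 3-year averages
    return 5       # 5-year averages (tracts, small towns, rural areas)

def moving_averages(annual_estimates: list[float], years: int) -> list[float]:
    """Simple (unweighted) multiyear averages of consecutive annual values."""
    return [
        sum(annual_estimates[i:i + years]) / years
        for i in range(len(annual_estimates) - years + 1)
    ]

# Example: seven years of hypothetical annual poverty rates for a small area.
rates = [12.0, 12.4, 11.8, 12.9, 13.1, 12.6, 12.2]
five_year = moving_averages(rates, 5)

# Note that consecutive 5-year averages share four years of data, which
# is why differences between them understate true year-to-year change.
print(publication_period(18_000))                 # -> 5
print([round(v, 2) for v in five_year])
```

The overlap visible in consecutive averages is the same property the National Research Council panel cautioned about when moving averages are used to assess change.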
The 1998 budget request also reported that the Census Bureau would proceed with plans to replace the 2010 Census long form with an ACS based on an annual sample of 3 million housing units, as with the previous plan. Unlike that plan, the sample size for 2009–11 would not increase to provide 3-year averages for 2010. This revised plan called for full implementation of the ACS in 2003. Full ACS data for 2003 to 2007 would have made 5-year averages available in 2008, 4 years before the long-form sample statistics from the 2010 Census would become available. However, budget decisions by the Congress delayed full implementation until the fourth quarter of fiscal year 2004. The Congress initially provided funds for testing the CM methodology in 1994. As we have noted, the Census Bureau had begun formal testing of the CM program in 1996 with an operational test of the ACS in four counties; this test was expanded to 31 test sites by 1999. A second testing program, the Supplementary Survey program, began in 2000 as a part of the 2000 Decennial Census. The Census Bureau designed C2SS to test the feasibility of collecting long-form data at the same time as, but in a separate process from, the 2000 Decennial Census. Data from C2SS and the annual supplementary surveys, beginning with 2001, were also to be used to test ACS data usability and reliability. According to the Census Bureau, these surveys were to be used to examine technical, statistical, and operational issues associated with implementing the ACS and to document the key results in a series of reports. Before field testing began, the Census Bureau had conducted an extensive research program to identify the issues related to using the CM methodology and to replacing the long form. The research program resulted in a series of 20 reports, known as the Continuous Measurement Series, between 1992 and 1995.
These reports, most of which were prepared by Charles Alexander, addressed a wide range of topics such as replacing the 2000 Census long form, collecting intercensal population data, and integrating the ACS with existing household surveys. The reports on replacing the long form identified the key issues that needed testing, and they served as the primary input to the Census Bureau’s ACS test program. These issues included those subsequently tested by the Census Bureau as well as the unresolved issues we identify in this report. Following the CM reports, Census Bureau staff presented papers from 1995 through 2001 on ACS testing at various professional association and similar meetings, as well as at a 1998 symposium on the ACS sponsored by the Census Bureau. For example, the 1995 paper by Love, Dalzell, and Alexander discussed issues related to the evaluation of the 1996 test site results, expressing concern about population controls and residence rules as well as the need for consultation with users. They reported that the Census Bureau was planning to conduct research using data from the 1996 test sites to produce controls at the census tract and block group level. They also noted that the Census Bureau would need to conduct research on the residence rule. Alexander and Wetrogan also discussed the issue of population controls in their 2000 paper. They reviewed possible methods for using ICPE to develop controls for the ACS and discussed using ACS estimates on the foreign-born U.S. population to improve the Census Bureau’s foreign-migration component of the intercensal estimates. (They reported that this effort would be part of what the Census Bureau had previously referred to as the Program of Integrated Estimates.) They also noted the need to consult with users on how to present information on the differences in ACS controls and ICPE in ACS publications.
Several papers have focused on the key role of evaluating differences among the ACS test data, census long-form data, and CPS data. Alexander, Dahl, and Weidman reported in 1997 that during the demonstration period, they would be working closely with experts familiar with specific test sites to learn about the quality of the ACS estimates. For example, they reported that the Census Bureau would be looking into sources of differences between the 1999–2001 ACS test-site average estimates and the 2000 Census long-form results and using the results of differences between the 2000–02 national sample and the 2000 long form to generate model-based estimates for small geographic areas. The authors noted that these model-based estimates, based largely on information from test sites, would be used to interpret changes between 2000 and future ACS estimates. In another 1997 paper, Davis and Alexander reported the Census Bureau’s action plan for evaluation studies. They called for evaluating the results of all test sites and releasing the expert review of the analyses of the differences between the 1999–2001 ACS and the 2000 Census long form. The schedule called for releasing this information before beginning the implementation of the full ACS. Alexander’s 1998 paper on completed research, research in progress, and planned research included among the four items for planned research a “close study of differences between 1999–2001 ACS and 2000 long form in comparison areas.” The quality of the ACS measures of income was the subject of the paper Posey and Welniak presented at the Census Bureau’s 1998 symposium on the ACS. They compared income reported in the 1996 ACS and 1990 Decennial Census in an effort to evaluate the quality of the ACS income data. One of the adjustments they made to compare the two series was for the effect of inflation between 1990 and 1996.
They noted that the results of the comparisons indicated a potential problem that may relate to the ACS inflation adjustment. (They described the calculation of the adjustment, which is based on the CPI, but did not provide a rationale for using the adjustment in the ongoing ACS data.) Alexander and two BLS staff reported in 1999 on the potential for using the ACS to improve labor force data from the CPS for state and smaller geographic levels. They stressed that to develop procedures for making these improvements, much research would be needed to evaluate differences between the ACS and CPS. The last research paper in this period was Alexander’s 2001 paper focusing on the origins of the CM methodology and its developers. He discussed the ACS in the context of the methodology, noting several important differences related to the nature of the ACS. He included a review of the Census Bureau’s testing and evaluation program, noting that the ACS test-site program had been expanded and that national sample supplementary surveys had been added. He said that these test data would be compared with the 2000 Census long-form data and that in 2001 and 2002, the Supplementary Survey would be used as part of the transition to the ACS. He also pointed to unresolved issues relating to the residence rule and the multiyear averages, because they would provide users with multiple estimates for geographic areas with populations larger than 20,000. Between 2001 and 2003, the Census Bureau issued three official reports and one internal report on the status of the ACS testing and development program. In Demonstrating Operational Feasibility, published in July 2001, the Census Bureau gave a brief history of the ACS development program, which by 2001 was focused on preparing for full implementation in 2003 (although the Census Bureau later revised this to 2004), and reported on the program’s operational feasibility, using data from C2SS.
On the basis of the Census Bureau’s analysis of the results of its tests of operational feasibility, it reported that the tests were a success. However, it recognized that more evaluation on measures of data quality was necessary, as well as on differences between ACS and 2000 Census long-form data. The Census Bureau announced that over the next 2 years it would issue reports comparing data from the 2000 Census long form at the national, state, and smaller geographic areas with data from the C2SS and the ACS development program. Demonstrating Survey Quality, published in May 2002, focused on measures of C2SS survey quality, summarizing sampling and nonsampling error levels in both C2SS and the 31 ACS test sites. The Census Bureau used available, generally accepted measures of quality. On the basis of its analysis of the results of these quality tests, the Census Bureau reported that the tests were a success. This conclusion rested on test results that showed the C2SS program capable of providing reliable long-form data. As in the July 2001 report, the Census Bureau recognized that more evaluation was necessary on measures of data quality as well as on differences between ACS and 2000 Census long-form data and the detailed estimates produced from C2SS. The Census Bureau repeated its commitment that over the next year and a half, it would release other reports to (1) analyze in detail basic demographic characteristics (relationship, race, tenure) produced from the C2SS at the national and state levels, including comparisons between C2SS and Census 2000; (2) describe the data release plan and products for the ACS and the usability and accessibility of estimates resulting from ACS methods; and (3) give several detailed analyses of selected social, economic, and housing characteristics (education, income, commuting patterns), including comparisons between C2SS and Census 2000 at the national and some subnational levels.
In June 2002, shortly after Demonstrating Survey Quality was released, a team of Census Bureau specialists who had been working on the ACS for several years prepared an internal report on testing. They presented a revised program development plan and identified key questions to be answered in testing the adequacy of the ACS in replacing the Decennial Census long form. Their plan included the preparation of a series of nine evaluation reports over 2 years. The reports that evaluated differences between the 2000 Census short-form data (100 percent reported) and corresponding C2SS items were included in Demonstrating Survey Quality. Three reports to be completed between October 2002 and January 2003 would evaluate differences between the detailed housing, social, and economic characteristics between C2SS and the 2000 Census long form, as described in Demonstrating Survey Quality. (Although this schedule was later extended to the end of 2003, these three reports still had not been released when we prepared our final draft of this report.) Finally, the team’s plan included a report that would focus on the comparisons of 3-year averages for the basic demographic, housing, social, and economic characteristics from the C2SS and ACS test sites and comparable estimates in the 2000 Census long form. The last report in the plan would compare data for 2001 and 2002 with measures shown in Demonstrating Operational Feasibility. The plan did not provide completion dates for these reports. American Community Survey Operations Plan, Release 1, published in March 2003, identified research projects to be completed in preparation for full implementation of the ACS. Two projects were on “weighting and estimation,” which covered the methodology for using independent population and housing controls, and on “program of integrated estimates,” which covered the calculation of these controls from the Census Bureau’s intercensal population estimates program. 
The operations plan also reported on the schedule for completing several comparison and evaluation projects with ACS and 2000 Census long-form data discussed in Demonstrating Survey Quality. It discussed the need to evaluate multiyear estimates from the supplementary surveys to demonstrate the usability, reliability, and stability of ACS estimates over time, and it stated that a report comparing 3-year ACS data with data from the 2000 Census long form would be released in mid-2003. The Census Bureau reported that the results of these research projects would not be available in 2004. Instead, it said, it would use interim procedures, because developing final procedures would take “extensive long-term investigation and experimentation.” For the ACS weighting and estimation project, the Census Bureau reported that it would be using an interim procedure to adjust the intercensal population and housing characteristics estimates to the ACS residence concept. The Census Bureau reported that ACS estimates of occupied housing units, households, and householders should agree at all geographic levels. For the program of integrated estimates project, the operations plan discussed the need for more research to introduce improvements to the estimates from ICPE. (The ACS estimates are weighted to a population benchmark, either the most recent Decennial Census results or the most recent ICPE estimates.) The Census Bureau reported that because the accuracy of the intercensal estimates is important to overall ACS accuracy, it is important to use ACS data wherever appropriate to improve the intercensal estimates. The plan for the program on integrated estimates will use information from the 2000 Census, more current ACS distributions of population characteristics, and administrative records to produce improved population and housing unit estimates for all areas, including small areas.
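The weighting step noted above, in which ACS estimates are adjusted to an independent population benchmark, amounts to a ratio adjustment within control cells. The sketch below illustrates that idea only; the cell labels, figures, and function are invented for illustration, and the Census Bureau's actual weighting methodology is considerably more elaborate.

```python
# Hypothetical sketch of post-stratification: survey weights in each
# control cell (e.g., an age-sex group within a county) are scaled so
# that weighted totals match the independent population control.

def poststratify(weights: dict[str, float], sample_counts: dict[str, int],
                 controls: dict[str, float]) -> dict[str, float]:
    """Return adjusted per-person weights so that, cell by cell,
    weight * sample count equals the independent control total."""
    adjusted = {}
    for cell, control_total in controls.items():
        weighted_total = weights[cell] * sample_counts[cell]
        factor = control_total / weighted_total   # ratio adjustment
        adjusted[cell] = weights[cell] * factor
    return adjusted

# Invented example: two age cells in one county.
base_weights = {"age_18_44": 40.0, "age_45_64": 40.0}
counts = {"age_18_44": 250, "age_45_64": 200}
controls = {"age_18_44": 11_000.0, "age_45_64": 7_600.0}

new_w = poststratify(base_weights, counts, controls)
# Each cell's weighted count now reproduces its control total.
assert round(new_w["age_18_44"] * counts["age_18_44"]) == 11_000
```

Because the adjusted estimates inherit the control totals cell by cell, the accuracy of the intercensal controls feeds directly into ACS accuracy, which is the point the operations plan makes.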
The plan also discussed improving housing characteristics by incorporating ACS distributions of local area vacancy rates and household characteristics into statistical models to better estimate subcounty populations. No time schedule for completing the research was provided. Finally, the March 2003 American Community Survey Operations Plan, Release 1 discussed a plan in the ACS to cover group quarters. Persons living in group quarters live in places that the Census Bureau does not classify as housing units—for example, nursing homes, prisons, college dormitories, military barracks, institutions for juveniles, and emergency and transitional shelters for the homeless. Such residences accounted for roughly 2.8 percent of the population in 2000. Although data on group quarters were collected at the test sites beginning with 1999, data were not collected in C2SS or subsequent supplementary surveys. The operations plan discussed the use of an updated Census 2000 Special Places file for the sampling frame for the full ACS. In this case, the plan noted, training field representatives on collecting data from this population is to begin in October 2004, so that full data collection production can begin in January 2005. Census Bureau staff made a presentation on comparison and evaluation reports at the April 2003 meetings of the Census Advisory Committee. The paper’s author reported that work was under way on the comparison reports noted in the March 2003 operations plan, and she described the methodology to be used to evaluate differences between the 2000 long form and C2SS. She also reported that the results of the comparisons would be used to identify how the ACS should be improved but that additional research would be needed to address consistency over time between the 2000 Census and the full ACS. 
She stressed the importance of evaluating consistency “in educating users on the transition from the decennial census sample estimates to the ACS estimates.” With regard to the comparison report of selected demographic, housing, social, and economic characteristics of 3-year estimates from the ACS test sites to the 2000 Census, the Census Bureau let four contracts with local experts to conduct comparisons of 3-year averages of ACS data for 1999–2001 for selected test sites with selected 2000 Census long-form data as well as 2000 Census population and housing unit characteristics. The comparisons, prepared at the county and census tract levels, would be made for measures of data quality (self-response rates, sample unit nonresponse rates, item nonresponse rates, and sample completeness ratios), as well as for data levels (counts, percentages, means, and medians) for demographic, social, economic, and housing characteristics. In summer 2003, Census Bureau staff presented a number of research papers on the ACS at the annual Joint Statistical Meetings. Papers evaluated differences between long-form and C2SS data items, such as persons with disabilities, educational attainments, and income. Most of the papers that provided comparisons with long-form data indicated whether differences were statistically significant for every comparison. Comparisons were presented at a variety of geographic levels (national, state, and test site levels). Some papers cited operational differences as possible explanatory factors, but information was not presented using a standard set of factors. The Census Bureau published ACS-2010 Census Consistency Review Plan, an internal document, at the beginning of October 2003. Its purpose was to identify methods for major operations used in the ACS and for the 2010 Census that were likely to lead to inconsistent results and to recommend ways to address these inconsistencies. 
Papers prepared on these operations were to discuss how an issue might result in inconsistencies between the ACS and 2010 Census results and to set forth options for dealing with consistency issues, including a research process. The plan identified residence rules and group quarters as two topics. It did not discuss completing the work in time to incorporate it into the full ACS in the next several years. Also in October 2003, the Census Bureau made two public announcements related to the ACS development plan at the Census Advisory Committee meetings. Two papers related directly to projects described in American Community Survey Operations Plan, Release 1. In “Enhancing the Intercensal Population Estimates Program with ACS Data: Summary of Research Projects,” Weidman and Wetrogan reported on research to improve the intercensal estimates by using ACS data for two “high priority” areas—international migration and internal migration. This work was being conducted within the Program of Integrated Estimates. The second paper described options for determining population control weights for ACS implementation in fall 2004 but did not indicate that research was under way to determine the effect of the options. Another source of information related to ACS development was the various reports prepared as part of the Census 2000 Testing, Experimentation, and Evaluation Program. Schneider’s January 2004 report compared employment, income, and poverty estimates from the 2000 Census long form and the CPS. From this comparison, the author concluded that this work should be continued in an effort to use the results of the comparisons to improve consistency between data collected in the CPS and data in the ACS; the ACS uses the same questions as the 2000 long form. The author also identified for additional research long-form questions that performed badly, based on a reinterview survey. From May to July 2004, the Census Bureau released seven ACS evaluation reports. 
Four reports compared data from the 2000 Census long form and the C2SS at the national level. Two reports compared the long-form data with 1999–2001 data from the ACS test sites for selected counties, and one of the two also made the comparison at the tract level. The other report reviewed operational data from the 2001 and 2002 supplementary surveys. In most of the reports comparing long-form and ACS data, the Census Bureau identified additional work that was needed to improve the quality of the ACS estimates or to help explain differences between the two sets of data for 2000. As noted earlier, these comparisons were limited to the national level. (The seven new reports are listed in the bibliography.) According to the Census Bureau’s plans, the calculation of independent controls for population characteristics (age, sex, race, and ethnicity) and housing characteristics for the full ACS will require a significantly different methodology from that used for the ACS supplementary surveys. Controls will be needed at the same level of geographic area detail as those that were used for the 2000 Census long form and will need to reflect the new concepts of residence and reference period underlying the ACS. For the annual ACS supplementary surveys, ICPE estimates of these characteristics were used as the independent controls. ICPE uses Decennial Census short-form data as benchmarks and administrative record data to interpolate between and extrapolate from the census benchmarks. The program provides “official” annual estimates of population and housing characteristics at the county level, and for some subcounty levels, as of July 1 of each year, using the usual residence concept for seasonal residents. The program also provides annual estimates of total population and housing units for all areas of general-purpose government, such as cities, villages, towns, and townships. 
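Controlling survey estimates to independent population totals, as described above for ICPE, is typically done by ratio-adjusting the survey weights so that the weighted sample total matches the control. A minimal sketch of the general technique; the stratum, weights, and control total below are hypothetical illustrations, not Census Bureau figures:

```python
# A minimal sketch of ratio adjustment ("controlling") of survey weights to an
# independent population estimate. All numbers are hypothetical.

def ratio_adjust(weights, control_total):
    """Scale all weights by one factor so the weighted total equals the control."""
    factor = control_total / sum(weights)
    return [w * factor for w in weights]

# Hypothetical county stratum: four sample households whose initial weights
# sum to 9,500 persons, against an independent control of 10,000 persons.
initial = [2000.0, 2500.0, 2500.0, 2500.0]
adjusted = ratio_adjust(initial, 10000.0)

print(round(sum(adjusted), 6))  # weighted total now matches the control: 10000.0
```

In practice the adjustment is done separately within many demographic cells (age, sex, race, and ethnicity), which is why controls must exist at the required level of geographic and demographic detail.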
Table 3 shows information on the calculation of the independent controls used for the 2000 Census long form, the ACS supplementary series, and the fully implemented ACS through 2012. Using ICPE for the ACS supplementary surveys, the Census Bureau prepared controls for counties, or combinations of counties. As shown in table 3, for the residence concept, controls from the 2000 Census and ICPE, which were based on the usual residence concept, were used. The reference period for the ACS test program was July 1 for all years except 2000; for 2000, it was April 1. (Controls for the 2000 Census long form were also for April 1.) For the full ACS, the Census Bureau will use controls based on the current residence concept. According to the Census Bureau, the current residence concept recognizes that the place of residence does not have to be the same throughout a year, allowing the ACS data to more closely reflect the actual characteristics of each area. The Census Bureau will use the current residence concept because the ACS is conducted every month and produces annual averages rather than point-in-time estimates, as the Decennial Census does. Also, because the ACS data are collected monthly, it will be necessary to use independent controls that define the reference period as the average for the year using a July 1 reference period. To produce ACS estimates for the full sample, the Census Bureau will need new methodologies for calculating independent controls. For the first annual estimates, for 2005, a methodology will be needed to provide ACS-defined controls for all places with populations of 65,000 or more, including those for which intercensal population estimates are not available. For the 2005–07 estimates, which will be used to calculate the first multiyear averages, a methodology for controls for geographic areas with populations between 20,000 and 65,000 will be needed. 
For the 2008–12 estimates, a methodology for controls down to the geographic levels used for the 2000 Census long form will be needed. Finally, when the population and housing characteristics data from the 2010 Census short form become available and are incorporated into the ICPE estimates, another new methodology will be needed to revise the ACS controls for 2010. The Census Bureau also has reported that it is not planning to revise earlier years’ ACS data for consistency with revised 2010 estimates unless the inconsistencies between the 2010 ICPE and 2010 Census characteristics were significant. Table 4 shows the differences between population estimates at the county level for 2000 using ICPE based on the 1990 Census and the corresponding data from the 2000 Census. In 2000, the population estimates for almost 20 percent of the counties differed by more than 5 percent. For counties whose population was smaller than 20,000, almost 25 percent had similar differences. Census Bureau staff had long recognized the need for new methodologies to develop independent controls for the ACS. For example, a 1995 paper by Love, Dalzell, and Alexander, discussing issues related to evaluating the 1996 test site results, expressed concern about independent controls and residence rules, as well as the need for consultation with users. In 1998, the Census Bureau sponsored a conference on the quality of ACS data for rural data users. In the final report on this conference, the Westat authors concluded that the Census Bureau needed to continue and expand its contacts with stakeholders and to conduct additional research on several issues, including independent controls. Alexander and Wetrogan also discussed this issue at the 2000 Joint Statistical Meetings when they reviewed possible methods for using ICPE estimates. They also noted the need to consult with users on how to present information on the differences in ACS controls and ICPE in ACS publications. 
Census Bureau staff also recognized that the new ACS would create differences between (1) ACS population and housing characteristics data and the corresponding “official” data from the Decennial Census and (2) ACS population and housing characteristics data and the “official” ICPE population estimates, which are benchmarked to Decennial Census data. They also recognized that the creation of new controls for the ACS would result in inconsistencies between ACS data and data from federal household surveys, such as the CPS, whose population and housing characteristics are also based on the Decennial Census and ICPE estimates. Such differences might hinder the use of ACS data to expand and improve small geographic area estimates based on the other surveys. (CPS provides official national estimates of labor force information, such as the unemployment rate and income estimates used to calculate the number of persons in poverty.) In March 2003, in American Community Survey Operations Plan, Release 1, the Census Bureau announced that it did not have a final methodology and that one would not be established for several years. The plan identified research projects to be completed in preparation for full implementation of the ACS. One of these projects, “weighting and estimation,” covered the methodology for calculating the independent controls for the ACS; a second, “program of integrated estimates,” covered the calculation of these controls from the ICPE. This plan also reported that the results of these research projects would not be available in 2004 to begin implementing them with the start of the full ACS. Instead, the Census Bureau said it would use interim procedures and that it would take “extensive long-term investigation and experimentation” to develop final procedures. 
For the weighting and estimation project, the Census Bureau reported that it would be using an interim method to adjust the intercensal population and housing characteristics estimates to the ACS residence and reference period concepts. This project would include research to examine the need to achieve agreement between the estimates of occupied housing units, households, and householders at all geographic levels. The Census Bureau reported that work on the project to revise and simplify the weighting methodology began in early 2003, that preliminary papers documenting the revisions might be available by summer 2004, and that research would continue for several years. For the program of integrated estimates project, the operations plan discussed the need for more research to introduce improvements to the ICPE estimates using information from the 2000 Census, more current ACS distributions of population characteristics, and administrative records to produce improved population and housing unit estimates for all areas, including small geographic areas. The plan also discussed improving the housing characteristics. ACS distributions of local area vacancy rates and household characteristics can be incorporated into statistical models that use distributions of housing unit characteristics to better estimate subcounty populations. No time schedule was provided for completing the research. In October 2003, Census Bureau staff presented a paper at the Census Advisory Committee meetings that described the options being considered to convert the ICPE estimate to the current residence concept. The paper described options for determining controls for ACS implementation in fall 2004 but did not indicate that research was under way to determine the options’ effects. A second paper at the same meetings reported on research to improve the intercensal estimates by using ACS data for two “high priority” areas—international migration and internal migration. 
This work was being conducted as part of the Program of Integrated Estimates. “Given the newly benchmarked intercensal estimates, the following question arises regarding the use of the 2010 Census data in the ACS: Should ACS estimates continue to be controlled to 2010 Census data at the county or county group level and differences between the ACS and census population counts and characteristics allocated proportionately to the tract or block group levels? Or should ACS estimates be controlled to 2010 Census data at the tract and block group level, as would have been the case with a long form?” All the experts agreed that the ACS should be controlled to the decennial census, but several noted that they had not thought about the issue and had not heard anything from the Census Bureau on the issue. (The experts are listed in app. I.) The Census Bureau has identified operational issues with the ACS test programs, primarily from information from evaluation studies on the 2000 Decennial Census and Census Bureau staff research papers on comparisons between data collected in the ACS 2000 supplementary survey and the 2000 Decennial Census long form. These issues include problems with questionnaire design, nonresponse followup, and data capture, as well as coverage of persons living in group quarters. In January 2004, the Census Bureau released the results of a key evaluation study of 2000 Decennial Census long-form data, using a reinterview survey. The study identified problems with long-form questions, which are the same as those used for the ACS, and proposed several research efforts based on a statistical evaluation of the quality of the responses to each question. For questions identified as having significant quality problems, the study recommended research on the design of the form and placement of the questions and suggested using cognitive experts in testing revised questions. 
The study also recommended that the Census Bureau and BLS work on the ACS employment and unemployment questions to ensure that they would complement the BLS local area unemployment statistics program. The Census Bureau also conducted a study to evaluate the design of the ACS questions that are needed to implement the residence concept and reference period for the ACS. The study suggested that additional testing was needed for the questions about multiple residences (currently, the last set of questions in the housing section). It noted “that asking these questions on a person basis may produce different and probably better data than asking them on a household basis.” The study was limited in scope and did not assess how accurately ACS respondents assign persons associated with the household to a current residence. In the ACS, the Census Bureau uses “In the past 12 months . . .” whereas it used “In 1999 . . .” for the long form. Because the reference date is not fixed, it is important for a respondent to supply the date on which the ACS questionnaire was filled out. Otherwise, it cannot be determined whether there is an inconsistency in an ACS questionnaire received in late April 2004 that lists a resident aged 10 with a birthdate of April 15, 1993. Census Bureau staff also discussed operational issues in research papers, based on evaluations of comparisons between 2000 Decennial Census long-form and ACS 2000 supplementary survey data for selected items presented at the 2003 Joint Statistical Meetings. A paper on income data identified the new question on the reference period as a potential source of problems, even though an additional instruction had been added to the ACS questionnaire in 1999. The authors expressed concern that some ACS respondents may misinterpret the question on “income in the past 12 months” as a request for monthly income instead of income during the previous year. 
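The inconsistency check described above (a reported age of 10 against a birthdate of April 15, 1993) is only possible when the fill-out date is known. A minimal sketch of such a check; the dates are illustrative:

```python
# Sketch of the age/birthdate consistency check described in the text; the
# check requires knowing the date the questionnaire was filled out.
from datetime import date

def age_on(fill_date, birth_date):
    """Completed age as of the date the questionnaire was filled out."""
    years = fill_date.year - birth_date.year
    # Subtract one year if the birthday had not yet occurred that year.
    if (fill_date.month, fill_date.day) < (birth_date.month, birth_date.day):
        years -= 1
    return years

birth = date(1993, 4, 15)
# A reported age of 10 is consistent if the form was filled out before
# the 2004 birthday, but inconsistent if filled out after it.
print(age_on(date(2004, 4, 10), birth))  # 10
print(age_on(date(2004, 4, 30), birth))  # 11
```

Without the fill-out date, both answers are plausible for a questionnaire received in late April 2004, and the edit cannot be resolved.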
The paper also included recommendations for additional research on the effect of the data capture methods. For the 2000 long form, all data items were entered with an automated optical character recognition procedure; data from the ACS will be manually keyed. Another paper presented at the same 2003 meetings, which evaluated differences in the data on disabled persons, found large and significant differences at the national level and also recommended that new questions be tested. Additional areas were identified for further research, based on evaluations of questions such as educational enrollment, ancestry, and grandparents caring for grandchildren. These areas included specific facets of the mailout-mailback system and nonresponse follow-up. For example, nonresponse follow-up for the 2000 long form was conducted for all nonrespondents, but for the ACS test program and for the full ACS, nonresponse follow-up will be conducted for a one-third sample of nonrespondents. The Census Bureau also has discussed issues with the expansion of ACS coverage to include persons living in group quarters—for example, nursing homes, prisons, college dormitories, military barracks, institutions for juveniles, and homeless shelters. In October 2002, it informed its advisory committee members of the formation of a special planning team to address issues on the definition of group quarters and duplication in the address file. From the minutes of this meeting, it appears that this team will focus on group quarters in the context of the 2010 Census short form. In the ACS March 2003 operations plan, the Census Bureau reported on a new project to cover group quarters in the full ACS. The Census Bureau reported that the special project was needed because of the special challenges of developing an updated address list; in the past, such a list had been updated only once a decade. 
According to the Census Bureau, tests on the new list were to be completed in time for use in the full ACS in January 2005. In addition, an internal planning document issued in October 2003 identified group quarters (and residence rules) as special problems and instructed staff to provide recommendations on the collection of data on them in January 2004. The Census Bureau ordinarily tests new questions before adopting them; according to recent Census Bureau decisions, those tests would have to be completed in time for new questions to be incorporated into the 2008 ACS questionnaire. The Census Bureau has adjusted all dollar-denominated items from the ACS testing programs, such as incomes, housing values, rents, and housing-related expenditures, for inflation. For example, ACS data for 2001 and 2002 released in September 2003 for median household income are expressed in 2002 dollars. This practice means that when each added year of ACS data is released, all dollar-denominated items for prior years will be revised. The Census Bureau makes a similar adjustment for the annual income data collected in the CPS. Unlike the ACS data, however, the annual CPS data are released without the adjustment. In addition, the annual values collected in the ACS were adjusted to the calendar year. The Census Bureau will use the CPI for the annual and monthly adjustments for all geographic areas. A report prepared for HUD found problems with the adjustment, including (1) the lack of a “trending” adjustment in the calculation of annual averages, (2) the use of the adjustment for multiyear averages, (3) the adjustment for cost of living for data items other than income, and (4) the lack of the unadjusted annual data that would enable HUD to use alternative methodologies. In addition, research by Census Bureau staff questioned the adjustment for incomes when they found that it was a probable source of difference between income data from the supplementary survey and corresponding data from the CPS and the 2000 Census long form. 
“Making an inflation adjustment is not the same as trending. The cost of living adjustment assumes that the purchasing power measured at any point in the data collection period remains constant throughout the period. For example, assume that the cost of living rises by 3 percent a year. If a household reports an annual income of $50,000 in January, a cost of living adjustment to the end of the year would increase this income to $51,500, the amount needed in December to equal the purchasing power of $50,000 in January. A trending adjustment makes no assumption about purchasing power. It attempts to track movements in dollar income. Assume that dollar income is growing at 5 percent a year. Then a trending adjustment to the end of the year would increase the $50,000 reported in January to $52,500 in December.” “The Census Bureau plans to report income in constant dollars. Income information collected in the various months will be adjusted for inflation so that all collected income will be expressed in dollars with the same purchasing power, presumably the purchasing power of dollars in December of the survey year. For moving average tabulations, all income information will be adjusted for changes in purchasing power over the period used to calculate the moving average. In other words, income reported by a respondent in the first month of a five-year moving average will be adjusted for almost five years of inflation.” “The standard Census Bureau tables for areas over 65,000 will tabulate the rents reported by respondents over the twelve months during which data were collected. A unit reporting a contract rent of $800 in January might actually be paying $850 in December. The standard table would record this unit as having a rent of $800. The standard Census Bureau tables for areas under 20,000 will tabulate rents reported by respondents over a sixty-month period. 
A unit reporting a contract rent of $800 in the January of the first year might actually be paying $1,070 in December of the fifth year. The standard table would record this unit as having a rent of $800.” Such changes would not be captured with an adjustment based on the all-items CPI. “The ACS will generate income distributions comparable to those from the decennial census, but the distributions will have a feature that will complicate the use of income data from the ACS in APP measures. Whereas the decennial long form measures money income, the ACS reports average purchasing power.” The report thus recommended that HUD use the unadjusted data—data that the Census Bureau had not planned to publish—in order to make the changes needed for HUD. “If no CPI adjustment had been made to the dollars reported on either Census 2000 or C2SS/ACS, the difference between medians at the U.S. level would have been smaller than the 4.6 percent shown in Table 3. Instead, the difference would have been 2.5 percent. Since adjustment clearly played a role in determining the size of the difference between Census 2000 and C2SS/ACS estimates, it would be worthwhile to examine the costs and benefits of adjusting C2SS/ACS incomes as well as the choice of factors used to adjust them.” The authors summarized their findings by concluding that “it is clear that we are just at the beginning stages of understanding why Census 2000 and C2SS income figures differ.” They noted that the income comparisons are most critical because these Census Bureau data are used in the calculation of the number of people in poverty. In a December 2003 research paper, Census Bureau staff examined concerns about the official poverty measures’ lack of adjustment for geographic differences in cost of living. The poverty measures, like the ACS, are based on the assumption that the cost of living is the same throughout all geographic areas. 
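The distinction the quoted HUD report draws between a cost-of-living adjustment and a trending adjustment can be reproduced with the report’s own figures (3 percent inflation, 5 percent income growth); a minimal sketch:

```python
# Reproduces the quoted example: a cost-of-living adjustment inflates a
# January value to constant end-of-year purchasing power, while a trending
# adjustment projects the dollar amount forward at its own growth rate.

def cost_of_living_adjust(value, annual_inflation):
    """Express a January value in end-of-year dollars of equal purchasing power."""
    return value * (1 + annual_inflation)

def trend_adjust(value, annual_growth):
    """Project a January dollar value forward at its own rate of growth."""
    return value * (1 + annual_growth)

income_january = 50_000
print(round(cost_of_living_adjust(income_january, 0.03)))  # 51500
print(round(trend_adjust(income_january, 0.05)))           # 52500
```

The $1,000 gap between the two results is the substance of the report’s objection: the Census Bureau’s CPI adjustment holds purchasing power constant but does not track movements in dollar income or rents.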
The authors concluded that the use of a poverty measure that takes into account geographic differences in housing costs would significantly change the poverty measures in many states. One of the Census Bureau’s major justifications for the ACS test programs has been the ability to compare data collected in these programs with corresponding data from the 2000 Decennial Census short and long forms to identify operational problems. Another major justification for the ACS test programs has been the use of these comparisons, along with comparisons with corresponding data from the CPS, to help users make the transition from the 2000 long form to the ACS. “to make a transition from the Census 2000 long form to collecting long-form data throughout the decade, we will begin ACS data collection in 1,203 counties. This data collection will allow for comparison of estimates from Census 2000 with estimates from the ACS for all states, large cities, and population subgroups, and will help data users and the Census Bureau understand the differences between estimates from the ACS and the Census 2000 long form.” “These data will also contribute to a comparison with data from Census 2000 that is necessary because there are differences in methods and definitions between the census and the ACS. Moreover, decision makers will want to compare an area’s data to those from Census 2000. Comparisons using data from the operational test and from the 31 sites are essential to determine how much measured change between Census 2000 and future years of the ACS is real and how much is due to operational differences between the ACS and the census.” When the Census Bureau began in 2001 to report on full implementation of the ACS, its first report focused on the operational feasibility of conducting the ACS. Its second report in 2002 focused on differences in operational characteristics of the ACS and the census long form, such as response rates and the extent of imputations. 
The 2002 report stated that three reports evaluating differences between the ACS and census long form would be published at the end of 2003. The Census Bureau repeated this schedule in March 2003 when it released another official report on ACS plans. In September, we were told by one of the ACS experts that consultants had been hired to conduct evaluations for 4 of the 31 test sites. The reports on comparisons with long-form items and for the test sites were published in May, June, and July 2004. The results of these comparisons are similar to comparisons and evaluations of long-form data items previously prepared by Census Bureau staff, BLS, and GAO. “These comparisons showed large national differences for key items that did not appear to be accounted for by coverage differences between the two surveys. For example, at the national level, the largest differences were for these items: (1) for the number of housing units lacking complete plumbing facilities, with the long-form estimate 27 percent higher than the estimate from the supplementary survey, and (2) for the number of unpaid family workers, with the long-form estimate 59 percent lower. . . . We also found a great degree of variation in the state differences between the long form and the supplementary survey.” “We found that at the national and state levels, there were small differences for the unemployment rate and for the poverty rate for all individuals. In contrast, comparisons of these rates for the CPS with these two surveys showed larger differences. The national unemployment rate, according to the CPS, was 4.0 percent, compared with 5.8 percent for the long form and 5.4 percent for the supplementary survey. 
The national rate for individuals in poverty for the CPS was 11.3 percent, compared with 12.4 percent for the long form and 12.5 percent for the supplementary survey.” Given these results, we recommended that the Census Bureau expand the scope of evaluation studies to develop supplementary survey estimates for states and large places consistent with the 2000 long form and that it include in its evaluations comparisons of year-to-year changes for 2001 and 2002, using data from the supplementary surveys and the CPS at the national and state levels for key economic and housing items. “Relative to the CPS, the ACS consistently generates lower estimates of the labor force and employment but higher estimates of unemployment. These patterns are present in each of the years 2000, 2001, and 2002. They are repeated in nearly all state-level data as well.” He made a series of recommendations for additional research, some requiring additional information from the Census Bureau. “provided a summary of the major differences between the two income data sources, in terms of data collection, capture, and processing, and provided very preliminary assessments of the possible role these differences may have played.” The authors reported that additional work was needed to understand the differences and offered recommendations for further research. Another paper presented at the same meetings examined differences between the national estimates for people aged 5 or older with a disability—48.9 million was the 2000 Census long-form estimate, 39.7 million the C2SS estimate. The author did not determine which estimate was more reliable but did find that the wording of some questions might explain the overall difference. In addition, the author reported that more work, such as additional analysis of currently available data and testing of new questions, was needed to clearly identify the reasons for the difference. 
The differences in disability data were also the subject of a National Council on Disability position paper, which recommended changes to the questions on disability. “The Census Bureau should carry out more research to understand the differences between and relative quality of ACS estimates and long-form estimates, with particular attention to measurement error and error from nonresponse and imputation. The Census Bureau must work on ways to effectively communicate and articulate those findings to interested stakeholders, particularly potential end users of the data.” “The Census Bureau should make ACS data available (protecting confidentiality) to analysts in the 31 ACS test sites to facilitate the comparison of ACS and census long-form estimates as a means of assessing the quality of ACS data as a replacement for census long-form data. Again, with appropriate safeguards, the Census Bureau should release ACS data to the broader research community for evaluation purposes.” One of the major differences between the ACS and the long form it will replace is that the ACS will provide data for geographic areas with populations smaller than 65,000 in terms of multiyear averages. Because of the statistical properties of these averages and users’ unfamiliarity with them, we and many other stakeholders have identified these averages as a major challenge for users, including federal agencies. The Census Bureau has recognized the need for guidance on the averages but has not made public plans for the topics to be discussed or when the guidance will be published. “In discussing this issue, a number of the participants thought that averages were particularly problematic for those areas in which change is irregular. 
For example, the question was raised as to the meaning of ‘average poverty’ over a 5-year period in which poverty rose and fell from one year to the next and, thus, the average would have no obvious meaning.” The report made similar comments with regard to such characteristics as unemployment and income. Although the conference participants had generally agreed with these concerns, the report pointed out that annually updating the 5-year averages “will provide some insight into trends, although turning points will be difficult to discern precisely, as will short-term trends.” “Although a 5-year moving average will generally provide reasonably reliable cross-section statistics for all areas, including very small communities, some care will have to be exercised in choosing time periods for which changes in population or their characteristics are measured. With 5-year averages, four-fifths of the data in a pair of neighboring years will be identical. The change being measured will then be one-fifth of the difference between the most recent year and the first year of the earlier time period. The sampling errors of the differences will thus be based on annual sample sizes, not 5-year averages, and will generally be too large to make useful inferences for small areas. The two 5-year averages that are being compared should generally be discrete and non-overlapping periods, e.g., 2003–2007 and 2008–2012. These comparisons will have about the same reliability as changes between two censuses using data collected in the Census long form.” “encourage analysts to use the same length of cumulation when comparing areas of different sizes . . . . 
For example, we would use one year for comparing states, but would recommend 5 years for all the counties in a table comparing large and small counties.” Alexander noted that this approach differed from that of Kish, the developer of the concept of a “rolling sample,” who would “let us use tables of counties with one-year estimates for large counties, 3-year averages for medium-sized ones, and 5-year averages for small ones.” He concluded this section of the paper by saying, “It will be interesting to see what practices data users will adopt in this regard.” “If there is little change in the population over the time covered by the average, the interpretation is about the same as that of a point-in-time estimate with the advantage that the ACS estimate is more current than the historical decennial census long-form estimate.” The paper provided examples with “naive” assumptions about how users extrapolate between censuses to show that multiyear averages “work.” By implication, under other conditions, users will need guidance on when multiyear averages can be used. The paper also did not discuss the interpretation of changes in the multiyear averages, as in the 1999 Westat conference report or multiple estimates, which Alexander had discussed in his paper for the 2001 Statistics Canada conference. In September 2002, two reports focused on issues related to the statistical properties of multiyear averages. We published a report on several aspects of the ACS, including the selection of questions and the feasibility of conducting the ACS as a voluntary survey, and HUD released a report prepared for its staff on the use of the ACS for HUD programs. 
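The overlap arithmetic quoted earlier (neighboring 5-year averages share four-fifths of their data, so the change between them equals one-fifth of the difference between the two non-overlapping years) can be verified directly. The annual values below are hypothetical:

```python
# Verifies the overlap arithmetic for neighboring 5-year averages: four of
# the five years are shared, so the change between the averages is one-fifth
# of the difference between the two non-overlapping years.

def mean(values):
    return sum(values) / len(values)

annual = {2003: 100, 2004: 104, 2005: 103, 2006: 108, 2007: 110, 2008: 120}

avg_2003_2007 = mean([annual[y] for y in range(2003, 2008)])
avg_2004_2008 = mean([annual[y] for y in range(2004, 2009)])

change = avg_2004_2008 - avg_2003_2007
print(change)                              # 4.0
print((annual[2008] - annual[2003]) / 5)   # one-fifth of the 2003-to-2008 change: 4.0
```

Because the change reduces to a difference between two single years, its sampling error reflects annual sample sizes, which is why the quoted passage recommends comparing non-overlapping 5-year periods instead.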
“Analyze and report on differences between year-to-year changes for 2001 and 2002, using the data—from ACS special supplements and the CPS at the national and state levels—for key economic and housing characteristics, such as the unemployment and poverty rates, to determine the stability of the annual ACS data.” We also discussed the need for additional information on the characteristics of the multiyear averages to help federal agencies make the transition to the ACS. We specifically noted the need for information on the selection of ACS data for geographic areas with populations larger than 20,000 for which there will be multiple estimates. On this issue, we stated that, “In addition, we found that the ACS development program did not cover information about different ways to integrate the annual data for states and large counties and the 3- and 5-year averages for smaller counties.” For example, federal agencies that need state data can choose to use the annual data, multiyear averages of the annual data, or 3-year or 5-year ACS averages. Federal agencies that also need county data can choose to use the most recent annual data for large counties and adjust the averages of the smaller counties to agree with annual data. Alternatively, they can choose to use various combinations of multiyear averages. As many federal agencies, as well as state and local governments, will be using the ACS data for allocating funds, Census Bureau guidance would reduce the inconsistent use of the multiple estimates. HUD is a major user of Decennial Census long-form data for various program applications. Its contract with ORC Macro to review how the ACS will affect HUD programs that previously relied on the Decennial Census long form for geographic area data resulted in a report that made two points about the multiyear averages, in addition to raising the previously discussed issues on the inflation adjustment to income. 
One of these issues related to interpretations of changes in the multiyear averages and their stability; the other related to the availability of multiple estimates for the same area. The ORC Macro report noted that year-to-year stability is important and needs to be addressed. It warned that the “differences in the precision of estimates or year-to-year changes in estimates can create problems for HUD applications.” The report used eligibility and level of benefits as an example of what could vary because of the effect of sampling variability on these changes. ORC Macro also stated: “The ACS will report data using different reporting periods for different sized areas. Inconsistent or multiple reporting periods can create problems for HUD applications.” ACS data for many geographic areas will be available in terms of annual estimates and 3- and 5-year averages, and the annual and 3-year averages (for larger areas) will be available before estimates for smaller areas. As a result, HUD will have to choose from multiple measures for some geographic areas. The study noted that HUD might decide to (1) continue to use 2000 long-form data until 2008, when the first 5-year average data will be available for all levels of geography, or (2) use the most recently available data in all cases. ORC Macro’s report also expressed concern about the amount of annual ACS data that the Census Bureau will release for areas with populations smaller than 65,000, whose accuracy the Census Bureau has found does not meet publication standards. According to the study, the Census Bureau informed HUD that beginning in 2008, it would provide researchers and planners a “research file” containing annual ACS data for areas of all sizes, including census tracts. ORC Macro recommended that if the Census Bureau does release these data, HUD consider using these “unofficial” research file results in some of its applications. 
The study noted, however, that if HUD decided to use these unofficial data but other agencies decided not to use them, there would be no standardization across government programs in funding allocation where the same ACS items were used.

“is a smoothed estimate; by averaging a particular time period’s data observation with those within a particular time window, the resulting estimate is meant to follow the general trend of the series but not be as extreme as any of the individual points. The ramifications of this basic concept emerge when moving average estimates are entered into sensitive allocation formulas or compared against strict eligibility cutoffs. A smoothed estimate may mask or smooth over an individual year drop in level of need, thus keeping the locality eligible for benefits; conversely, it may also mask individual-year spikes in activity and thus disqualify an area from benefits. It is clear that the use of smoothed estimates is neither uniformly advantageous nor disadvantageous to a locality; what is not clear is how often major discrepancies may occur in practice.”

“It is incorrect to use annual estimates based on moving averages over several years when assessing change since some of the data are from overlapping time periods and hence identical. At the least, the results will yield incorrect estimates of the variance of the estimates of change. Therefore, users should be cautioned about this aspect of the use of moving averages.”

During the past decade’s development of the ACS, the Census Bureau has had many opportunities to consult with stakeholders and users and to take account of their input in making key decisions on the program. It has (1) sponsored NAS panels, (2) held user conferences, (3) hired consultants to organize two conferences, (4) met regularly with its advisory committees and other user groups, and (5) encouraged its staff to present reports at ASA meetings and meetings of similar professional organizations. 
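The smoothing behavior described in the quotations above can be illustrated with a short sketch. The numbers and the eligibility cutoff below are hypothetical, not actual ACS estimates; they simply show how a multiyear average can mask a single-year spike that would otherwise cross a strict threshold.

```python
# Illustrative sketch (hypothetical data): a 3-year moving average smooths a
# one-year spike, which can change the outcome when an estimate is compared
# against a strict eligibility cutoff.

def moving_average(series, window):
    """Trailing moving average over `window` periods."""
    return [
        sum(series[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(series))
    ]

annual = [12.0, 12.0, 18.0, 12.0, 12.0]  # hypothetical annual poverty rates (%)
cutoff = 15.0                            # hypothetical eligibility threshold

three_year = moving_average(annual, 3)   # [14.0, 14.0, 14.0]

# The spike year exceeds the cutoff on an annual basis...
print(annual[2] > cutoff)                       # True
# ...but every 3-year average stays below it, masking the spike.
print(all(avg < cutoff for avg in three_year))  # True
```

As the ORC Macro quotation notes, the same mechanism can cut the other way, smoothing over a one-year drop in need and keeping an area eligible; and because successive 3-year averages share two years of data, year-to-year comparisons of the averages overlap in exactly the way the second quotation warns about.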
In the past several years, we and other federal agencies have reported on the ACS and provided recommendations to the Census Bureau. The Census Bureau established the ACS Federal Agency Information Program in 2003, responding to a recommendation we had made. It also announced last year that it was looking into establishing a partnership with the Congress and its oversight entities.

“Eight years later, faced with the task of offering advice on making the vision of continuous measurement a reality in the 2010 census, the similarity between the arguments then and now is uncanny. Similar, too, are the points of concern; the current panel is hard-pressed to improve upon the basic summary of concerns outlined by our predecessors. We are, however, much more sanguine that a compelling case can be made for the ACS and that it is a viable long-form replacement in the 2010 census.”

The Census Bureau has neither responded to the panel’s first interim report in 2000 nor indicated that the recommendations were being adopted. The Census Bureau also has not responded to recommendations and issues raised by HUD and BLS. For example, it has not responded to HUD’s recommendations on the ACS adjustments to dollar-denominated items or to BLS’s recommendations on the ACS labor force questions. (On the issue of dollar-denominated items, we found no indication that the Census Bureau had ever consulted outside experts about either the methodology or the implementation.) Census Bureau summaries of discussion about the ACS at its Advisory Committee meetings from October 2000 to April 2003 also indicate a lack of responsiveness. During this period, committee members raised concerns about the ACS. In particular, they made recommendations about many of the issues we have discussed in this report, including the evaluations of ACS and long-form comparisons, the new residence rules, independent controls, ICPE, group quarters, and Spanish-language questionnaires. 
At the April 2003 meeting, ASA committee members also requested a change in the structure of the Advisory Committee meetings, asking the Census Bureau to spend less time on update sessions and more time on sessions devoted to gathering more detailed input, commentary, and recommendations on topics the Census Bureau needs and wants advice on. Although the Census Bureau has addressed issues related to ICPE and Spanish-language questionnaires, the meeting summaries do not report that it followed recommendations in other areas.

Additional staff who made major contributions to this report were Heather Von Behren, Penny Pickett, Mitchell Karpman, Michael Volpe, Andrea Levine, Patricia Dalton, and Robert Goldenkoff.

The first section in this bibliography lists documents on the history of the long form and mid-decade census. The remaining works are divided among numerous types of Census Bureau reports and papers, Association of Public Data Users papers, congressional hearings and testimony, and other reports and papers. Recent reports from the National Academy of Sciences are discussed in appendix II. Related GAO Products are listed in a separate section at the end of this report.

Alexander, Charles H. “Still Rolling: Leslie Kish’s ‘Rolling Samples’ and the American Community Survey.” In Proceedings of Statistics Canada Symposium 2001: October 16–19. Ottawa: Statistics Canada, 2002.

Anderson, Margo J., ed. Encyclopedia of the U.S. Census. Washington, D.C.: CQ Press, 2000.

House of Representatives, Committee on Post Office and Civil Service, Subcommittee on Census and Population. Review of Major Alternatives for the Census in the Year 2000. Serial 102-25. Washington, D.C.: August 1, 1991.

House of Representatives, Committee on Post Office and Civil Service. Census Confidentiality/Mid-Decade Sample Survey Bill. Report 93-246. Washington, D.C.: June 4, 1973.

House of Representatives, Committee on Post Office and Civil Service. 
Mid-Decade Censuses of Population, Unemployment, and Housing. Report 780. Washington, D.C.: August 12, 1965.

Salvo, Joseph, and Arun Peter Lobo. The American Community Survey: Quality of Response by Mode of Data Collection in the Bronx Test Site. Presented at the 2002 Joint Statistical Meetings, New York City, August 14, 2002.

American Community Survey Operations Plan, Release 1. Washington, D.C.: March 2003.

Meeting 21st Century Demographic Data Needs: Implementing the American Community Survey. Report 1. Demonstrating Operational Feasibility. Washington, D.C.: July 2001.

Meeting 21st Century Demographic Data Needs: Implementing the American Community Survey. Report 2. Demonstrating Survey Quality. Washington, D.C.: May 2002.

Meeting 21st Century Demographic Data Needs: Implementing the American Community Survey. Report 3. Testing the Use of Voluntary Methods. Washington, D.C.: December 2003.

Meeting 21st Century Demographic Data Needs: Implementing the American Community Survey. Report 4. Comparing General Demographic and Housing Characteristics With Census 2000. Washington, D.C.: May 2004.

Meeting 21st Century Demographic Data Needs: Implementing the American Community Survey. Report 5. Comparing Economic Characteristics With Census 2000. Washington, D.C.: May 2004.

Meeting 21st Century Demographic Data Needs: Implementing the American Community Survey. Report 6. The 2001-2002 Operational Feasibility Report of the American Community Survey. Washington, D.C.: May 2004.

Meeting 21st Century Demographic Data Needs: Implementing the American Community Survey. Report 7. Comparing Quality Measures: The American Community Survey's Three-Year Averages and Census 2000's Long Form Sample Estimates. Washington, D.C.: June 2004.

Meeting 21st Century Demographic Data Needs: Implementing the American Community Survey. Report 8. Comparison of the American Community Survey Three-Year Averages and the Census Sample for a Sample of Counties and Tracts. Washington, D.C.: June 2004. 
Meeting 21st Century Demographic Data Needs: Implementing the American Community Survey. Report 9. Comparing Social Characteristics With Census 2000. Washington, D.C.: June 2004.

Meeting 21st Century Demographic Data Needs: Implementing the American Community Survey. Report 10. Comparing Housing Characteristics With Census 2000. Washington, D.C.: July 2004.

The presentations in this section were made at meetings of the Census Bureau’s Decennial Census Advisory Committee, Census Advisory Committee of Professional Associations, and Race and Ethnic Advisory Committees.

The ACS: Data Products to Meet User Needs. Race and Ethnic Advisory Committee meeting, Washington, D.C., March 14, 2001.

Alexander, Charles, Alfredo Navarro, and Deborah Griffin. Update on ACS Evaluations. Decennial Census Advisory Committee meeting, Washington, D.C., November 5, 2001.

Gordon, Nancy. The American Community Survey. Joint Meeting of the Census Bureau Advisory Committees, Washington, D.C., July 28, 2000.

Gordon, Nancy. The American Community Survey. Decennial Census Advisory Committee meeting, Washington, D.C., September 21–22, 2000.

Gordon, Nancy. The American Community Survey. Race and Ethnic Advisory Committees meeting, Washington, D.C., November 2, 2000.

Gordon, Nancy. American Community Survey Update. Decennial Census Advisory Committee meeting, Washington, D.C., May 2, 2002.

Griffin, Deborah. An Overview of the Research and Evaluation Program for the American Community Survey. Decennial Census Advisory Committee meeting, Alexandria, Virginia, October 2–4, 2002.

Griffin, Deborah H. Comparing Characteristics from the American Community Survey and Census 2000: Methodology. Census Advisory Committee of Professional Associations meeting, Washington, D.C., April 10–11, 2003.

Navarro, Alfredo. American Community Survey: Use of Population Estimates as Controls in the ACS Weighting. Census Advisory Committee of Professional Associations meeting, Washington, D.C., October 23, 2003.

Navarro, Alfredo. 
A Discussion of the Quality of Estimates from the American Community Survey for Small Population Groups. Census Advisory Committee of Professional Associations meeting, Washington, D.C., October 2–3, 2002.

Weidman, Lynn, and Signe Wetrogan. Enhancing the Intercensal Population Estimates Program with ACS Data: Summary of Research Projects. Census Advisory Committee of Professional Associations meeting, Washington, D.C., October 23, 2003.

The memorandums listed here, from the 20 in the Continuous Measurement series, are those most directly related to topics we review in this report.

Alexander, Charles H. A Continuous Measurement Alternative for the U.S. Census. CM-10, October 28, 1993. (CM-11, a summary of this paper, was presented at the 1993 annual meeting of the American Statistical Association, San Francisco, California, August 10, 1993.)

Alexander, Charles H. Further Exploration of Issues Raised at the CNSTAT Requirements Panel Meeting. CM-13. Internal Census Bureau memorandum, Washington, D.C., January 31, 1994.

Alexander, Charles H. A Prototype Continuous Measurement System for the U.S. Census of Population and Housing. CM-17. Presented at the annual meeting of the Population Association of America, Miami, Florida, May 5, 1994.

Alexander, Charles H. Some Ideas for Integrating the Continuous Measurement System into the Nation’s System of Household Surveys. CM-19A. Internal Census Bureau memorandum, Washington, D.C., January 6, 1995.

2004 Census Test Operational Plan. Washington, D.C.: September 29, 2003.

2010 Census Decision Memorandum Series No. 5, Finalizing Content for the 100 Percent Items in the 2010 Census and the American Community Survey. Washington, D.C.: June 3, 2004.

2010 Census Planning Memorandum Series No. 24, Action Plan: 2010 Research and Development Planning Group on Race and Ethnic Data Collection, Tabulation, and Editing. Washington, D.C.: June 9, 2004.

2010 Census Planning Memorandum Series No. 
26, Action Plan: 2010 Research and Development Planning Group on Special Places/Group Quarters Development and Testing. Washington, D.C.: March 8, 2004.

ACS-2010 Consistency Review Plan. Washington, D.C.: October 1, 2003.

American Community Survey Development Report Series Program Plan. Washington, D.C.: rev. June 12, 2002.

Abramson, Florence. Special Place/Group Quarters Enumeration. Census 2000 Testing, Experimentation, and Evaluation Program, Topic Report No. 5. U.S. Census Bureau, Washington, D.C., February 2004.

Adlakha, Arjun, J. Gregory Robinson, Kirsten West, and Antonio Bruce. Assessment of Consistency of Census Data with Demographic Benchmarks at the Subnational Level. Census 2000 Evaluation O.20. U.S. Census Bureau, Washington, D.C., August 18, 2003.

Clarke, Sandra, John Iceland, Thomas Palumbo, Kirby Posey, and Mai Weismantle. Comparing Employment, Income, and Poverty: Census 2000 and the Current Population Survey. Census 2000 Auxiliary Evaluation. U.S. Census Bureau, Washington, D.C., September 2003.

Palumbo, Thomas, and Paul Siegel. Accuracy of Data for Employment Status as Measured by the CPS-Census 2000 Match. Census 2000 Evaluation B.7. U.S. Census Bureau, Washington, D.C., May 4, 2004.

Schneider, Paula. Content and Data Quality in Census 2000. Census 2000 Testing, Experimentation, and Evaluation Program, Topic Report No. 12. U.S. Census Bureau, Washington, D.C., January 22, 2004.

Bureau staff presented many ACS-related papers at the August 2003 Joint Statistical Meetings in San Francisco, California. We reviewed the papers in this section in detail because they were related to comparisons between ACS estimates and 2000 Census results.

Boggess, Scott, and Nikki L. Graf. Measuring Education: A Comparison of the Decennial Census and the American Community Survey. Presented at the Joint Statistical Meetings, San Francisco, California, August 7, 2003.

Dye, Jane Lawler. 
Grandparents Living with and Providing Care for Grandchildren: A Comparison of Data from Census 2000 and 2000 American Community Survey. Presented at the Joint Statistical Meetings, San Francisco, California, August 7, 2003.

Love, Susan, and Deborah Griffin. A Closer Look at the Quality of Small Area Estimates from the American Community Survey. Presented at the Joint Statistical Meetings, San Francisco, California, August 4, 2003.

Posey, Kirby G., Edward Welniak, and Charles Nelson. Income in the American Community Survey: Comparisons to Census 2000. Presented at the Joint Statistical Meetings, San Francisco, California, August 7, 2003.

Raglin, David A., Theresa F. Leslie, and Deborah H. Griffin. Comparing Social Characteristics between Census 2000 and the American Community Survey. Presented at the Joint Statistical Meetings, San Francisco, California, August 3, 2003.

Stern, Sharon M. Counting People with Disabilities: How Survey Methodology Influences Estimates in Census 2000 and the Census 2000 Supplementary Survey. Presented at the Joint Statistical Meetings, San Francisco, California, August 7, 2003.

Alexander, Charles H. American Community Survey Data for Economic Analysis (October 2001). Presented at the Census Advisory Committee of the American Economic Association meeting, Washington, D.C., October 18–19, 2001.

Alexander, Charles H. Recent Developments in the American Community Survey. Presented at the 1998 Joint Statistical Meetings, Dallas, Texas, August 12, 1998.

Alexander, Charles H., Sharon Brown, and Hugh Knox. American Community Survey Data for Economic Analysis (December 2001). Presented at the Federal Economic Statistics Advisory Committee meeting, Washington, D.C., December 14, 2001.

Alexander, Charles H., Scot Dahl, and Lynn Weidman. Making Estimates from the American Community Survey. Presented at the 1997 Joint Statistical Meetings, Anaheim, California, August 13, 1997.

Alexander, Charles H., and Signe Wetrogan. 
Integrating the American Community Survey and the Intercensal Demographic Estimates Program. Presented at the 2000 Joint Statistical Meetings, Indianapolis, Indiana, August 14, 2000.

Butani, Shail, Charles Alexander, and James Esposito. Using the American Community Survey to Enhance the Current Population Survey: Opportunities and Issues. Presented at the 1999 Federal Committee on Statistical Methodology Research Conference, Arlington, Virginia, November 15–17, 1999.

Davis, Mary Ellen, and Charles H. Alexander, Jr. The American Community Survey: The Census Bureau's Plan to Provide Timely 21st Century Data. Missouri Library World, Spring 1997.

DeMaio, Theresa J., and Kristen A. Hughes. Report of Cognitive Research on the Residence Rules and Seasonality Questions on the American Community Survey. U.S. Bureau of the Census, Statistical Research Division, Washington, D.C., July 2003.

Love, Susan, Donald Dalzell, and Charles Alexander. Constructing a Major Survey: Operational Plans and Issues for Continuous Measurement. Presented at the 1995 Joint Statistical Meetings, Orlando, Florida, August 16, 1995.

Nelson, Charles, and Kathleen Short. The Distributional Implications of Geographic Adjustment of Poverty Thresholds. U.S. Bureau of the Census, Housing and Household Economics Statistics Division, Washington, D.C., December 2003.

Posey, Kirby G., and Edward Welniak. Income in the ACS: Comparisons to the 1990 Census. Presented at the American Community Survey Symposium, Suitland, Maryland, March 1998.

Smith, Amy Symens. The American Community Survey and Intercensal Population Estimates: Where Are the Crossroads? Technical Working Paper 31, U.S. Census Bureau, Population Division, Washington, D.C., December 1998.

Davis, Mary Ellen. 
The American Community Survey Data Products. Alexandria, Va.: October 20, 2003.

Gage, Linda, State of California, Department of Finance. American Community Survey: Research by the Data User Community. Alexandria, Va.: October 20, 2003.

Petroni, Rita. How Do 3-Year Averages from the ACS Compare to Census 2000 Data? (Preliminary Results). Alexandria, Va.: October 20, 2003.

Salvo, Joseph, City of New York, Planning Department. American Community Survey: Research by the Data User Community. Alexandria, Va.: October 20, 2003.

Scarr, Harry A., Deputy Director, Census Bureau. Continuous Measurement. Association of Public Data Users, Washington, D.C.: October 16, 1994.

Barron, William, Jr., Acting Director, U.S. Bureau of the Census, before the U.S. House of Representatives, Committee on Government Reform, Subcommittee on the Census. The Census Bureau’s Proposed American Community Survey (ACS), Serial 107-9. Washington, D.C.: June 13, 2001.

Kincannon, Charles Louis, Director, U.S. Bureau of the Census, before the U.S. House of Representatives, Subcommittee on Technology, Information Policy, Intergovernmental Relations, and the Census. The American Community Survey: The Challenges of Eliminating the Long Form from the 2010 Census, Serial 108-97. Washington, D.C.: May 13, 2003.

Prewitt, Kenneth, Director, U.S. Bureau of the Census, before the U.S. House of Representatives, Committee on Government Reform, Subcommittee on the Census. The American Community Survey: A Replacement for the Census Long Form? Serial 106-246. Washington, D.C.: July 20, 2000.

Kalton, Graham, and others. The American Community Survey: The Quality of Rural Data, Report of a Conference. Rockville, Md.: Westat, June 29, 1998.

Nardone, Thomas, and others. Examining the Discrepancy in Employment Growth between the CPS and the CES. Washington, D.C.: FESAC, October 17, 2003.

National Council on Disability. Improving Federal Disability Data. 
Washington, D.C.: January 9, 2004.

ORC Macro. The American Community Survey: Challenges and Opportunities for HUD. Calverton, Md.: September 27, 2002.

Vroman, Wayne. Comparing Labor Market Indicators from the CPS and ACS. Washington, D.C.: Urban Institute, September 2003.

Westat Inc. The American Community Survey: A Report on the Use of Multi-Year Averages. Rockville, Md.: April 30, 1999.

2010 Census: Cost and Design Issues Need to Be Addressed Soon. GAO-04-37. Washington, D.C.: January 15, 2004.

Medicaid Formula: Differences in Funding Ability among States Often Are Widened. GAO-03-620. Washington, D.C.: July 10, 2003.

Formula Grants: 2000 Census Redistributes Federal Funding Among States. GAO-03-178. Washington, D.C.: February 24, 2003.

Major Management Challenges and Program Risks: Department of Commerce. GAO-03-97. Washington, D.C.: January 1, 2003.

The American Community Survey: Accuracy and Timeliness Issues. GAO-02-956R. Washington, D.C.: September 30, 2002.

Legal Authority for American Community Survey. B-289852. Washington, D.C.: April 4, 2002.

Medicaid Formula: Effects of Proposed Formula on Federal Shares of State Spending. GAO/HEHS-99-29R. Washington, D.C.: February 19, 1999.

Decennial Census: Overview of Historical Census Issues. GAO/GGD-98-103. Washington, D.C.: May 1, 1998.

Poverty Measurement: Adjusting for Geographic Cost-of-Living Difference. GAO/GGD-95-64. Washington, D.C.: March 9, 1995.

Status of the Statistical Community after Sustaining Budget Reductions. GAO/IMTEC-84-17. Washington, D.C.: July 18, 1984.
The Congress asked GAO to review operational and programmatic aspects of the Census Bureau's ACS that will affect the reliability of small geographic area data. The ACS will be a mail survey of about 3 million households annually, whose results will be cumulated over 5 years to produce estimates that will replace information previously provided by the Decennial Census long form. In addition, annual data will be published for geographic areas with populations of 65,000 or more and as 3-year averages for areas with populations of 20,000 to 65,000. Annual data will be published beginning in 2006 with data for 2005. The 5-year averages for 2008-12 will provide data for small geographic areas.

The Census Bureau's development of the American Community Survey goes back several decades and has included intensive research and field testing programs, as well as substantial outreach efforts, in particular through the reports and workshops at the National Academy of Sciences (NAS). However, if the ACS is to be an adequate replacement for the Decennial Census long form as the major source of data on small geographic areas and if it is to provide similar annual data for larger areas, the Census Bureau will need to incorporate in a timely manner the resolution of issues it has already identified in the ACS testing and 2000 Decennial Census evaluation programs, such as the residence concept, group quarters, and questions on disability; complete the ACS testing plan as originally planned, such as the comparison and evaluation of long form-ACS supplementary survey data at the state level, to identify other unresolved issues and to provide information for users of 2000 Decennial Census long-form data that will be necessary for the transition to the full ACS; evaluate and consult with stakeholders and users on the resolution of issues identified in this report, such as the methodology for deriving population and housing controls, guidance for users on the impact of the characteristics of 
multiyear averages for small geographic areas, and the presentation of dollar-denominated values; coordinate the results of the testing program for the 2010 Decennial Census short form with the ACS implementation schedule; and resolve all issues so that the ACS estimates beginning with 2008 are consistent with the ACS estimates for 2009-12 and with the 2010 Census short form. Although the Census Bureau has solicited advice from external stakeholders and users and has supported research by its own staff on most of the issues identified in this report, there is no indication that the Census Bureau has yet followed this advice or implemented plans for consultation on resolving these issues. In addition, it has been more than a year since the Census Bureau announced that it was looking into establishing an ACS partnership program that would involve advisory groups and expert panels to improve the program, but no such program has been established. Another issue related to the proposed ACS is how the Census Bureau might provide more timely and reliable small geographic area data. This goal could be accomplished, but it would require additional funding. The most direct approach would be to increase the sample size for 2009-11. This increase would enable the Bureau to provide small geographic area data that would be the replacement for the 2010 Census long form 1 year earlier.
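The ACS publication schedule described above amounts to a simple threshold rule on area population: annual estimates at 65,000 or more, 3-year averages from 20,000 up to 65,000, and 5-year averages below 20,000. A minimal sketch of that rule (the function name and return labels are ours, for illustration only):

```python
# Sketch of the ACS publication rule described in this report (illustrative;
# the function name and labels are ours, not the Census Bureau's).

def acs_data_product(population):
    """Return which ACS estimate is published for an area of this size."""
    if population >= 65_000:
        return "annual estimates"
    if population >= 20_000:
        return "3-year averages"
    return "5-year averages"

print(acs_data_product(700_000))  # annual estimates
print(acs_data_product(40_000))   # 3-year averages
print(acs_data_product(8_000))    # 5-year averages
```

The rule makes concrete why data for the smallest areas lag the most: an area below 20,000 must wait for a full 5-year accumulation (2008-12 for the first release covering all geography).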
The Office of Personnel Management (OPM) has identified two kinds of furloughs: an administrative furlough, which is a planned event by an agency designed to absorb reductions necessitated by downsizing, reduced funding, lack of work, or a budget situation other than a lapse in appropriations; and a shutdown furlough, which results from a lapse in appropriations. DOD had not implemented a department-wide administrative furlough prior to 2013, according to officials within the Office of the Under Secretary of Defense for Personnel and Readiness. Since 1980, however, DOD has conducted two shutdown furloughs based on lapses in appropriations: during November 14–17, 1995, and during October 1–17, 2013.

DOD’s total workforce has grown since the events of September 2001. The civilian workforce has grown from about 687,000 full-time equivalents in fiscal year 2001 to about 782,000 full-time equivalents projected in fiscal year 2015. DOD’s active and reserve military workforce grew between fiscal year 2001 and fiscal year 2011 from about 2.25 million to about 2.27 million, with a budgeted request for a military workforce of about 2.13 million for fiscal year 2015. Further, DOD has increasingly relied on contracted support both overseas and in the United States to perform many of the same functions as civilian employees, including management support, communication services, and intelligence. DOD’s total obligations for contracted services grew from about $96 billion in fiscal year 2001 to about $174 billion for an estimated contracted services workforce of about 670,000 full-time equivalents in fiscal year 2012. However, with the drawdown in operations in Iraq and Afghanistan, as well as changing priorities and missions, most military services project a decrease in their military and civilian workforce through fiscal year 2017. 
For over a decade, strategic human capital management for all federal civilians, including those at DOD, has been on our High-Risk list because of the long-standing lack of leadership in this area. We have conducted assessments of DOD’s strategic workforce plans since 2008, and our body of work has found that DOD’s efforts to address mandated strategic workforce planning requirements have been mixed. In our most recent report, in September 2012, on the department’s overall civilian strategic workforce plan, we recommended that DOD take a number of actions, including providing guidance for developing future strategic workforce plans that clearly directs the functional communities to provide information that identifies not only the number or percentage of personnel in its military, civilian, and contractor workforces but also the capabilities of the appropriate mix of those three workforces. (DOD defines a functional community as employees who perform similar functions; functional communities are discussed further in the background section of this report.) DOD either concurred or partially concurred with our recommendations, stating that, among other things, the department was deliberate in applying lessons learned from previous workforce plans and identifying specific challenges and the actions being taken to address those challenges to meet statutory planning requirements by 2015.

DOD was also affected by a Continuing Resolution that held funding at fiscal year 2012 levels through March 27, 2013, even though DOD had requested funding increases in most areas of operations for fiscal year 2013. In addition, in January 2013, DOD reduced its spending to prepare for a potential sequestration, a process of automatic, largely across-the-board spending reductions under which budgetary resources are permanently canceled to enforce certain budget policy goals. 
DOD took several actions to prepare for a potential sequestration, such as authorizing components in January 2013 to initiate a hiring freeze as needed, releasing term and temporary employees, and instructing components to draft plans to include the possibility of furloughs of up to 22 workdays. On February 20, 2013, DOD provided Congress with notice of its intent to furlough. We reported in November 2013 that DOD’s effort to address sequestration, a reduction of $37 billion in DOD’s discretionary budget, was a short-term response focused on addressing the immediate funding reductions for fiscal year 2013. As a result of sequestration and increased Overseas Contingency Operations requirements, on March 13, 2013, DOD issued guidance for components to plan for a furlough of its civilian personnel for up to 22 workdays. On March 28, 2013, DOD reduced the number of planned furlough days from 22 to 14 in response to the enactment of a defense appropriations act for the remainder of fiscal year 2013. The Secretary of Defense also decided to apply the furlough across the department to allow for a reallocation of resources throughout the department to address national security priorities. DOD also took other actions across the department to reduce its budget in response to the sequestration, such as curtailing training for certain units and postponing planned maintenance.

The size and complexity of DOD’s worldwide operations, involving a requested base budget of approximately $495.6 billion in fiscal year 2015, and the need to reduce its budget in an ongoing fiscally constrained environment require that DOD have accurate, complete, and timely financial information available to make management decisions. 
DOD has been on our High-Risk List for financial management since 1995 because of financial management weaknesses that affect its ability to control costs; ensure accountability; anticipate future costs and claims on the budget; detect fraud, waste, and abuse; and prepare auditable financial statements. DOD is one of the few federal entities that cannot accurately account for its spending or assets. We have reported that while DOD has made efforts to improve financial management, it still has much work to do if it is to meet its long-term goals of improving financial management and achieving full financial statement auditability. On May 14, 2013, the Secretary of Defense, in an effort to minimize adverse effects on military readiness, issued a memorandum that directed a furlough of most of the department's civilian personnel in response to major budgetary shortfalls from the sequestration. The memorandum required most civilians to be furloughed for up to 11 days beginning on July 8, 2013, typically for 1 day per week until September 30, 2013. The Secretary of Defense also directed all components to monitor funding closely for the remainder of fiscal year 2013 so that, if the budget situation permitted, DOD could shorten the length of the furloughs. The memorandum listed categories of exceptions to the furlough, including personnel assigned to a combat zone, those necessary to protect the safety of life and property, and Navy shipyard employees. See appendix II for a complete list of exceptions granted. Additionally, the Secretary of Defense's May 14, 2013, memorandum included an associated schedule for issuance of furlough proposal notices at least 30 days in advance of the furlough, allowing at least 7 days for response by the employee. The memorandum also identified the following key dates: May 28–June 5: Furlough proposal notices were to be served to individual employees subject to furlough. 
June 4–June 12: Individual employee reply periods—time allotted for employees to acknowledge receipt of the furlough proposal notice, among other things—ended 7 calendar days from when the proposal was received, unless component procedures allowed for a different reply period. June 5–July 5: Furlough decision letters were to be served to individual employees subject to furloughs, depending on when the proposal was received and prior to the first day of furlough. July 8: Furlough period was to begin no earlier than this date. An attachment to the memorandum noted that defense agencies and military services should designate a Deciding Official who would be accountable for making final decisions on furloughs for individual employees after carefully considering the employee's reply, if any, and the needs of the department. The Assistant Secretary of Defense for Readiness and Force Management issued clarifying guidance throughout the planning and implementation of the furlough that, among other things, provided standard templates for the proposal and decision notice letters to prepare and issue to civilian employees. In addition, guidance was issued to provide clarification on the use of leave without pay during the time of the furlough; to help ensure that borrowed military personnel were not used to compensate for work resulting from the furlough; and to prohibit contracted support from being assigned or permitted to perform additional work or duties to compensate for workload or productivity loss resulting from the furlough. Based on the Secretary of Defense's May 14, 2013, memorandum, managers were given the authority to develop specific furlough procedures in order to minimize adverse mission effects and limit the harm to morale and productivity. The memorandum also noted that bargaining with unions may be required. As a result, military departments developed implementing guidance based on the Secretary of Defense's memorandum requiring the furlough. 
For example, the Army issued a memorandum on command reporting requirements for the furlough to capture information on the issuance of furlough proposal notices and decision letters. Also, the Navy issued supplemental guidance on the scheduling of furloughs that included details on commander authorities to make decisions on the scheduling of furlough days for each employee, subject to union negotiation, as appropriate. In addition, the Air Force excluded from furlough those civilian employees whose homes were destroyed or rendered uninhabitable by the Oklahoma tornadoes in 2013. On August 6, 2013, the Secretary of Defense issued a memorandum reducing the number of furlough days from 11 to 6 days for most civilians. This action also cancelled furloughs for Department of Defense Education Activity instructional and support staff on 10-month contracts, and required new hires whose furlough period began after July 8, 2013, to take 2 furlough days per pay period between their furlough start date and August 17, 2013. As discussed in greater detail later in the report, the department was able to reduce the number of furlough days after completing several transfer and reprogramming actions, which gave the department additional flexibility and resulted in substantial realignment of funds—about $8.6 billion. Additional guidance was issued after the reduction in furlough days to address those who took more than 6 days of furlough, by allowing them to substitute any excess furlough days for leave. In the event an employee did not have sufficient leave accrued, or the employee elected not to substitute leave, excess furlough time remained as unpaid time. Ultimately, DOD reported that it furloughed 624,404 civilians and excepted 142,602 from furlough. Specifically, of the DOD civilians furloughed, the Army furloughed about 221,000; the Navy furloughed about 153,000; the Air Force furloughed about 157,000; and the other DOD agencies furloughed about 93,000 (see table 1 below). 
Based on the Secretary of Defense's May 14, 2013, memorandum initiating the furlough, managers carried out the planning and implementation of the furlough for their respective offices. Specifically, in addition to the Office of the Secretary of Defense's efforts to notify its 10 unions with national consultation rights about the decision to furlough, managers carried out negotiations with over 1,500 local bargaining units on the implementation of the furlough of civilians, which included issues such as who would be furloughed, who would be excepted from furlough, and the scheduled furlough days. Officials at Brooke Army Medical Center, Norfolk Naval Shipyard, and Air Mobility Command described the following actions they took to implement the furlough at their sites: Notification letter process: Officials at these sites described their process for designating a Deciding Official and the distribution, receipt, and tracking of furlough notification letters. For example, at Brooke Army Medical Center, the Deciding Official hand-signed furlough notification letters for over 2,700 civilians; the letters were then distributed to the medical departments to be handed out by the civilians' supervisors. If a supervisor was unable to hand-deliver the notification letter, the letter was mailed to the civilian via regular and certified mail. The supervisors then followed up with the civilians who received the letters to obtain their signature or acknowledgement of receipt of the notification and provided copies to the civilians and to the human resources office to be placed in the official personnel files. Work schedules: Civilian personnel were assigned varying schedules for the furlough, depending on negotiations with unions and consideration of mission requirements. For example, officials at these sites said that some offices implemented Friday as the civilian furlough day, while other offices spread out the furlough days of their civilians across the work week. 
In addition, some civilians took their furlough days in clusters rather than just 1 day a week. Tracking of furlough days: Officials at these sites explained that they monitored the timecards of civilians who were furloughed to ensure that they were taking the required number of furlough days and in order to know when the furlough would end for each civilian based on their individual schedule or circumstance. For example, officials at Air Mobility Command explained that their office of financial management generated reports on the number of furlough hours taken based on timecard reporting, and when the number of furlough days was reduced to 6 days, officials audited the timecard system to ensure civilians under their purview had taken the correct number of furlough days. Exceptions process: The exceptions determination process varied at these sites, and additional exceptions to the furlough were sought and granted as the department clarified the personnel covered under categorical exceptions and as commands granted individual exceptions. For example, officials at Brooke Army Medical Center set up a team early on to identify and prioritize department needs within the hospital to ensure they were able to meet the mission of providing adequate staff and high-quality care to patients. This allowed Brooke Army Medical Center to identify civilian personnel to except from the furlough based upon prioritized needs, such as evening shift supervisors within its nursing department. Also, the Public Works office at Norfolk Naval Shipyard requested exceptions for some of its mechanics and utilities staff to provide 24-hour support. As a result of DOD furloughing 624,404 civilians, the Office of the Comptroller reported that the department saved approximately $1 billion from the furlough. 
These cost savings were calculated using Defense Finance and Accounting Service–reported payroll data by summing the result of each employee’s hourly pay rate multiplied by the number of furlough hours recorded in his or her time card. Office of the Comptroller officials reported they provided DOD components with Defense Finance and Accounting Service payroll data reports for their respective civilian employees and requested that they validate the data to ensure that all employees who were required to be furloughed correctly recorded their furlough days in the timekeeping systems. The cost savings calculation did not include the last week of the fiscal year because the last pay period of the fiscal year—which ran from September 22–October 5, 2013— overlapped with the first week of fiscal year 2014 and included leave without pay recorded for the government shutdown. Office of the Comptroller officials stated that the savings amount from the final pay period in the fiscal year was expected to be minimal as the majority of the furlough savings were realized by August 24, 2013, when most furloughed civilians would have taken their required 6 furlough days. DOD’s reported cost savings of $1 billion does not account for other costs the department incurred while implementing the furlough, such as administrative costs from processing furlough notification letters or developing furlough guidance, as well as costs from the loss of productivity due to civilians being furloughed. For example, officials we interviewed at Brooke Army Medical Center stated that many hours were spent on administrative tasks to prepare for and implement the furlough. In addition, officials from Army Medical Command explained that there was a loss of productivity, as staff set aside their primary tasks to concentrate on implementing the furlough. 
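The payroll-based calculation described above—each employee's hourly pay rate multiplied by the furlough hours recorded on his or her time card, summed across the workforce—can be sketched as follows. The record structure and figures are illustrative assumptions, not actual Defense Finance and Accounting Service data.

```python
# Illustrative sketch of the payroll-based furlough savings calculation:
# for each employee, hourly pay rate multiplied by the furlough hours
# recorded on the time card, summed across the workforce. The records
# below are hypothetical; they are not actual DFAS payroll data.

def furlough_savings(payroll_records):
    """Sum hourly_rate * furlough_hours over all employee records."""
    return sum(r["hourly_rate"] * r["furlough_hours"] for r in payroll_records)

# Hypothetical example: three employees, 8-hour furlough days.
records = [
    {"employee": "A", "hourly_rate": 35.00, "furlough_hours": 48},  # 6 days
    {"employee": "B", "hourly_rate": 42.50, "furlough_hours": 48},  # 6 days
    {"employee": "C", "hourly_rate": 28.75, "furlough_hours": 40},  # 5 days
]

print(f"${furlough_savings(records):,.2f}")
```

Validating that every furloughed employee's recorded hours are correct before summing, as the components were asked to do, is what makes a figure computed this way reliable.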
Further, Marine Corps officials stated that they spent a majority of their time dealing with the furlough rather than focusing on day-to-day business, such as developing critical skills training. DOD developed an estimated cost savings for the furlough to assist in planning efforts to meet sequestration cost-reduction targets; however, DOD did not exclude pay for those excepted from the furlough and did not update its estimate throughout the furlough period as more information became available, such as real-time cost savings and when subsequent decisions were made to reduce the number of furlough days. As noted earlier, the Secretary of Defense directed a furlough of most of the department's civilian personnel in response to major budgetary shortfalls from the sequestration. The Office of the Comptroller developed an average estimated cost savings per person per furlough day of approximately $300. Officials within the Office of the Comptroller stated that the estimate was developed in order to provide senior leaders within DOD with information in a short time frame to consider how much could be saved through a furlough as part of an effort to meet sequestration cost-reduction targets. The average estimated cost savings per person per day was developed prior to the identification of exceptions to the furlough and used aggregated payroll data from the Defense Finance and Accounting Service and civilian personnel data from the Defense Civilian Personnel Data System. Upon directing a furlough of 11 days for most civilian personnel in May 2013, the Office of the Comptroller estimated a cost savings of approximately $2.1 billion. The Office of the Comptroller developed the $2.1 billion estimate by multiplying the estimated average savings of $300 per person per day by the estimated total number of civilians being furloughed. In the same memorandum directing the 11-day furlough, the Secretary of Defense included categories of exceptions to the furlough. 
When the Office of the Comptroller developed the $2.1 billion estimated savings, it accounted for exceptions in its estimated total number of civilians being furloughed, but not in its average estimated cost savings per person per day. DOD's total estimated cost savings was not as accurate as it could have been because it did not account for excepted employees in its average per person per day estimated cost savings. As stated above, DOD excepted 142,602 civilian employees, or approximately 18 percent of the civilian workforce, from the furlough. The civilians who were excepted may have had higher salaries or lower salaries; thus, these exceptions may have affected the average per day savings. Further, the per person per day cost savings affects the total estimated savings. For example, assuming the same number of civilians DOD used to calculate its estimated cost savings were furloughed for 11 days, a $10 difference in estimated average savings per person per day changes the total estimated savings by approximately $72 million. Officials from selected sites discussed examples of impacts that resulted from the furlough. Specifically, some officials we interviewed at selected sites discussed actions taken to prepare for or mitigate potential impacts resulting from the furlough, such as proactive planning efforts, identified efficiencies, and use of cost savings to offset unfunded requirements. Officials we interviewed also described specific impacts that they believe can be attributed to the furlough, such as decline in civilian morale, attrition, mission delays, inconsistencies and clarification issues with the furlough guidance, and impacts on servicemembers' morale. 
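The arithmetic behind the $2.1 billion estimate and its sensitivity to the per-day figure can be checked directly. The head count below is an illustrative assumption—the report does not state the exact number of civilians DOD used in its calculation—chosen so the products line up with the figures cited above.

```python
# Sketch of the Comptroller's estimating arithmetic: average savings per
# person per day, times furlough days, times head count. The 650,000
# head count is an illustrative assumption; the exact figure DOD used
# is not stated in the report.

furloughed = 650_000   # assumed number of civilians planned for furlough
days = 11              # furlough days directed in May 2013
per_day = 300          # estimated average savings per person per day ($)

total = furloughed * days * per_day   # on the order of $2.1 billion
delta = furloughed * days * 10        # effect of a $10/day change

print(f"Total estimated savings:  ${total:,}")
print(f"Shift per $10/day change: ${delta:,}")
```

Under these assumptions the total comes to roughly $2.1 billion, and a $10 change in the per-person, per-day average moves the total by roughly $72 million, consistent with the sensitivity described above.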
However, measuring the direct impact of the furlough is difficult since it was part of a broader set of sequestration actions that included a civilian hiring freeze, limits on overtime, and termination of temporary and term hires, as well as other non-sequestration-related personnel actions, such as a 3-year pay freeze between 2011 and 2013. Further, DOD civilians filed over 32,000 appeals to the Merit Systems Protection Board related to the furlough in 2013. The following are examples, reported by officials at the locations we visited, of actions taken to prepare for or mitigate potential impacts from the implementation of the furlough: Proactive Planning and Furlough Tracking—Some officials described proactive planning efforts that took place at their sites to prepare for the furlough. For example, Brooke Army Medical Center officials reported setting up a team in February 2013 to conduct worst-case scenario planning and determine mission priorities for adequate staffing to help ensure high quality of patient care for a potential furlough. This team was then able to identify individuals for exception to the furlough based on their planning efforts once they received the furlough guidance designating 11 furlough days and categories of exceptions. Some officials from all three sites also described efforts to capture potential and realized impacts from the furlough through various reporting mechanisms, such as a furlough impact log. Identification of Efficiencies—Some officials provided examples of individual command efforts to identify and implement efficiencies during the furlough. For example, some officials at all three sites noted that because of the limitations placed on overtime during sequestration and the added impact of the furlough on civilian staff, approval of overtime was scrutinized at a higher level than before. 
As a result, officials at these sites gained a better awareness of the appropriate use of overtime and reported reductions in the use of overtime. In addition, officials from Brooke Army Medical Center's Emergency Medicine Department stated that they were able to defer some of their routine supply purchases after prioritizing spending on mission-essential needs during the furlough. Use of Cost Savings to Offset Unfunded Requirements—Some officials at Norfolk Naval Shipyard and Air Mobility Command described using the cost savings realized from the reductions in civilian pay due to the furlough to apply towards other unfunded requirements. Officials from the Department of the Navy stated that the individual commands were allowed to use the money saved from the furloughs based on individual priorities. Brooke Army Medical Center reported an estimated return of about $3.4 million as furlough days were reduced from the initially planned 22 days to 6 days. Army Medical Command initially withheld civilian pay from the medical facility to account for the estimated cost of furloughing staff for 22 days and later returned the funds to Brooke Army Medical Center as the length of the furlough was reduced. The following are examples of impacts reported by officials at the locations we visited that they believe can be linked to the implementation of the furlough: Decline in Morale—Officials at all three sites stated that civilian morale declined due to the civilian workforce furlough that resulted in a 20 percent reduction in pay per week for 6 weeks. This was further exacerbated as some civilians were excepted from furlough while other civilian colleagues were not, contracted support staff continued working, and some civilians who were historically deemed "mission essential" and required to report to the office for events, such as snow days, were now furloughed. 
For example, officials at Norfolk Naval Shipyard reported civilians furloughed within the supporting commands experienced frustration and a decline in morale as their civilian colleagues working in the shipyard were not only excepted but were also working overtime during the furlough period. Officials at Brooke Army Medical Center described a decline in morale within the Army inpatient nursing staff because the Air Force excepted its inpatient nursing staff from furlough while the Army did not. Some officials at Brooke Army Medical Center indicated that the furlough affected some patients, who tried to get refills on their medication prior to the furlough out of fear that they would not have access to care during the furlough period. In addition, officials at Air Mobility Command described a decline in morale among civilian staff who had to take a pay cut while contracted support staff did not. Further, officials at Air Mobility Command described instances where some civilians historically considered "mission essential," such as air traffic controllers and firefighters, were now furloughed. Officials at the sites we visited stated that they followed the Secretary of Defense's guidance and did not use borrowed military personnel to compensate for work that would have been conducted by furloughed civilians. Some officials stated that servicemembers experienced a decline in morale as they worked longer hours to complete their missions in the absence of civilians who were furloughed. For example, at Brooke Army Medical Center, officials stated that they relied on their military medical staff to work during the furlough. Officials stated that their use of military personnel only extended to those personnel who were assigned to their unit and that they did not borrow personnel from other units. 
While the term "borrowed military personnel" is not defined in the Assistant Secretary of Defense for Readiness and Force Management's June 2013 memorandum, the Army's definition of "borrowed military personnel" only includes certain uses of military personnel outside of the unit to which they are assigned. In December 2013, DOD reported to Congress that the results of OPM's recent annual Federal Employee Viewpoint Survey showed its civilian workforce morale had continued to decline, and that DOD expected the furloughs to affect employee recruiting and retention in the future. Of note, the survey showed a decline in satisfaction among DOD respondents to questions that dealt with job satisfaction (decline from 71 percent in 2010 to 64 percent in 2013), pay (decline from 65 percent in 2010 to 53 percent in 2013), and satisfaction with the organization (decline from 63 percent in 2010 to 55 percent in 2013). DOD identified several examples of efforts it was taking to minimize any negative impact on the morale of the civilian workforce and long-term consequences on recruiting and retention of the civilian workforce. Most of these actions were high-level, such as continuing to focus on the Strategic Workforce Plan and conducting leadership development programs for entry-, mid-, and senior-level personnel. Other examples of actions DOD noted it was taking to address morale included initiating an enterprise strategic recruitment effort and the development of a new performance appraisal system. However, DOD's report does not provide specifics about these actions. For example, it does not address how the Strategic Workforce Plan would minimize the negative impact on morale of the civilian workforce. The report also does not provide time frames for when these actions will be completed. Attrition as a Result of the Furlough—Officials at Brooke Army Medical Center and Norfolk Naval Shipyard cited a number of examples where employees left as a result of the furlough. 
For example, some officials we interviewed at Brooke Army Medical Center stated that they knew of colleagues who left the hospital to work at the Department of Veterans Affairs since it did not furlough its staff. In August 2013, the Army Surgeon General stated that, during 2013, 2,700 Army civilian medical doctors, nurses, and other health workers left their jobs for work elsewhere due to the furlough, many transferring to the Department of Veterans Affairs. We examined the attrition rates of civilian personnel at Army Medical Command and DOD between fiscal years 2009 and 2013. Specifically, for Army Medical Command, we found that attrition rates for on-board civilian medical officers and nurses peaked at 22 percent in fiscal year 2011. For more information on Army Medical Command and DOD component attrition rates, see appendix III. Mission Delays—While none of the selected sites we visited indicated mission failure as a result of the furlough, some officials described increased challenges in meeting their missions. Officials from the Defense Logistics Agency Maritime support command at Norfolk Naval Shipyard stated that, during the furloughs, they experienced an increased backlog in providing goods and services in support of shipyard operations. Specifically, the Defense Logistics Agency Maritime support command reported that the backlog of requests to provide goods and services nearly doubled during the furlough period, from 330 outstanding requests on July 3, 2013, to a peak of 614 outstanding requests on July 29, 2013, before dropping down to 465 outstanding requests by August 13, 2013, as the furlough drew to an end for most civilians. 
Norfolk Naval Shipyard officials stated that a building had a fire alarm malfunction on a Friday during the furlough period and, because civilian staff were furloughed, no one was able to fix it until the following Tuesday, so the building had to establish a 24-hour watch over the weekend to ensure a potential fire could be reported. Air Mobility Command officials described delays in permanent changes of station because the furlough occurred in summer—the peak season for such moves. These officials explained that delays in permanent changes of station can impact a military servicemember's ability to report to his or her next installation on time. Guidance Challenges—Some officials stated that they were confused by the guidance that was provided on implementing the furlough, while others expressed frustration at the volume of updates to the guidance. For example, at Brooke Army Medical Center, the Air Force had not yet transitioned its civilians to Army control through the Base Realignment and Closure process, and therefore Army and Air Force civilians were operating under separate guidance during the furlough. This added to the administrative burden of management at Brooke Army Medical Center and confusion among staff who work side-by-side. Specifically, the Air Force decided to except all of its in-patient nurses from furlough, while the Army furloughed its in-patient nurses. Further, officials at Brooke Army Medical Center and Air Mobility Command stated that they received numerous updates to the furlough guidance, often on a daily basis and from multiple sources. 
Officials expressed confusion and sought clarification over the terms used in the furlough guidance, such as "borrowed military personnel," "mission essential," and "24-hour emergency care." For example, the term "borrowed military personnel" was not defined in the Assistant Secretary of Defense for Readiness and Force Management's June 2013 memorandum regarding the use of borrowed military personnel during the furlough. Longer-Term Impact from DOD Civilian Appeals Filed to the Merit Systems Protection Board—DOD federal civilians filed 32,259 appeals regarding the administrative furlough to the Merit Systems Protection Board. Once DOD began implementing the furlough on July 8, 2013, DOD civilians became eligible to file appeals of the furlough action to the Merit Systems Protection Board. Figure 1 illustrates the process for filing and adjudicating appeals with the Merit Systems Protection Board and the status of the DOD civilians' appeals to the administrative furlough as of March 31, 2014. All of the 1,101 cases that have been adjudicated to date have been decided in DOD's favor, though the Merit Systems Protection Board has received 8 petitions for review from DOD civilians who have chosen to appeal the Administrative Judge's decision in their case. According to the Merit Systems Protection Board, its current workload is unprecedented, as it received over 32,000 furlough appeals from DOD employees alone in 2013—approximately 5 times the number of personnel appeals it typically receives in 1 year. As a result, the Merit Systems Protection Board is unable to predict how long it will take to adjudicate all of the DOD furlough appeals, but it has committed to issuing initial decisions in all furlough appeals by the end of fiscal year 2015. 
In light of ongoing fiscal uncertainty, and given the toll that furlough actions can take on mission needs and employee morale, among other things, it is important that DOD accurately estimate financial actions that affect its personnel and update these estimates to ensure the most timely and reliable information is available for effective planning. This includes taking actions aligned with Standards for Internal Control in the Federal Government, such as identifying, capturing, and distributing information to the right people in sufficient detail and at the appropriate time to maintain its relevance and value to management in controlling operations and making effective and efficient decisions. DOD's approach to estimating furlough cost savings did not adjust to accommodate decisions made to except certain civilian employees from furlough. Further, because DOD only had 1 week's worth of civilian payroll data at the time it reduced the number of furlough days, it did not track cost savings in real time. Such information could be considered during any future administrative furlough deliberations to enable DOD leadership to make informed decisions by having reliable and accurate cost-savings information as it becomes available. In light of the current fiscal environment, it is even more critical for DOD to accurately identify its current and future total workforce priorities and associated costs. While DOD was able to mitigate the furlough as a result of transfer and reprogramming actions, DOD may face future furloughs in which less funding is available to transfer and reprogram and the furlough period may be longer; thus, having comprehensive, up-to-date information available to decision makers would be important. 
To help ensure that DOD is better informed in its decision-making processes, we recommend that the Secretary of Defense direct the Under Secretary of Defense (Comptroller) and the Under Secretary of Defense for Personnel and Readiness to utilize comprehensive and up-to-date furlough cost-savings information as it becomes available in the event that DOD decides to implement another administrative furlough in the future. We provided a draft of this report to DOD for review and comment. In its written comments, DOD partially concurred with the recommendation to utilize comprehensive and up-to-date furlough cost-savings information as it becomes available in the event that DOD decides to implement another administrative furlough in the future. DOD’s comments are summarized below and reprinted in appendix IV. In its written comments, DOD did not elaborate on why it partially concurred with the recommendation. DOD stated that it had several concerns with the findings in the report. The department stated that important contextual information regarding the size of the total force was not included in the draft report and elaborated on reasons for growth in the civilian workforce after the events of September 11, 2001. DOD stated that without context, readers may believe that the DOD civilian workforce is not thoughtfully and purposefully sized. However, we disagree with DOD’s characterization of the draft report. The draft report states that DOD’s civilian personnel are critical to achieving the department’s missions by performing a wide variety of duties, and the report acknowledges that civilians have expanded their responsibilities. Further, the focus of this report was not on DOD’s total workforce management but how DOD planned for, implemented, and monitored furloughs of its civilian workforce to include any challenges the department faced in its implementation and cost savings realized. 
Nonetheless, we have conducted assessments of DOD's strategic workforce plans since 2008, and our body of work has found that DOD's efforts to address strategic workforce planning requirements, including assessing the appropriate mix of civilian, military, and contractor personnel, have been mixed. For example, in our most recent report in September 2012 on the department's overall civilian strategic workforce plan, we recommended that DOD take a number of actions, including providing guidance for developing future strategic workforce plans that clearly directs the functional communities to collect information that identifies not only the number or percentage of personnel in its military, civilian, and contractor workforces, but also the capabilities of the appropriate mix of those three workforces. DOD either concurred or partially concurred with our recommendations, stating that, among other things, the department was deliberate in applying lessons learned from previous workforce plans and identifying specific challenges and the actions being taken to address those challenges to meet statutory planning requirements by 2015. Our review of DOD's latest strategic workforce plan will be issued in July 2014. Although DOD did not specifically state in its letter why it partially concurred with the recommendation, DOD provided comments related to the recommendation, and we have addressed them throughout the report as appropriate. However, we disagree with two of DOD's specific comments as discussed below: DOD commented that we should delete the report's discussion regarding DOD being placed on our High-Risk List because of financial management weaknesses that affect its ability to control costs; ensure accountability; anticipate future costs and claims on the budget; detect fraud, waste, and abuse; and prepare auditable financial statements. DOD stated in its comments that this paragraph "is unrelated to this report on administrative furloughs." We disagree. 
We believe that having accurate financial information is not only related but very important to the report on administrative furloughs. Specifically, DOD made the determination to furlough civilians in response to budgetary shortfalls, which was part of a larger effort to achieve specific funding reductions resulting from sequestration. This decision affected its approximately 770,000 civilian workers, of whom 624,404 were furloughed for 6 days and 142,602 were excepted. We believe that when making decisions with the goal of reaching a financial target that negatively affects so many people—including a 20 percent reduction in pay for 6 weeks—DOD’s ability to accurately account for spending or assets is an important factor related to this report. Further, DOD states in its comments that it could not track cost savings in real time due to system and process limitations. We believe that this further illustrates the relevance of having accurate, complete, and timely financial information available to make management decisions.

DOD commented that we misrepresented information regarding DOD’s cost-savings estimates and recommended alternative language for the report. Specifically, DOD stated that it excluded employees categorized as exempt from the cost-savings estimate of $2.1 billion provided to Congress, as well as the known exceptions, as part of the per day cost projection developed using March payroll data. Similarly, DOD commented that the report misrepresented information provided during various meetings. We disagree with DOD’s characterization of our report. Our report accurately reflects information included in DOD’s documents related to how it calculated its estimated furlough savings and the associated documentation it provided to Congress.
As we stated in the report, DOD calculated an estimated cost savings of $300 per person per day and used this estimate in discussions, including the initial decision to furlough, until it decided to reduce the number of furlough days from 11 to 6, even though additional information was available regarding which civilians DOD excepted, as the exceptions decision had been made 3 months earlier. However, DOD did not initially include or update the estimated savings per person per day of $300 to account for the 142,602 civilians that were excepted from the furlough. These civilians excepted from the furlough represent approximately 18 percent of the total civilian workforce. To calculate the estimated cost savings as a result of the civilian workforce furlough, DOD multiplied the estimated number of civilians to be furloughed by the estimated savings of $300 per person per day. While DOD did adjust the numbers of civilians it included in its calculated cost savings, it never adjusted the per person per day estimate of $300 to account for the 18 percent of the civilian workforce excepted from the furlough. Further, as we state in the report, should DOD need to furlough civilians in the future, the incorporation of information as it becomes available would better inform decision makers because actions taken regarding DOD’s civilian workforce affect approximately 770,000 civilians.

We are sending copies of this report to other interested congressional parties; the Secretary of Defense; the Secretaries of the U.S. Army, the U.S. Navy, and the U.S. Air Force; and the Commandant of the U.S. Marine Corps. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3604 or at farrellb@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report.
Key contributors to this report are listed in appendix V.

This report (1) examines how the Department of Defense (DOD) implemented its civilian workforce furloughs and any reported cost savings, (2) examines the extent to which DOD utilized up-to-date cost-savings information in the planning and implementation of civilian workforce furloughs, and (3) identifies any reported examples of impacts that resulted from the DOD civilian workforce furloughs. To address how DOD implemented its civilian workforce furlough, we obtained and analyzed information and interviewed knowledgeable officials from the Office of the Under Secretary of Defense (Comptroller) (hereafter referred to as the Office of the Comptroller), the Under Secretary of Defense for Personnel and Readiness, and the Departments of the Army, Navy, and Air Force. We obtained and analyzed guidance and policy documents outlining the furlough decision and subsequent reduction in the number of administrative furlough days. The guidance and policy documentation included the numbers of civilians furloughed, the categorical exceptions granted, and the numbers of civilians provided exceptions. In addition, we examined guidance directly related to the decision to implement a furlough as well as guidance imposing limitations on utilizing other personnel within the department to augment the civilian workforce during the furlough. We also reviewed the Office of Personnel Management’s (OPM) Guidance for Administrative Furloughs, June 10, 2013, and prior GAO reports on sequestration and furloughs within the federal government, including GAO’s reports on the 2013 sequestration and DOD’s implementation of the sequestration. To understand how the civilian workforce furlough was implemented at a local level, we conducted site visits of a selected shipyard, a medical facility providing 24-hour support, and an air operations center.
Specifically, we visited Norfolk Naval Shipyard, Brooke Army Medical Center at Fort Sam Houston, and Air Mobility Command at Scott Air Force Base. These sites were selected on the basis of the Secretary of Defense memorandum outlining categories of exceptions to the furlough, DOD statements about potential sequestration impacts, and mission-critical occupations as outlined in DOD’s Strategic Workforce Plan. We developed a standard set of interview questions to use in discussions with officials at selected sites regarding what policy and guidance was generated and how the furlough was implemented, such as information about employee furlough notification and scheduling of furlough days, among other things. Information from these sites is not generalizable, but provides examples of how the furlough was implemented at these locations. To examine any reported cost savings that resulted from the DOD civilian workforce furloughs, we obtained and analyzed information and interviewed officials from the Office of the Comptroller regarding DOD’s calculations of the actual cost savings as a result of the administrative civilian workforce furlough. We assessed DOD’s methods for calculating actual cost savings for the furlough; however, we did not independently verify these calculations. The cost savings were calculated from the civilian pay of those who were furloughed and did not account for other costs from implementing the furlough, such as administrative costs. To determine the extent to which DOD utilized up-to-date cost-savings information in the planning and implementation of civilian workforce furloughs, we obtained and analyzed information and interviewed officials from the Office of the Comptroller regarding how the department calculated the estimated cost savings for the furlough of civilian personnel in fiscal year 2013.
We examined DOD’s methods for calculating estimated cost savings for the furlough; however, we did not independently verify the accuracy of these calculations. We also reviewed Standards for Internal Control in the Federal Government for best practices on using information in decision-making processes. To identify any reported examples of impacts that resulted from the DOD civilian workforce furloughs, we obtained and analyzed information and interviewed knowledgeable officials at each of these sites—Norfolk Naval Shipyard, Brooke Army Medical Center at Fort Sam Houston, and Air Mobility Command at Scott Air Force Base. We developed a standard set of interview questions to use in discussions with selected site officials regarding any impacts from the furlough in areas such as morale, guidance, communication, and mission, among other things. Information from these sites is not generalizable, but provides examples of impacts of the furlough reported at these locations. We also reviewed the results of OPM’s 2013 Federal Employee Viewpoint Survey for DOD. While not specifically addressing sequestration, the survey captures employees’ general perceptions in areas including their work experiences and their agency that could be affected by sequestration. To assess the reliability of the survey data, we reviewed reports and other descriptions of the survey methodology available on the OPM website: http://www.fedview.opm.gov/2013/. To analyze workforce and turnover trends from fiscal year 2009 through 2013, we used OPM’s Enterprise Human Resources Integration Statistical Data Mart (EHRI-SDM), which contains personnel action and on-board data for most federal civilian employees. We analyzed agency-level EHRI-SDM data for all DOD departments (Army, Navy, Air Force, and other DOD agencies).
We focused on career permanent employees in our analysis of on-board and separation trends because these employees represent the long-term employee population and constitute most of the workforce. To calculate attrition rates, we added the number of career permanent employees with personnel actions indicating they had separated from one of the DOD departments (for example, transfers, resignations, retirements, terminations, and deaths) and divided that by the 2-year on-board average. We assessed the reliability of the EHRI data through electronic testing to identify logical inconsistencies, and followed up with DOD, where necessary, to understand these issues. We also reviewed our prior work assessing the reliability of these data. On the basis of this assessment, we believe the EHRI data we used are sufficiently reliable for the purpose of this report. Further, we interviewed officials and obtained information from the Merit Systems Protection Board on the appeals adjudication process and the status of appeals filed by DOD civilians regarding the furlough in fiscal year 2013. We conducted this performance audit from July 2013 to June 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In order to minimize adverse effects on mission, the Secretary of Defense memorandum issued on May 14, 2013, granted exceptions to the furlough. Below are the categories of exceptions outlined in the Secretary’s memorandum:

Combat Zone: All employees deployed (in a Temporary Duty status) or temporarily assigned (to include Temporary Change of Station) to a combat zone.

Safety of Life and Property: Those employees necessary to protect safety of life and property, including selected medical personnel. The exceptions were to be granted with the understanding that these were the minimum exceptions needed to maintain operations and provide security on a 24/7 basis. Similarly, the exceptions for the medical category were to be approved with the understanding that these exceptions preserve the minimum level of personnel needed to maintain quality of care in 24/7 emergency rooms and other critical care areas such as behavioral health, wounded warrior support, and disability evaluation.

Shipyards: Employees in Navy shipyards. All other depot employees, whether mission-funded or working capital fund employees, were subject to furlough.

Intelligence: Furloughs for employees funded with National Intelligence Program funds were determined by the Director of National Intelligence. Employees funded with Military Intelligence Program funds were subject to furlough.

Foreign Military Sales: Foreign Military Sales employees whose positions were exclusively funded from Foreign Military Sales Administrative case funds, Foreign Military Sales case funds, and from Foreign Military Financing accounts. In addition, the Foreign Military Sales case-funded positions funded in whole or part by DOD appropriations (to include “pseudo–Foreign Military Sales” cases) were subject to furlough.

All individuals appointed by the President, with Senate confirmation, who were not covered by the leave system in title 5, U.S. Code, chapter 63, or an equivalent formal leave system.

All employees funded by nonappropriated funds (regardless of source of nonappropriated funding).

All outside-the-contiguous United States foreign national employees.

Any employees who were not paid directly by accounts included in the Department of Defense–Military budget, such as employees funded by the Arlington National Cemetery and DOD Civil Works programs.

The exception for Child Development Centers was granted with the understanding that this was the minimum level needed to maintain accreditation and maintain high-quality care for children in military families. Some Department of Defense Education Activity employees, while not excepted from furlough, may have been furloughed only when they were in a pay status. Therefore, they were subject to furlough for only up to 5 days at the beginning of the 2013 school year. The Secretaries of the military departments and the Principal Staff Assistants for the defense agencies and field activities could approve up to 50 additional individual, mission-based exceptions as needed.

In examining the attrition rate for on-board civilian medical officers and nurses at Army Medical Command between fiscal years 2009 and 2013, we found that it peaked in fiscal year 2011 at 22 percent and rose again in fiscal year 2013 to 14 percent after declining in 2012, compared to a 10 to 11 percent attrition rate in fiscal year 2009 (see fig. 2 below). Many factors that may be unrelated to job satisfaction or to events such as the furlough can affect attrition. For example, according to Army Medical Command officials, the command was affected by the 2005 Base Realignment and Closure process, the deadline for completion of which was in September of fiscal year 2011. The Base Realignment and Closure Commission’s recommendations included transferring personnel from Walter Reed Army Medical Center and Belvoir Army Community Hospital to the purview of what is now the Defense Health Agency’s National Capital Region Medical Directorate. In further examining attrition rates across the DOD components between fiscal years 2009 and 2013, we found the Army overall experienced a similar peak in attrition rates in 2011, at 11 percent.
In addition, the Air Force’s attrition rates peaked in fiscal year 2012 (9 percent), with the Navy’s attrition rates increasing between fiscal years 2010 and 2011 (from 6 to 7 percent). Overall, during fiscal year 2013, DOD components had an attrition rate between 7 percent and 9 percent of on-board civilian employees, compared to an attrition rate between 6 percent and 8 percent in 2009 (see fig. 3 below).

In addition to the contact named above, Lori Atkinson, Assistant Director; Arkelga Braxton; Tim Carr; Grace Coleman; Cynthia Grant; Amber Lopez Roberts; Rebecca Shea; Norris “Traye” Smith; Amie Steele; John Van Schaik; and Michael Willems made key contributions to this report.

2013 Sequestration: Agencies Reduced Some Services and Investments, While Taking Certain Actions to Mitigate Effects. GAO-14-244. Washington, D.C.: March 6, 2014.

Sequestration: Observations on the Department of Defense’s Approach in Fiscal Year 2013. GAO-14-177R. Washington, D.C.: November 7, 2013.

Financial and Performance Management: More Reliable and Complete Information Needed to Address Federal Management and Fiscal Challenges. GAO-13-752T. Washington, D.C.: July 10, 2013.

Human Capital: Additional Steps Needed to Help Determine the Right Size and Composition of DOD’s Total Workforce. GAO-13-470. Washington, D.C.: May 29, 2013.
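The attrition-rate calculation described in the methodology above (separations among career permanent employees divided by the 2-year on-board average) can be sketched as follows. The headcounts in this sketch are hypothetical, chosen only to illustrate the arithmetic; they are not figures from the EHRI-SDM data.

```python
# Sketch of the attrition-rate calculation described in this report:
# separations among career permanent employees (transfers, resignations,
# retirements, terminations, and deaths) divided by the 2-year on-board
# average. The headcounts below are hypothetical, for illustration only.
def attrition_rate(separations, onboard_year1, onboard_year2):
    """Separations divided by the average on-board count over two years."""
    two_year_average = (onboard_year1 + onboard_year2) / 2
    return separations / two_year_average

# Hypothetical component: 18,000 separations against on-board counts of
# 250,000 and 240,000 yields roughly a 7.3 percent attrition rate.
print(f"{attrition_rate(18_000, 250_000, 240_000):.1%}")
```

Averaging the two years' on-board counts smooths out mid-year hiring and separation swings that a single point-in-time headcount would miss.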
In March 2013, DOD’s discretionary budget was reduced by $37 billion as a result of sequestration—across-the-board spending reductions to enforce certain budget policy goals. In response, the Secretary of Defense, among other actions, implemented an administrative furlough, placing most of DOD’s civilian personnel in a temporary nonduty, nonpay status. An administrative furlough is a planned event by an agency to absorb reductions due to budget situations other than a lapse in appropriations. GAO was mandated to review DOD’s implementation of its administrative furlough. This report (1) examined how DOD implemented its furloughs and any reported cost savings, (2) examined the extent to which DOD utilized up-to-date cost-savings information in the planning and implementation of furloughs, and (3) identified any reported examples of impacts that resulted from the furloughs. GAO reviewed DOD furlough guidance, interviewed officials, and conducted visits at sites selected to represent different categories of furlough exceptions and potential sequestration impacts, among other things.

In January 2013, the Department of Defense (DOD) instructed components to plan for the possibility of up to a 22-day administrative furlough of civilian personnel. On May 14, 2013, the Secretary of Defense issued a memorandum directing up to an 11-day furlough of most of DOD’s civilians and, on August 6, 2013, reduced the number of furlough days to 6, resulting in a cost savings of about $1 billion from civilian pay, excluding implementation costs. DOD officials stated that the decision to reduce the number of furlough days was due to DOD gaining greater flexibility from fund transfers and reprogrammings that occurred towards the end of the fiscal year. DOD identified categories of furlough exceptions for personnel, including those assigned to a combat zone and those necessary to protect safety of life and property.
Clarifying guidance was issued to help ensure that borrowed military personnel were not used to compensate for work resulting from the furlough and to prohibit contracted support from being assigned or permitted to perform additional work or duties to compensate for workload or productivity loss resulting from the furlough. Ultimately, DOD furloughed 624,404 civilians for 6 days and excepted 142,602 civilians.

DOD developed its initial estimated cost savings for the furlough without excluding pay for those excepted from the furlough and did not update its estimate throughout the furlough period as more information became available, such as real-time cost savings and when subsequent decisions were made to reduce the number of furlough days. The initial estimated cost savings were calculated at $300 per person per furlough day, totaling about $2.1 billion for 11 furlough days. When DOD reduced the furlough from 11 to 6 days, the estimated cost savings were reduced by about $900 million. However, the estimated savings per person per day was not updated to reflect actual payroll reductions, in part because, according to DOD officials, there was only 1 week’s worth of payroll data available at the time the decision was made. While officials stated that the estimated savings per person per day was not updated because they thought it was sufficient for their purposes and that the decision to reduce the number of furlough days was primarily based on funding received from transfers and reprogramming actions, the determination of exceptions was made 3 months earlier. If this initial estimate had been updated, it may have provided more-comprehensive information for DOD officials to consider regarding the length of the furlough and DOD’s cost-savings estimate. As DOD continues to face budgetary uncertainty, and in the event of a future furlough, having comprehensive and updated cost information may help better inform decision makers.
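The arithmetic behind these estimates can be roughly reconstructed. The sketch below assumes the headcount and $300 per-person-per-day rate cited in this report; how DOD combined these figures internally may have differed, so this is an illustration, not DOD's actual calculation.

```python
# Rough reconstruction of the furlough cost-savings arithmetic using
# figures cited in this report. The $300 daily rate and the 624,404
# furloughed-civilian headcount come from the report; combining them
# this way is an assumption made for illustration.
PER_PERSON_PER_DAY = 300   # estimated savings per civilian per day, dollars
FURLOUGHED = 624_404       # civilians ultimately furloughed
EXCEPTED = 142_602         # civilians excepted (about 18 percent of workforce)

def estimated_savings(headcount, days, rate=PER_PERSON_PER_DAY):
    """Estimated savings = headcount x furlough days x daily rate."""
    return headcount * days * rate

eleven_day = estimated_savings(FURLOUGHED, 11)  # about $2.1 billion
six_day = estimated_savings(FURLOUGHED, 6)      # about $1.1 billion
reduction = eleven_day - six_day                # about $0.9 billion
print(f"11-day estimate: ${eleven_day / 1e9:.2f} billion")
print(f"6-day estimate:  ${six_day / 1e9:.2f} billion")
print(f"reduction:       ${reduction / 1e9:.2f} billion")
```

The rounded results line up with the roughly $2.1 billion estimate and $900 million reduction discussed above; GAO's point is that the $300 daily rate itself, derived from March payroll data, was never revised after the 142,602 exceptions were known.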
Officials at selected sites GAO visited noted a number of actions taken to prepare for the furlough and described impacts of the furlough, such as a decline in morale, mission delays, and inconsistencies and clarification issues with the furlough guidance. However, attributing these impacts directly to the furlough is difficult given other factors, such as a civilian hiring freeze and pay freeze, that may also have contributed to declining morale. For example, DOD employees’ satisfaction with their organization declined from 63 percent in 2010 to 55 percent in 2013. Furthermore, a longer-term impact may result from DOD civilians filing over 32,000 appeals related to the administrative furlough in 2013, most of which have not yet been resolved. GAO recommends that DOD update and utilize its furlough cost-savings information as it becomes available in the event that it decides to implement another administrative furlough in the future. DOD partially concurred. GAO continues to believe the findings and recommendation are valid, as discussed in the report.
VHA’s Family Caregiver Program is designed to provide support and services to family caregivers of post-9/11 veterans who have a serious injury incurred or aggravated in the line of duty. The program provides approved primary family caregivers with a monthly financial stipend as well as training and other support services, such as counseling and respite care. The Family Caregiver Program has a series of eligibility requirements that must be satisfied in order for family caregivers to be approved. To meet the program’s initial eligibility criteria the veteran seeking caregiver assistance must have a serious injury that was incurred or aggravated in the line of duty on or after September 11, 2001. According to the program’s regulations, a serious injury is any injury, including TBI, psychological trauma, or other mental disorder, that has been incurred or aggravated in the line of duty and renders the veteran or servicemember in need of personal care services. The veteran must be in need of personal care services for a minimum of 6 continuous months based on any one of the following clinical eligibility criteria: (a) an inability to perform one or more activities of daily living, such as bathing, dressing, or eating; (b) a need for supervision or protection based on symptoms or residuals of neurological or other impairment or injury such as TBI, PTSD, or other mental health disorders; (c) the existence of a psychological trauma or a mental disorder that has been scored by a licensed mental health professional, with a Global Assessment of Functioning score of 30 or less, continuously during the 90-day period immediately preceding the date on which VHA initially received the application; or (d) the veteran has been rated 100 percent service connected disabled for the veteran’s qualifying serious injury and has been awarded special monthly compensation that includes an aid and attendance allowance. 
To be considered competent to care for the veteran, family caregivers must meet certain requirements, including (1) having the ability to communicate and follow details of the treatment plan and instructions related to the care of the veteran; (2) not having been determined by VA to have abused or neglected the veteran; (3) being at least 18 years of age; and (4) either being a family member—such as a spouse, son or daughter, parent, step-family member, or extended family member—or an unrelated person who lives or will live full-time with the veteran. Family caregivers must also complete required training before being approved for the program. VHA’s Caregiver Support Program office is responsible for developing policy and providing guidance and oversight for the Family Caregiver Program. It also directly administers the program’s stipend, provides support services such as a telephone hotline and website, and arranges CHAMPVA coverage for eligible caregivers. Furthermore, the office provides funding to VAMCs to cover certain program costs, such as the salaries of the CSCs, who implement and administer the Family Caregiver Program at the local VAMC level, as well as the costs VAMCs incur for having their clinical staff, such as nurses, conduct the program’s required in-home visits to approved caregivers and their veterans. CSCs are generally licensed social workers, clinical psychologists, or registered nurses, and they have both clinical and administrative responsibilities. Their clinical responsibilities may include identifying and coordinating appropriate interventions for caregivers or referrals to other VA or non-VA programs, such as mental health treatment, respite care, or additional training and education. Their administrative responsibilities may include responding to inquiries about the program, overseeing the application process, entering information about applications and approved caregivers into IT systems, and facilitating the processing of appeals.
As of May 2014, there were 233 CSCs assigned to 140 VAMCs or healthcare systems across the country. Additionally, each of the 21 regional VISN offices also has a VISN CSC lead for the program, who provides guidance to CSCs and helps address their questions or concerns. Congress authorized over $1.5 billion for the Family Caregiver Program and other caregiver services for fiscal years 2011 through 2015. VHA’s actual and estimated obligations for the program for fiscal years 2011 through 2015 have increased at a steady rate. (See fig. 1.) The Caregiver Support Program office uses this funding to cover costs such as program staffing, general caregiver education and training, caregiver stipends, CHAMPVA costs for primary family caregivers, the Caregiver Support Line, the Caregiver website, and outreach materials. It also provides funding to VAMCs to cover certain program costs rather than requiring the VAMCs to pay for them directly from their medical facilities’ budgets. These costs include CSC salaries, reimbursement for home visits, respite care, and mental health services as well as assistance with travel expenses for eligible caregivers when accompanying the veteran to an appointment. The Family Caregiver Program application has a sequential, multistep adjudication process. CSCs are responsible for overseeing this process and for ensuring that all steps of the application process are completed within 45 days, as outlined in the program’s guidance. CSCs are expected to cultivate relationships with VAMC medical staff to request their assistance in performing medical eligibility assessments and completing the home visits—2 key steps in the process. For applications that cannot be fully adjudicated within the program’s 45-day goal, the CSC may request additional time to process the application from the Caregiver Support Program office. The steps of the application process are as follows:

Step 1: Application Review.
After the caregiver and veteran submit an application for the program, the CSC reviews the application and determines the caregiver’s potential eligibility.

Step 2: Initial Eligibility Determination for the Veteran. The CSC then determines whether the veteran is a post-9/11 veteran enrolled in VHA (or servicemember undergoing medical discharge) and has a documented line-of-duty injury.

Step 3: Final Eligibility Determination for the Veteran. After initial eligibility has been determined, a VHA medical provider is to complete a medical assessment to determine the medical condition of the veteran, their need for a caregiver, and all other program eligibility criteria. The provider then determines the veteran’s rating for the stipend amount the primary family caregiver is eligible to receive. The stipend amounts are organized into three tiers. The stipend amount that the caregiver could receive is based on the assigned tier level and the geographic location of the veteran’s residence. Tier 3 indicates the highest level of injury and need for a caregiver and has the highest level of payment, while Tiers 2 and 1 indicate the lower levels of injury and correspondingly lower levels of payment.

Step 4: Eligibility Determination for the Caregiver. While the veteran’s eligibility is being verified, the CSC determines the caregiver’s eligibility for the program by conducting an assessment of the caregiver’s ability to serve in that role, either through a phone call or in-person meeting.

Step 5: Review of Program Services. Once the veteran and caregiver have been determined eligible for the program, the CSC schedules a joint meeting to discuss the types of Family Caregiver Program services for which they may be eligible once they complete the application process and are approved for participation in the program. This would include a discussion of the stipend payment as well as potential coverage through CHAMPVA.

Step 6: Caregiver Training.
The caregiver must complete the program’s training class, which is offered online, through self-instruction with a workbook and a CD or DVD, or, where available, through 2 days of facilitated classroom instruction. The training covers 10 competencies, including self-care, nutrition, and medication management.

Step 7: Home Visit. Within 10 days after the caregiver completes the training, an initial home visit is conducted to determine if the caregiver has the physical capacity and skills necessary to provide medical care to the veteran, and if the home is safe and adequately equipped. Since the initial home visit includes physical assessment and medical components, it must be completed by a medical professional such as a registered nurse, nurse practitioner, clinical nurse specialist, physician assistant, or physician. When a veteran requires a caregiver due to a mental health diagnosis, the home visit is to be completed by or in collaboration with a mental health provider.

Step 8: Notification of Program Eligibility. Once the home visit assessment has been completed and the results confirm that the caregiver is prepared to provide satisfactory care of the veteran, the CSC completes the final approval of the caregiver. This entails sending approval paperwork to the caregiver, including the direct deposit form for the stipend, and updating the relevant VHA systems, including the program’s IT system and the veteran’s medical record.

Caregivers who are denied eligibility for the program, or who believe that the veteran’s condition is more severe than the rating indicates, may appeal the decision. CSCs coordinate with VAMC patient advocates and other VHA staff to process these appeals—requests for review and reconsideration—first with the VAMC director and subsequently at the VISN level, if necessary. As of May 2014, approximately 15,600 caregivers had been approved for the program.
About 6,000 of these caregivers were assigned to Tier 3 (highest level) for their stipend payments, about 6,000 to Tier 2 (middle level), and about 3,600 to Tier 1 (lowest level). The average monthly payments per tier were approximately $2,320 for Tier 3, $1,470 for Tier 2, and $600 for Tier 1. At that time, almost 8 out of 10 caregivers approved for the Family Caregiver Program were spouses, while other approved caregivers were parents, relatives, and friends. Most of these caregivers were assisting veterans with mental health diagnoses or brain injuries who may also have had other physical injuries or disabilities. Specifically, 92 percent of these veterans have a service-connected mental health condition, 63 percent have PTSD, and 26 percent have a TBI. The program requires interim quarterly home visits for its approved caregivers, unless otherwise clinically indicated. These visits are to be conducted by clinical staff, but CSCs may also conduct them if they have a clinical background, such as being a registered nurse or a clinical psychologist. The home visits serve multiple purposes and are intended to monitor the well-being of the veteran. They are also used to determine whether the caregiver continues to have the physical capacity and skills necessary to provide medical care to the veteran, and whether the home remains safe and adequately equipped. The Caregiver Support Program office also permits home visits to be conducted by telephone after 1 year of satisfactory home visits has been completed for cases that do not pose exceptional medical risk.

VHA officials significantly underestimated the demand for the Family Caregiver Program. As a result, the program did not have sufficient support for the ensuing workload at some VAMCs, and the resulting staffing shortages impeded the timeliness of key functions and negatively impacted services to caregivers.
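As a back-of-envelope check, the approximate tier counts and average monthly payments above imply the following monthly stipend outlay. These are rough figures derived from the numbers in this report, not official VHA cost data.

```python
# Back-of-envelope stipend arithmetic using the approximate tier counts
# and average monthly payments reported as of May 2014. Rough figures
# for illustration only, not official VHA cost data.
tiers = {
    # tier: (approved caregivers, average monthly stipend in dollars)
    3: (6_000, 2_320),  # highest level of injury and need for a caregiver
    2: (6_000, 1_470),  # middle level
    1: (3_600, 600),    # lowest level
}

total_caregivers = sum(count for count, _ in tiers.values())
monthly_outlay = sum(count * stipend for count, stipend in tiers.values())

print(f"approved caregivers: {total_caregivers:,}")              # 15,600
print(f"monthly stipends: ${monthly_outlay / 1e6:.1f} million")  # $24.9 million
```

At roughly $25 million per month in stipends alone, the arithmetic is consistent with a program whose demand far outgrew the original 4,000-caregiver estimate discussed below.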
Furthermore, VHA’s Caregiver Support Program office does not have ready access to the type of data that would allow it to monitor and manage the program’s workload due to the limited capabilities of its data system, which was designed to manage a much smaller program. VISN and VAMC officials told us that the program’s initial staffing did not provide sufficient resources to support the unexpectedly high and increasing workload since the program began in 2011. VHA officials originally estimated that approximately 4,000 caregivers would be approved for the program by the end of fiscal year 2014. This estimate was based on the number of expected post-9/11 veterans and servicemembers who have serious medical or behavioral conditions involving impairment in at least one activity of daily living or who require supervision or protection, using available data from the Veterans Benefits Administration and DOD. However, the number of individuals approved for the Family Caregiver Program far exceeded the original estimate: by May 2014, almost 30,400 caregivers had applied and about 15,600 had been approved. (See fig. 2.) Caregiver Support Program officials told us that after 3 years of operation, demand for the Family Caregiver Program remains high: system-wide, there has been no appreciable decrease in the number of caregivers submitting applications for the program. In fact, the number of “in-process” applications for the Family Caregiver Program more than doubled from 1,966 in April 2013 to 4,318 in May 2014. As of May 2014, 98 VAMCs had more than 50 approved caregivers. (See fig. 3.) The initial arrangement for CSCs and VAMC staff that VHA established for this program proved inadequate in the face of such high demand. 
The program initially placed a single CSC at each VAMC largely to perform administrative and caregiver support functions, with the expectation that each VAMC would provide the program with physician, nursing, and administrative staff as needed to perform specific program functions. However, VISN officials and VAMC officials we spoke with said that there are too few CSCs to handle the program’s workload effectively. Specifically, at some VAMCs, CSCs have been unable to perform all of the routine administrative tasks associated with their approved caregivers, as initially expected. For example, some VISN and VAMC officials told us that the number of appeals filed by veterans and caregivers has become an unexpectedly large component of CSCs’ workload, making it difficult to fulfill the full range of their responsibilities. A Caregiver Support Program official clarified that while CSCs typically work with the VAMC patient advocates to handle appeals, they may be taking on greater responsibility for the appeals process at VAMCs with large numbers of appeals for the program. Caregiver Support Program officials acknowledged that the workload for the Family Caregiver Program has been burdensome for some CSCs, depending on the number of their approved caregivers and the amount of assistance they have at the local level. As of May 2014, the number of approved caregivers per CSC varied widely across VAMCs, ranging from 6 to 251. (See fig. 4.) Caregiver Support Program officials stated that their office does not use a formal CSC-to-caregiver target ratio because staffing decisions are largely the domain of local managers and the use of a specific workload ratio by the Caregiver Support Program could limit VAMCs’ discretion in determining when to request additional CSCs. (See app. I for a list of VAMCs’ CSC-to-caregiver workload ratios.) 
Furthermore, Caregiver Support Program officials had expected VAMC officials to direct their clinical staff to perform the medical assessments and home visits needed by the Family Caregiver Program as part of their ongoing care to veterans. However, VISN and VAMC officials we contacted told us that their facilities do not have sufficient medical staff to effectively manage the additional workload generated by the Family Caregiver Program, which they view as a collateral duty. According to most VISN and VAMC officials, obtaining clinical staff for the program can be difficult at VAMCs where directors may not consider the Family Caregiver Program to be a high priority. For example, officials at one VAMC told us that lack of support from the VAMC director led to a situation in which the director refused to have nurses conduct home visits for the Family Caregiver Program. At another facility, a VAMC director told us that lack of support by her predecessor led to large backlogs of unprocessed applications and incomplete home visits for the Family Caregiver Program, which she discovered following her recent transfer to that facility. According to some VISN officials, this dynamic sometimes placed CSCs, as the administrators of the local Family Caregiver Program, in the position of pleading for support from VAMC physicians and nurses. In May 2014, a Caregiver Support Program official explained that although the office has issued program policy and guidance to medical facilities, it also plans to issue a directive that outlines organizational responsibilities for the Family Caregiver Program, including those at the VAMC level. This official did not provide a specific timeframe for the issuance of the directive but stated that it would occur following the issuance of final program regulations. 
VISN and VAMC officials we spoke with noted that the Family Caregiver Program’s approach of having VAMC physicians conduct medical assessments for program eligibility has been a challenge. Physicians at the VAMCs we contacted were already experiencing heavy workloads prior to the implementation of the Family Caregiver Program, and some physicians were not able to take on additional tasks that they viewed as collateral duties, according to VISN and VAMC officials. Some physicians who were initially willing to conduct medical assessments could not continue doing so when the number of applications for the program increased. VISN and VAMC officials also stated that some physicians do not want to perform medical assessments because they are concerned that having a role in determining eligibility for a program that includes a financial stipend could compromise their clinical relationship with the patient. As a result of these factors, the number of physicians willing to conduct medical assessments for the program is limited at some VAMCs. CSCs at some VAMCs told us that the typical wait time for a medical assessment can be a month or longer. VISN and VAMC officials also agreed that providing nurses for the home visits needed by the Family Caregiver Program has posed problems and remains an ongoing challenge. These officials explained that most clinic nurses are already too busy to assume an additional workload. Officials at one VISN told us that some nurses who originally agreed to conduct quarterly home visits for the Family Caregiver Program stopped doing so after home visits became a burden, due to the increasing number of approved caregivers. In addition, VAMC and VISN officials at every location we contacted told us that home visits to remote areas require long driving times, which are challenging to accommodate. 
Staff at one VAMC we contacted pointed out that their catchment area covers 147,000 square miles, and some of their caregivers live over 8 hours away, requiring nurses to contend with multiple overnight stays per month and dangerous travel conditions in the winter. Timelines for key functions of the caregiver program, such as those for adjudicating applications within 45 days or making quarterly home visits to family caregivers, are not being met because CSCs at some VAMCs were not able to obtain sufficient support from medical facility staff. As a result, some caregivers have had to wait longer for an eligibility determination and to receive program benefits. Staff at many of the VAMCs we contacted told us that delays exist at some or every step of the application process for the Family Caregiver Program, including determining caregiver eligibility, administering medical assessments, and conducting initial home visits. Officials at all of the VAMCs we contacted stated that there were applications at that facility that had been open for longer than 45 days, and one VAMC had over 400 open applications, some going back to June 2013. According to the Caregiver Support Program office, in June 2014, 111 VAMCs had applications that had been in process for 45 to 90 days; most of these facilities (95 percent) had 20 or fewer applications in this category. Additionally, the office reported that 65 VAMCs had applications that had been open 91 days or more; most of these VAMCs (85 percent) had 25 or fewer applications in this category. Furthermore, home visits are not always being made on a timely basis. At one VAMC, initial home visits to assess caregivers’ skills, which are supposed to take place within 10 days of the caregiver’s completion of core training, took from one to two months to complete, which delayed eligibility determinations. 
Staff at some of the VAMCs we contacted were also struggling to maintain the quarterly schedule for follow-up home visits due to the larger-than-expected number of approved caregivers. At one VAMC, CSCs told us that follow-up home visits occur every 6 to 9 months, in contrast to the program’s standard of every 90 days. Delays in home visits could be problematic because these visits provide medical staff with an opportunity to assess the welfare and environment of the caregiver and veteran—issues that may not be evident during clinic visits, such as whether special dietary needs are being met and whether medications are being properly administered. At some VAMCs, the volume of administrative and procedural activities performed by CSCs has curtailed or even displaced their ability to provide services to caregivers and veterans. VISN staff we spoke with told us that as a result of the high workload burden, CSCs who are overwhelmed do not have the ability to perform some caregiver support functions offered by the Family Caregiver Program, such as support groups and counseling. Caregivers and officials from non-VA organizations told us that some CSCs do not return caregivers’ phone calls. One caregiver recounted that when she became desperate to learn how to manage a veteran with increasingly severe symptoms from a TBI, her CSC told her that hers was one of many requests and that the program could not provide counseling for caregivers. This caregiver subsequently received services from a non-profit organization. Officials from the Caregiver Support Program and the VAMCs we contacted also told us that they have taken steps to address staffing shortages for the Family Caregiver Program, although some of the steps have had limited success. For example, in recognition that some VAMCs had more approved caregivers than originally anticipated, the Caregiver Support Program office began to allow VAMCs to request additional CSCs in August 2011. 
By May 2014, VAMCs had submitted 99 requests for salary funding to the Caregiver Support Program office for adding one to five more CSCs to their facility. Specifically, the requests from VAMCs totaled 112 additional CSCs, of which the Caregiver Support Program office approved about 94 additional positions. In March 2014, the Caregiver Support Program also began funding temporary CSCs, who were hired for terms of 120 days. As of May 2014, the office has funded 10 CSCs on a temporary basis. According to a Caregiver Support Program official, these CSCs must meet the same qualifications as a full-time CSC, and their duties are to be established at the local level. In addition, some VAMCs have provided their own clerical support to CSCs for routine administrative tasks. In August 2012, the Caregiver Support Program office also began allowing VAMCs to conduct home visits by telephone after 1 year of satisfactory home visits had been completed for cases that do not pose exceptional medical risk. Some of the VAMCs we contacted are planning to expand their use of follow-up telephone contact with caregivers in lieu of in-person home visits. However, officials at one VAMC told us that they did this with only three to four families because they considered almost all families approved for the Family Caregiver Program to be at risk because of the high proportion of caregivers who were caring for veterans with PTSD who were not clinically stable. VAMCs have also tried various approaches for improving physicians’ willingness and ability to participate in the Family Caregiver Program. For example, officials from the Caregiver Support Program office told us that some VAMCs use a multidisciplinary team—instead of individual physicians—to make determinations for eligibility and for the level of financial stipend, enabling the workload to be shared by multiple clinicians. Another VAMC hired a physician on a part-time basis just to perform eligibility examinations for the program. 
One VAMC with serious workload backlogs is examining the option of using physicians who are already under contract for conducting medical assessments for determining disability benefits, instead of using VAMC treatment physicians. The VAMC director who is exploring this option mentioned that this approach could also resolve physicians’ concerns about compromising the physician/patient relationship posed by determining eligibility for a program that includes a financial stipend. However, Caregiver Support Program officials stated that regardless of the approaches VAMCs may take to conduct the medical assessments, the program’s regulation requires that the physician making eligibility determinations for the Family Caregiver Program be a member of the veteran’s treatment team. To increase nurses’ willingness to provide assistance to the program, some VAMCs offered their nurses overtime pay to conduct home visits, and other VAMCs made temporary work-sharing arrangements with nearby VAMCs for nursing coverage. Officials in the VISNs we contacted told us that some VAMCs have used the funding they received for additional CSC positions to hire nurses for the sole purpose of conducting home visits for the program because of the heavy workload. Some officials also told us that they consider the home visit reimbursement amounts to be insufficient to cover their expenses, such as for GPS units, other electronic devices, and time needed for associated administrative activities. However, an official with the Caregiver Support Program office stated that this should not be necessary because the program’s reimbursement for home visits covers salary expenses, travel costs, and time for administrative activities. Some VAMCs have also hired contractors to conduct home visits, although contractors may not be available in some locations. 
A Caregiver Support Program official stated that the budget for the Family Caregiver Program has been adequate to meet operating costs as of March 2014. Nonetheless, VHA officials at all levels told us that VAMC directors were cautious about requesting additional CSCs or hiring additional nurses for making home visits. These officials explained that VAMC directors are concerned that when the Family Caregiver Program’s initial 5-year budget authorization expires, the cost of the additional nurses and CSCs could shift from the Caregiver Support Program office to the VAMCs. VAMC directors stated that their caution is based on experience, in that this shift has occurred in the past with other new VHA programs that had initially received funding support from their program offices. Caregiver Support Program officials acknowledged VAMC officials’ concerns about the program’s funding and added that they are aware of some VAMCs—even facilities with growing numbers of approvals—that are cautious about requesting additional CSCs. Nonetheless, a Caregiver Support Program official stated that VHA continues to request funding for the Family Caregiver Program, including a funding request for fiscal year 2015 that was submitted with the budget request. Notwithstanding incremental efforts to improve staffing levels for the program at some VAMCs, CSCs and VAMC staff predict that staffing shortages and the ensuing workload problems are likely to recur because VHA’s current staffing of the program is not sufficient and overall approvals continue to increase at a steady rate—about 500 approvals per month. As a result, according to VAMC officials, some facilities have not been able to overcome the workload problems that developed upon program implementation. 
A Caregiver Support Program official stated that program officials recognize the need to formally re-evaluate key aspects of the Family Caregiver Program, including program staffing and the processes for eligibility assessments and home visits, in light of the fact that the program was designed to manage a much smaller caregiver population. This is consistent with federal internal control standards, which emphasize the need for effective and efficient operations, including the use of agency resources such as human capital. The Caregiver Support Program office does not have ready access to the workload data that would allow it to monitor the effect of the Family Caregiver Program on VAMCs’ resources due to limitations with the Caregiver Application Tracker—the IT system that was established for the program. According to federal standards for internal control, agencies should identify, capture, and distribute pertinent information in a form and time frame that permits officials to perform their duties efficiently. However, the Caregiver Support Program office is not able to easily retrieve data that would allow it to better assess workload trends at individual VAMCs—such as the length of time applications are delayed or the timeliness of home visits—even though these data are already captured in the Caregiver Application Tracker. Consequently, Caregiver Support Program officials only retrieve these data on an ad hoc, as-needed basis, which limits their ability to assess the scope and extent of workload problems comprehensively at individual VAMCs and on a system-wide basis. A Caregiver Support Program official told us that the office becomes aware of workload problems at some VAMCs through various informal information channels, such as CSCs’ requests for application extensions and communication with the CSCs and VISN CSC leads. 
However, relying on informal information channels does not provide the office with a comprehensive picture of the program’s workload across all VAMCs, and this puts it in a reactive position of addressing workload issues after problems have already developed. Having a system that allows for easy retrieval of data would better position the office to proactively identify both existing and potential workload problems at VAMCs and work with their CSCs to identify solutions before problems develop or worsen. It would also facilitate access to data needed for pinpointing where certain processes may be getting stalled. For example, a Caregiver Support Program official told us that it would be helpful to be able to track the status of the various phases of the application process to identify the phases that are taking too long, which would help the office to better determine how to improve the overall timeliness of application adjudication. According to this official, in these instances, they provide coaching and support to the CSC and VISN CSC lead and may work with them on identifying solutions, which may include the development of an action plan with the support of VAMC leadership. The Caregiver Application Tracker transfers information about veterans and their caregivers between VAMCs and other VHA entities, including the Health Administration Center, which processes the caregiver stipend payments and administers CHAMPVA. CSCs are responsible for entering nonclinical data into the system, which includes information related to the application process and the status of home visits. The Caregiver Support Program office manages the system and uses its data to monitor certain aspects of the program. Specifically, a program official explained that the system can generate a few basic reports that have been preprogrammed, including a weekly report with aggregate data on the status of applications and stipend payments. 
The Caregiver Support Program office uses these data to monitor the program as well as data from other sources, including data on the number of approved caregivers who have completed training and the number of telephone calls to the Caregiver Support Line. However, data that are not contained in the preprogrammed reports must be extracted from the system on an ad hoc basis. Caregiver Support Program officials told us that they take steps to validate the data they obtain from the system because they have observed some inconsistencies—particularly with ad hoc data—and as a result, they have concerns about its reliability. These officials told us that each time they extract ad hoc data from the system, they validate the data through additional sources to ensure its accuracy. Caregiver Support Program officials explained that they have already taken steps to verify the sources of the data that are used for the system’s reports and do not need to verify these data every time a report is generated. Nonetheless, these officials noted that they will periodically compare the current weekly data that is reported by the system with data from the prior week to ensure that there are no drastic changes which would indicate a need for additional verification. Program officials explained that the system has no agility or flexibility to perform additional tasks beyond its basic tracking functions, and retrieving data from the system on an ad hoc basis often requires time-consuming manual procedures. These officials explained that the system’s data files are organized by veteran, and all of the veterans who apply for the program are captured in the system whether or not their caregivers were approved. As a result, the system that was designed to manage 5,000 records by the end of 2015 had over 30,000 records as of May 2014. 
Officials said that the system’s limited capabilities became more apparent as the number of records in the system increased, which made retrieving data on an ad hoc basis more difficult and time consuming. For example, according to program officials, in response to our request for the number of VAMCs with applications over 45 days old, they had to download all of their relevant data into a spreadsheet, review the data for accuracy, make the necessary corrections, and then manually count the number of applications over 45 days old. A Caregiver Support Program official told us that it took three people about 8 to 10 hours in total to pull this information together. Officials further explained that the Caregiver Application Tracker is a stand-alone system that is not integrated with other VHA systems, and as a result, it cannot perform sophisticated functions or searches that would require pulling information from these other systems. Officials told us that this hinders their ability to monitor certain aspects of the program and results in time-consuming efforts to compile program-related data. For example, the use of respite care—one of the benefits of the Family Caregiver Program—is tracked by a different VHA system. To determine how many veterans in the Family Caregiver Program are using respite care, program officials told us that they must download their data into a spreadsheet and then upload this information to the IT system for respite care use in order to crosswalk the information. Furthermore, data on Family Caregiver Program appeals are maintained in the Patient Advocate Program’s tracking system, which is also managed by a different VHA office. Caregiver Support Program officials told us that they have to request appeals data from this office, and to date, there have been a few requests for caregiver appeals data in response to congressional inquiries. 
However, because the Patient Advocate Program’s tracking system was not designed in a way that allows them to easily retrieve information that is specific to the Family Caregiver Program, the Caregiver Support Program office had received a report for only one of the inquiries as of May 2014. A Caregiver Support Program official told us that they are working with the Patient Advocate Program to identify methods to obtain the appeals data they need, such as by capturing whether an appeal is related to the Family Caregiver Program. As a result of these system limitations, the Caregiver Support Program office does not have the capability to routinely track and analyze the type of workload data it needs to produce a meaningful assessment of the program’s impact on VAMCs. According to federal standards for internal control, agencies should conduct monitoring activities to assess the quality of performance over time and should use the results to correct identified deficiencies and make improvements. However, the lack of ready access to comprehensive workload data impedes the program office’s ability to proactively identify and correct workload problems as they manifest or to identify and make modifications as necessary to ensure that the program is appropriately structured to meet caregivers’ demand for its services. Consequently, a Caregiver Support Program official told us that the program office has only been able to assess workload problems and make interim adjustments, such as allowing VAMCs to request additional CSC positions, based on informal feedback and has not been able to conduct a formal re-assessment of the program that is based on comprehensive program data. A Caregiver Support Program official acknowledged that they recognize the need for a more capable, flexible system that can interface with other departmental systems. 
This official also told us that program office officials are working with their information technology office to develop the requirements for a comprehensive system and that they are exploring the possibility of whether an existing VHA system could be adapted to meet their needs. However, this official was not sure how long it would take to obtain another IT system and whether this effort would be displaced by higher priorities. As a result, it is not clear when program officials will have access to comprehensive workload data for the Family Caregiver Program to better assess how it is functioning. Although it will be difficult to identify changes needed to improve the program’s efficiency and effectiveness without these data, VAMCs’ workload problems will persist—and caregivers will not get the services they need—unless the program office begins taking steps towards identifying solutions. Family caregivers play a crucial role in caring for seriously injured post-9/11 veterans by taking on critically important and often stressful responsibilities for their well-being and potentially keeping them out of costly institutions. The Family Caregiver Program was intended to provide supportive services to this caregiver population, but VHA significantly underestimated program demand. The subsequent stress on resources at some VAMCs resulted in delayed application decisions and home visits—ultimately limiting services to caregivers. Incremental steps to alleviate staffing shortfalls have benefited the program in some locations, but these efforts will not likely be sufficient in light of the steady growth of approved caregivers in the program. After 3 years of operation, it is clear that VHA needs to formally reassess and restructure key aspects of the Family Caregiver Program, which was designed to meet the needs of a much smaller population. 
This would include determining how best to ensure that staffing levels are sufficient to manage the local workload as well as determining whether the timelines and procedures for application processing and home visits are reasonable given the number of approved caregivers. To accomplish this, the Caregiver Support Program office will need to take a strategic, data-driven approach that would include an analysis of the program’s workload data at both the aggregate and VAMC levels. It will therefore be necessary for VHA’s Caregiver Support Program office to obtain an IT system that will facilitate access to the types of data—including interfacing with other VHA systems, such as systems for clinical patient records and respite care—that would allow it to more fully understand the program’s workload and its effect on VAMCs, CSCs, and caregivers. The current approach of relying on informal information channels limits the program office’s ability to comprehend the scope and magnitude of workload problems system-wide and leaves it in a reactive position of adding staff to the program only after significant workload problems have developed. A more capable IT system would enable the Caregiver Support Program to comprehensively monitor the program and proactively identify both actual and potential VAMC workload problems and target areas where improvements could be made. However, without a clear time frame for obtaining another IT system, workload issues will persist unless the Caregiver Support Program office starts to identify solutions to help alleviate VAMCs’ workload burdens, such as modifications to the timelines and procedures for application processing and home visits, and the identification of additional ways to provide staffing support. If the program’s workload problems are not addressed, the quality and scope of caregiver services, and ultimately the services that veterans receive, will continue to be compromised. 
To ensure that the Family Caregiver Program is able to meet caregivers’ demand for its services, we recommend that the Secretary of the Department of Veterans Affairs expedite the process for identifying and implementing an IT system that fully supports the program and will enable VHA program officials to comprehensively monitor the program’s workload, including data on the status of applications, appeals, home visits, and the use of other support services, such as respite care. We also recommend that the Secretary of the Department of Veterans Affairs direct the Undersecretary for Health to identify solutions in advance of obtaining a replacement IT system to help alleviate VAMCs’ workload burden, such as modifications to the program’s procedures and timelines, including those for application processing and home visits, as well as the identification of additional ways to provide staffing support, and use data from the IT system, once implemented, as well as other relevant data to formally reassess how key aspects of the program are structured and to identify and implement modifications as needed to ensure that the program is functioning as envisioned so that caregivers can receive the services they need in a timely manner. We provided a draft of this report to VA for review and comment. While the draft was at VA for comment, officials from VHA’s Management Review Service expressed concerns with two of the three recommendations in the draft report. Officials expressed concern that our recommendation to expedite the process for identifying and implementing an IT system for the Family Caregiver Program was directed to the Undersecretary for Health. They explained that obtaining a new IT system would require the involvement of multiple offices within VA, including central offices that are under the Secretary of VA. 
Based on this information, we revised our recommendation and have redirected it to the Secretary to ensure that it is inclusive of all necessary offices within the department. Officials also commented on our recommendation about using data from the new IT system to formally reassess key aspects of the program and make modifications as needed. They suggested broadening the recommendation to include data from other sources, such as any data they may obtain through the solutions they implement in advance of obtaining a new IT system. In consideration of this information and the fact that we refer to relevant data from other IT systems in our report, we modified our recommendation to state that the Undersecretary for Health should use data from the IT system, once implemented, as well as other relevant data to formally reassess how key aspects of the program are structured. As a result of these revisions, VA concurred with all three of our recommendations in its letter, which is reprinted in appendix II. VA also provided technical comments, which we incorporated as appropriate. In concurring with our third recommendation to use data from the IT system as well as other relevant data to reassess the program, VA did not mention using data from the new IT system as part of its evaluation. As a result, we are concerned that VA’s proposed actions only partially address this recommendation. Specifically, VA’s response focused on using relevant information from solutions developed in response to our second recommendation as well as other relevant data to formally reassess key aspects of the program. A VHA official explained that no one knows how long it will take to develop the new IT system, or how long it will be before data from the system are available, and as a result, VHA developed their response based on actions they knew they could accomplish. 
However, the substance of our recommendation is focused on using comprehensive workload data from the new IT system as the foundation of a data-driven program analysis. Without such data, VHA will not be positioned to make sound, well-informed decisions about the program, potentially allowing it to continue to struggle to meet the needs of the caregivers of seriously wounded and injured veterans.

We are sending copies of this report to the Secretary of Veterans Affairs, appropriate congressional committees, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or williamsonr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of the report. GAO staff who made major contributions to this report are listed in appendix III.

[Appendix table: data reported by Veterans Affairs Medical Center (VAMC); totals across 140 VAMCs. Table data not reproduced.]

In addition to the contact above, Bonnie Anderson, Assistant Director; Frederick Caison; Christine Davis; Cathy Hamann; Jacquelyn Hamilton; Giao N. Nguyen; and Chan-My J. Sondhelm made key contributions to this report.
In May 2010, Congress required VA to establish a program to support family caregivers of seriously injured post-9/11 veterans. In May 2011, VHA implemented its Family Caregiver Program at all VAMCs across the country, offering caregivers an array of services, including a monthly stipend, training, counseling, referral services, and expanded access to mental health and respite care. In fiscal year 2014, VHA obligated over $263 million for the program. GAO was asked to examine VA's implementation of the Family Caregiver Program. This report examines how VHA is implementing the program, including the types of issues that have been identified during initial implementation. GAO obtained and reviewed relevant policy documents and program data and interviewed officials from VHA's Caregiver Support Program office. GAO also met with officials from five VAMCs and their corresponding Veterans Integrated Service Networks to obtain information on program implementation at the medical facility level. The Veterans Health Administration (VHA)—within the Department of Veterans Affairs (VA)—significantly underestimated caregivers' demand for services when it implemented the Program of Comprehensive Assistance for Family Caregivers (Family Caregiver Program). As a result, some VA medical centers (VAMCs) had difficulties managing the larger-than-expected workload, and some caregivers experienced delays in approval determinations and in receiving program benefits. VHA officials originally estimated that about 4,000 caregivers would be approved for the program by September 30, 2014. However, by May 2014 about 15,600 caregivers had been approved—more than triple the original estimate. The program's staffing was based on VA's initial assumptions about the potential size of the program and consisted of placing a single caregiver support coordinator at each VAMC to administer the program. 
In addition, each VAMC was to provide clinical staff to carry out essential functions of the program, such as conducting medical assessments for eligibility and making home visits. This led to implementation problems at busy VAMCs that did not have sufficient staff to conduct these program functions in addition to their other duties. As a result, timelines for key program functions, such as those for completing applications within 45 days and making quarterly home visits to caregivers, are not being met. VHA has taken some steps to address staffing shortages; however, some VAMCs have not been able to overcome their workload problems because the program continues to grow at a steady rate—about 500 approved caregivers are being added to the program each month. Federal internal control standards emphasize the need for effective and efficient operations, including the use of agency resources. The Caregiver Support Program office, which manages the program, does not have ready access to the type of workload data that would allow it to routinely monitor the effects of the Family Caregiver Program on VAMCs' resources due to limitations with the program's information technology (IT) system—the Caregiver Application Tracker. Program officials explained that this system was designed to manage a much smaller program, and as a result, the system has limited capabilities. According to federal standards for internal control, agencies should identify, capture, and distribute information that permits officials to perform their duties efficiently. However, outside of obtaining basic aggregate program statistics, the program office is not able to readily retrieve data from the system that would allow it to better assess the scope and extent of workload problems at VAMCs. Program officials also expressed concern about the reliability of the system's data, which they must take steps to validate. 
The lack of ready access to comprehensive workload data impedes the program office’s ability to monitor the program and identify workload problems or make modifications as needed. This runs counter to federal standards for internal control, which state that agencies should monitor their performance over time and use the results to correct identified deficiencies and make improvements. Program officials told GAO that they have taken initial steps to obtain another IT system, but they are not sure how long it will take. However, unless the program office begins taking steps toward identifying solutions prior to obtaining a new system, VAMCs’ workload problems will persist and caregivers will not be able to get the services they need. GAO recommends that VA (1) expedite the process for implementing a new IT system that will enable officials to obtain workload data; and that VHA (2) identify solutions to alleviate VAMCs’ workload burden in advance of obtaining a new IT system, and (3) use data from the new IT system, once implemented, and other relevant data, to reassess the program and implement changes as needed. VA agreed with GAO's recommendations.
The CCDF is the primary federal funding source to help states subsidize the cost of child care for low-income parents and to improve the quality of care. For a parent to be eligible for CCDF funds, their children must be younger than 13 years old and living with them, and the parents must be working or enrolled in school or training. States may design their programs and establish work requirements, payment rates, family copayments, and other program rules within the broad parameters outlined by federal law and regulations. States may impose additional eligibility requirements, including different income thresholds, but must set the maximum family income eligibility requirement at or below 85 percent of the state median income for families of the same size. Table 1 shows the eligibility thresholds applicable to our fictitious families in the states we tested. Families may choose to purchase care from any legally operating child care provider, which may include child care centers, home-based providers, family members, neighbors, and after-school programs. Providers must be approved by the state to receive CCDF subsidies. HHS requires that states have licensing standards for child care providers, but federal law does not determine these standards or which types of providers they apply to. Some states require relative providers to undergo background checks with fingerprints, criminal and sex-offender checks, or home inspections, but other states have less stringent requirements. According to HHS, in 2008, 58 percent of children in the program were cared for in a licensed center-based child care facility, 13 percent were cared for in a licensed or regulated home-based setting, 12 percent by an unregulated relative provider, and 17 percent in a variety of other arrangements. State and county CCDF agencies may pay child care providers or families directly.
Payments to families may be in the form of a child care certificate that may be used only as payment or deposit for child care services. In some states, providers can directly bill the state through automated systems and have funds directly deposited into a personal bank account or receive a check by mail. In addition, families are required to contribute to the cost of care, in the form of a copayment, unless states exempt families below certain income thresholds from this requirement. CCDF rules also provide some guidance on establishing reimbursement rates for child care providers and require that a specified portion of funds be set aside for activities designed to enhance child care quality. The Child Care and Development Block Grant Act of 1990 first authorized block grants to be given to states for child care assistance, and the Personal Responsibility and Work Opportunity Reconciliation Act of 1996 further expanded the grants to states, creating the current CCDF. In fiscal year 2009, $7 billion was expended for the CCDF block grants, of which $2 billion was attributable to the passage of the Recovery Act in 2009. CCDF has discretionary, mandatory, and matching components. In order to receive the matching component, a state must meet a number of spending requirements. In addition to CCDF, states may also fund child care subsidy programs from the Temporary Assistance for Needy Families (TANF) and Social Services Block Grants. Regarding Recovery Act funds provided to states for CCDF, as of September 3, 2010, HHS reported that it had disbursed to the states $1.2 billion of the $1.9 billion allocated for CCDF.
In order to receive these funds, a state must: (1) provide matching funds at the state’s current Medicaid match rate; (2) obligate the federal and state share of matching funds in the year in which the matching funds are awarded; (3) obligate all of its mandatory funds in the fiscal year in which the mandatory funds are awarded; and (4) obligate and expend its maintenance of effort (MOE) funds in the year in which the matching funds are awarded. (MOE means a state must continue to expend its own funds at the level it was matching the former Aid to Families with Dependent Children-linked child care programs in fiscal year 1994 or fiscal year 1995, whichever was greater.)

In our 2004 report, states reported that they had performed some activities to assess the extent to which their programs were at risk of improper payments, but these activities often did not cover all payments that could be at risk. Since that report, HHS reported that it has engaged in several activities to help states continue to focus on improving their internal controls. For example, in response to the recommendations in our 2004 report, HHS organized, through the Child Care Bureau, a federal project team to draft an approach to address internal controls, using GAO’s report Internal Controls: Internal Control Management and Evaluation Tool as a guide. This effort included drafting tools for states to use to conduct internal control self-assessments, estimates of payment error rates, and guidance for developing cost-benefit assessments of internal control processes; identifying and sharing best practices among states for minimizing improper payments; and taking actions to expand the system that matches state enrollment data across several programs to include CCDF. We did not review whether the states in our proactive tests and case studies had taken steps to strengthen controls based on these initiatives.
Our proactive testing revealed that CCDF programs in the five states we tested were vulnerable to fraud because states did not adequately verify the information of children, parents, and providers and lacked adequate controls to prevent fraudulent billing. In 7 of 10 cases in four states, our fictitious parents and children were admitted into the CCDF program because states did not verify the personal and employment information provided by the applicants. Three of those states paid $11,702 in child care subsidies to our fraudulent providers, and two states allowed the providers to overbill for services beyond their approved limit. Only one state successfully prevented our fictitious applicants from being admitted into the program, but officials from that state told us they perform only limited background checks on providers and cannot immediately detect overbilling. Table 2 provides information about each of our undercover tests.

GAO, Internal Control Management and Evaluation Tool, GAO-01-1008G (Washington, D.C.: August 2001).

Washington, test 1: Parent was approved 3 weeks after completing a brief phone interview without requesting the applicant’s employer pay stubs. No identification documents were required for the parent or children. Department of Social and Health Services (DSHS) failed to detect the Social Security number (SSN) of a deceased individual used by the parent and the incorrect SSN used by the children. Officials told us that it is not a program requirement to verify SSNs for beneficiaries of the program. DSHS initially denied the fictitious provider’s application because the provider’s name and SSN did not match. However, the provider was allowed to resubmit with the same personal information and a different Social Security card with a real SSN. Provider was approved within 12 days.
Parent and provider were able to bill for hours exceeding the authorized amount by informing the caseworker that the parent had worked an additional 20 hours that month. No documentation was required.

Washington, test 2: Parent was approved 12 days after completing a brief phone interview and submitting fabricated pay stubs and an employment verification letter. DSHS failed to detect the SSN of a deceased individual used by the parent and the incorrect SSNs used by the children. Officials told us that they are not required to verify beneficiaries’ SSNs. DSHS initially denied the fictitious provider’s application because the provider’s name and SSN did not match. Provider resubmitted using a different first name, SSN, driver’s license photograph, and birth date, but the same last name, address, and phone number. Even though DSHS had previously rejected a similar provider using the same address, and both applications claimed only one person lived there, DSHS failed to investigate the second application further. The application was approved within 7 days. Provider successfully billed for 16 school holidays for each school-aged child during a month of only 22 school days. DSHS officials stated that any overbilling of holiday hours would be caught during random audits that are conducted monthly on some providers. However, no audit was conducted in this case.

New York, test 1: Parent was approved for assistance after a 30-minute in-person interview where she presented photocopies of false Social Security cards, birth certificates, a driver’s license, and a death certificate for her spouse. Department of Social Services (DSS) failed to detect the SSN of a deceased person used by the parent and the incorrect SSNs used by the children. DSS accepted a fabricated letter as proof that the applicant did not receive survivor benefits for her deceased spouse. Provider passed the background check, even though she had submitted an SSN that did not match her name.
This creates a risk that someone with a criminal background could steal an identity to qualify for child care payments. State officials told us they are not legally permitted to verify SSNs, even for relative child care providers. Provider received payment for 2 months of child care. We ceased proactive tests and returned assistance checks after media reports of county budget cuts in the child care assistance program.

New York, test 2: Parent was approved for assistance 4 weeks after applying by mail using photocopies of fraudulent Social Security cards, birth certificates, a utility bill, pay stubs, and a marriage certificate. Caseworker initially did not approve the application, which contained fraudulent Social Security cards showing the same SSN for the parent and one of her children. However, the caseworker accepted the parent’s explanation that the Social Security Administration had issued her the wrong Social Security card and approved her application when she submitted a card bearing a different SSN. Provider passed the background check, even though he had submitted the SSN of a deceased person. This creates a risk that someone with a criminal background could steal an identity to qualify for child care payments. State officials told us they are not legally permitted to verify SSNs, even for relative child care providers. County workers used an electronic system to prevent overbilling by comparing hours billed to hours authorized and hours worked. In one instance, the provider claimed to have provided 140 hours of care to one child, but the parent’s pay stubs showed she had worked only 60 hours during that time. Caseworkers compared the billed hours to the pay stubs, detected the discrepancy, and reduced the payment to the provider. However, they still permitted him to bill for an extra 2.5 hours a day that the parent supposedly spent at lunch or in transit, even though she worked just 10 minutes from the provider’s house.
Michigan, test 1: Department of Human Services (DHS) lost the fictitious parent’s initial applications on two occasions, one by fax and one by mail. Caseworker then discovered the faxed application 2 months later. On a third application attempt, parent was approved within 30 days after reapplying in person with photocopies of her driver’s license, Social Security cards, and pay stubs. Caseworker did not ask to see original documents. DHS officials said they use state wage, employment, public assistance, and child support databases to verify applicant information, and an SSA database to verify parent and child SSNs. Officials told us the system detected that the names and SSNs of our fictitious applicants did not match, but a caseworker inappropriately approved them for child care assistance. Provider was approved to receive payment for 3 months of child care assistance. State officials told us that a check was issued, but was returned to DHS due to an error at the rental mailbox store. Using the online billing system, provider attempted to bill for more hours than she was authorized to provide care. The system detected the discrepancy and successfully prevented payment for these hours.

Michigan, test 2: Department of Human Services lost the fictitious parent’s faxed application. When the parent resubmitted the application by mail, a caseworker initially claimed that it had not been processed, then said that the office’s mail was not being forwarded from the post office. The caseworker told her that if she reapplied in person, it would take 30 days to process the application. The applicant reapplied in person, submitting photocopied documents. Caseworker did not interview the applicant or ask to see original documents. DHS denied the first application 4 months after it had originally been submitted and had no record of receiving our second application. However, the third application, which was submitted by the parent in person, was processed within 6 days.
Applicant was denied because the identity of the parent and children could not be verified.

Texas, test 1: Parent was approved for assistance after a 20-minute in-person interview where she presented photocopies of Social Security cards, pay stubs, birth certificates, and a driver’s license. Caseworker did not ask to see original documents. Provider was originally rejected for using an SSN that did not match her name. Caseworkers accepted her explanation that the Social Security Administration had issued her the wrong Social Security card and allowed her to reapply with a new SSN. Provider passed a background check; however, licensing staff at the Texas Department of Family and Protective Services (DFPS) became suspicious of her multiple addresses and out-of-state driver’s license. The staff member requested that the provider appear in person with all original documentation, at which point we stopped our application.

Texas, test 2: Parent completed a 15-minute in-person interview where she presented photocopies of Social Security cards, pay stubs, birth certificates, and a driver’s license. Caseworker did not ask to see original documents. Texas Workforce Solutions (TWS) approved the parent for assistance, even though she had used the SSN of a deceased individual and her children had used incorrect SSNs. TWS also did not perform any checks to determine that the parent’s employer was fictitious. TWS officials told us that they do not have a system to verify the parent and children’s SSNs. DFPS had no record of receiving our initial provider application. Our second provider passed a background check and was approved 13 days after he submitted an application, but his operation was closed when DFPS could not reach him by phone.

Illinois, test 1: Parent’s initial application was denied 7 weeks after caseworkers compared parent and child information against state public assistance, wage, and child support databases.
Caseworkers also used Internet resources to identify the parent, provider, and employer addresses as nonexistent. Parent resubmitted the application using the same personal information with actual street addresses for her residence and employer. Provider also submitted a new application with an actual street address, valid SSN, and a fraudulent out-of-state driver’s license. New application was denied because caseworkers had put a warning on the case file alerting staff to the previously submitted fraudulent documentation. Illinois officials told us they do not currently require relative providers to undergo a sex offender check or a fingerprint background check, but they plan to implement these screenings in October 2010. In addition, they do not require parents to submit pay stubs on an ongoing basis, but rely on 6-month recertification to detect changes in work hours. This creates an opportunity for providers to overbill.

Illinois, test 2: Parent’s initial application was denied 3 months after she applied. Caseworkers compared parent and child information against state public assistance, wage, and child support databases. Caseworkers also used Internet resources to identify the parent and provider’s home addresses as fictitious and the employer as nonexistent. Parent resubmitted the application using the same personal information with actual street addresses for her residence and employer. Provider also submitted a new application with a valid SSN, a nonexistent street address, and a fraudulent out-of-state driver’s license. Resubmitted application was denied when caseworkers read the case file notes, which documented previous problems with the parent and allowed the caseworker to identify discrepancies between the applications. Illinois officials told us they do not currently require relative providers to undergo a sex offender check or a fingerprint background check, but they plan to implement these screenings in October 2010.
In addition, they do not require parents to submit pay stubs on an ongoing basis, but rely on 6-month recertification to detect changes in work hours. This creates an opportunity for providers to overbill.

Several common themes emerged from our proactive testing, showing the specific vulnerabilities in the states’ CCDF programs.

Lack of Effective Controls to Verify Parent and Child Information: Four states did not consistently verify the SSNs and addresses of our fictitious parents and children, potentially allowing unscrupulous providers to use nonexistent children to bill for additional subsidies. While HHS policy does not permit states to require that parents or children submit SSNs, all of the states we tested gave parents the option of submitting this information. However, we found that some states did not verify this information when it was provided. For example, Texas and New York did not verify our fictitious parent and children’s SSNs, which belonged to deceased people. Furthermore, four states accepted photocopies of the parent’s driver’s license, the children’s birth certificates, and all Social Security cards. While there is no federal requirement preventing states from accepting photocopies, they are much more difficult to identify as fraudulent than originals. In contrast to New York and Texas, Illinois, Michigan, and Washington compared information provided by the parent to data in state public assistance databases and, in Illinois and Michigan, state child support databases. In Michigan and Washington, caseworkers found that the applicants were not in these databases but conducted no further verification of their information. In one case, a Michigan caseworker also checked the applicants’ names and SSNs with the Social Security Administration (SSA), but inappropriately enrolled the family in the program even after the system identified their names and SSNs as mismatched.
Michigan denied the other parent because they were not able to verify her identity or that of her children. Illinois denied both fraudulent applications after the public assistance and child support database matches found no record of the family, leading to further checks that identified other inconsistencies in the applications.

Lack of Effective Controls to Verify Parent Income Eligibility: Four states lacked effective controls to verify the parent’s income by contacting the employer directly or comparing the parent’s income to state data, instead accepting fabricated pay stubs as proof of income. Without adequate verification of income, states cannot provide reasonable assurance that only eligible parents are accepted into the program. There is no federal requirement for what income documentation states must collect, but in all five states, parents were required to provide pay stubs and asked to declare other sources of income, such as Social Security, TANF, and/or child support. Caseworkers in New York, Texas, and Washington accepted photocopies of fabricated pay stubs from fictitious businesses and did not have effective controls to verify the existence of the employer, the validity of the company address, or the wages reported by the applicants. While officials in Michigan and Illinois told us they use state employment data to verify income at the time of application, only Illinois successfully prevented both our fictitious applicants from being accepted into the program. Illinois caseworkers said that when they saw that the applicant’s employer did not appear in the state database, they attempted to contact the business directly and were unable to reach a live employee. Caseworkers then verified the fictitious address we provided for the employer and discovered that it was in the middle of Lake Michigan, which caused them to deny the application.
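The kinds of automated eligibility checks described above, such as matching names against SSNs, screening for deceased individuals' SSNs, and verifying employers against state wage data, can be sketched in simplified form. This is an illustrative sketch only; the data sources, class, and function names below are hypothetical stand-ins, not the states' actual systems.

```python
# Illustrative only: hypothetical stand-ins for the kinds of checks described
# above (SSN/name matching, death-record screening, employer verification).
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    ssn: str
    employer: str

def verify_applicant(applicant, ssn_registry, deceased_ssns, state_employers):
    """Return a list of red flags; an empty list means every check passed."""
    flags = []
    # Check 1: does the SSN exist and match the applicant's name?
    if ssn_registry.get(applicant.ssn) != applicant.name:
        flags.append("SSN does not match name")
    # Check 2: does the SSN belong to a deceased individual?
    if applicant.ssn in deceased_ssns:
        flags.append("SSN belongs to a deceased individual")
    # Check 3: does the claimed employer appear in state wage/employment data?
    if applicant.employer not in state_employers:
        flags.append("employer not found in state employment data")
    return flags
```

In such a design, a flagged application would be routed to a caseworker for follow-up, such as contacting the employer directly as Illinois caseworkers did, rather than being approved on the strength of the applicant's explanation alone.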
Lack of Effective Controls to Verify the Background of Relative Child Care Providers: The five states we tested did not conduct thorough provider background checks, generally failing to conduct nationwide background checks, verify SSNs, or compare provider information to sex offender registries. Michigan, Texas, and Washington conducted relative provider background checks using only state conviction data, creating the possibility that a provider with a criminal history in one state could be approved to care for children simply by moving to another state. New York and Illinois officials said they do not verify relative providers’ criminal background, instead matching provider information against state child abuse databases and, in New York, against the state sex offender registry. SSNs are a key element in the verification of a person’s identity, and all the states we tested required that providers submit their SSNs. However, there is no federal requirement that states verify SSNs. This creates the possibility that criminals, including registered sex offenders, could use stolen identities to obtain federal subsidies to care for children. We found that New York did not verify our fictitious providers’ SSNs, approving two child care providers using SSNs that did not match their names, one of which belonged to a deceased person. Michigan officials told us that they also do not verify provider SSNs, while officials in Illinois said they do not currently compare relative provider information to lists of registered sex offenders, potentially putting children at risk. However, officials said they plan to implement this screening in October 2010. While we did not test states that require fingerprinting, nationwide fingerprint background checks provide more assurance of an applicant’s identity than background checks without fingerprints.
Our tests were limited to scenarios in which the provider was an unlicensed relative; therefore, our results cannot be applied to licensed child care providers, such as day care centers, which are typically subject to greater regulation.

Lack of Effective Controls to Flag Suspicious Applications for Further Review: Three of four states did not have controls to flag fictitious parents and providers who reapplied to the program after their initial application had been identified as potentially fraudulent, creating the risk that applicants rejected for fraud will be able to gain admittance into the program simply by submitting slightly different information. In Washington, one fictitious provider’s application was initially rejected because a query of a federal database found that his name and SSN did not match. We then created a new fictitious provider that shared the same last name, mailing address, home address, and phone number as the rejected provider, but had a different first name, SSN, driver’s license photograph, and birth date. Even though the Department of Social and Health Services (DSHS) had previously rejected a similar provider using the same address, and both applications claimed only one person lived there, DSHS failed to investigate the second application further. Instead, the application was approved in 7 days. In the other Washington case and in one Texas case, caseworkers questioned why the provider’s SSN did not match her name but allowed her to submit a different SSN after she claimed that the SSA had issued her the wrong card. A caseworker in New York accepted the same explanation from a parent who initially used a Social Security card bearing the same SSN as her daughter. By contrast, after rejecting both of our parent applications as fraudulent, Illinois caseworkers added warnings to the case file notes to alert other staff to the previously identified fraud.
When we submitted new applications with slightly different information, caseworkers linked the new applications to the rejected applications and prevented our fictitious parents from being approved for child care assistance.

Weak Controls to Prevent Fraudulent Billing: All three states in which we tested billing procedures had some controls to prevent overbilling, but vulnerabilities in two states allowed providers to obtain payment for more hours than they were authorized to provide care. In Washington, the automated billing system prevented providers from claiming more than their authorized hours for regularly scheduled care, but allowed them to bill for additional “school holiday” hours without any documentation. Exploiting this vulnerability, both providers billed for excessive holiday hours; one provider successfully billed for 30 school holiday hours for each of the two children she cared for, even though she was not authorized to provide care during school hours. Furthermore, the same provider obtained payment for an additional 20 hours of care by having the parent tell her caseworker that she had worked extra hours that month. The caseworker did not require the parent to submit any documentation before authorizing the additional hours. New York required one parent to submit her pay stubs as proof of hours worked and used an electronic system to compare these to the hours billed by the provider. When our provider attempted to bill for 140 hours in April 2010, but the parent’s pay stubs showed she had worked only 60 hours, caseworkers identified the discrepancy and reduced the provider’s payment. However, they allowed the provider to bill for an extra 2.5 hours each day for time that the parent spent at lunch or in transit, even though she worked just 10 minutes from the provider’s house. Michigan used an automated system in which parents and providers separately reported the number of hours of care provided.
The system compared these two reports to each other and to the hours of care authorized, detected that the provider had overbilled by 5 hours every 2 weeks, and reduced the payment to the authorized amount. We were not able to test controls over unauthorized billing in Illinois and Texas, but officials told us that parents are not required to submit pay stubs on an ongoing basis, and provider bills are compared only to the number of hours the provider is authorized to provide care. If a parent began working fewer hours but did not report the schedule change, a provider could continue to bill at the authorized level until 3 to 6 months later, when the parent submitted pay stubs as part of the recertification process.

Delays in Processing Applications

In three counties in three states, 3 months or more elapsed between the date our fictitious applicants submitted their application and the date an agency first responded to the application. Parents applying to CCDF programs have a reasonable expectation of a timely decision on their applications. While there is no federal standard for timeliness, some states we tested established program standards for response times. For example, Michigan program requirements state that a decision be rendered within 45 days, and Washington requires that a decision be made within 30 days. In our proactive tests, approval time frames for parent applicants ranged from the same day to over 4 months, with an average of 42 days to render a decision. In three cases, 3 months elapsed from the date a parent or provider submitted an application to the date the agency first responded. For example, one agency in Illinois received the parent’s application on September 3, 2009, but did not begin verifying the parent’s eligibility until November 16, 2009, finally issuing a denial letter on December 2, 2009.
During this time, our undercover investigators repeatedly tried to call the agency, but frequently received a voicemail stating that calls would not be accepted due to the volume of calls and paperwork. Program officials told us that at the time of our application, that office was backlogged, but that normally an applicant would receive a final determination within 45 days. Despite Michigan’s program standard of a response within 45 days, DHS denied our parent’s first application 4 months after it was originally submitted and lost her second application. However, her third application, which was submitted months later, was processed within 6 days. Michigan officials told us that at the time of our first two applications, both counties were in the process of adopting a new software program. The review process for one provider in Texas took 4 months. In the other case, the agency had no record of the first provider application and rendered a decision on the second within 13 days. By contrast, both counties in Washington approved parent and provider applications within 2 months of receipt. Table 3 shows the vulnerabilities we identified in our 10 undercover tests of the CCDF programs by state.

To prevent fraud, waste, and abuse, it is essential for states to have a well-designed system that includes preventive controls, detection and monitoring, and investigations, according to GAO’s fraud prevention model. Preventive controls in a program like CCDF would involve comparing applicant-provided information to government or third-party data and establishing controls to prevent the payment of unauthorized bills. Detection and monitoring would involve data mining for fraudulent applicant information and suspicious billing transactions, while investigation and prosecution of those caught committing fraud in the CCDF would serve as a deterrent to others contemplating defrauding the program.
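The preventive controls this model calls for lend themselves to simple automated checks. The sketch below illustrates three such checks: a name-to-SSN cross-check against a reference file, a flag for reapplications that share identifying details with a previously rejected application, and a three-way cap on payable hours similar in spirit to Michigan's comparison of parent reports, provider reports, and authorized hours. All field names, data structures, and matching rules here are hypothetical illustrations, not any state's actual system.

```python
# Hypothetical preventive-control checks; data structures and matching
# rules are illustrative only, not any state's actual CCDF system.

def name_matches_ssn(name, ssn, reference_records):
    """Cross-check an applicant's name against the name on file for that SSN."""
    return reference_records.get(ssn) == name

def shares_details_with_rejected(applicant, rejected_applications):
    """Flag an applicant whose last name plus address or phone number match a
    previously rejected application, so a caseworker can review it manually."""
    for rejected in rejected_applications:
        same_last_name = applicant["last_name"] == rejected["last_name"]
        shared_contact = (applicant["address"] == rejected["address"]
                          or applicant["phone"] == rejected["phone"])
        if same_last_name and shared_contact:
            return True
    return False

def payable_hours(provider_billed, authorized, parent_reported):
    """Cap payment at the lowest of the hours billed by the provider, the
    hours authorized, and the hours the parent reports having worked."""
    return min(provider_billed, authorized, parent_reported)
```

Under this sketch, the second fictitious Washington provider described earlier (same last name and address as a rejected applicant) would have been routed for manual review rather than approved, and a bill exceeding either the authorized or the parent-reported hours would have been reduced automatically.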
However, the CCDF regulations do not require states to implement specific measures to prevent or detect fraud, resulting in application processes and requirements that vary considerably by location. For example, in Washington, one of our fictitious parents was approved after giving a caseworker her information during a phone interview and having a woman, purporting to be her employer, call DSHS to verify her work schedule. In a New York county, the parent was required to provide photocopies of birth certificates and Social Security cards for all household members, a death certificate for the deceased spouse, pay stubs, a form signed by her landlord, a driver’s license, and a letter from SSA stating that she did not receive survivors’ benefits. We also identified five closed criminal cases in which parents and providers defrauded the CCDF program. The parents and providers in these cases used methods similar to those in our proactive tests, including falsifying documentation to claim eligibility, billing the state for fictitious children, and colluding to obtain payment for services that were never provided. Table 4 provides a detailed summary of cases in which individuals were convicted of fraudulently obtaining CCDF funds; a more detailed narrative on each case follows the table.

Case 1: Two providers fraudulently billed Indiana for $150,310 in CCDF funds and operated a child care facility in a home where a convicted felon lived. According to the investigator’s report, the fraud began even before the pair opened their first home-based child care facility, when the provider who held the child care license (license holder) failed to disclose that her stepfather, a twice-convicted drug offender, lived in the home. The stepfather had access to CCDF funds through joint bank accounts with both providers, according to investigators, who also observed him transporting children for child care operations.
A parent of one child enrolled in the child care home even told investigators that she was instructed not to mention the stepfather to investigators because “he wasn’t supposed to be there.” Once the child care home opened, the unlicensed provider (operator) led its day-to-day operations. As part of the scheme, she began requesting electronic access cards from parents of children enrolled at the child care home, according to the investigator’s report. These electronic access cards recorded the hours a child spent in a child care facility, allowing the operator to inflate the number of hours she billed the state for child care. Several parents interviewed by investigators acknowledged that they freely gave the operator their electronic access cards but one parent said the operator threatened to take away her spot if she did not provide the access card. Many parents acknowledged to investigators that the cards had been fraudulently swiped at times when care could not have been provided for a variety of reasons including the parent having a conflicting work schedule, the child having been moved to another child care facility, or the child being in another state at the time. While some parents claimed their electronic access cards were used without their knowledge, the investigator found that others were aware of the scheme or actively participating in it. For example, one parent provided her swipe card to the operator in exchange for cash payments every 2 weeks. Several parents also alleged that the provider mistreated their children, and the investigator observed that the child care facility was dirty and possibly unsafe. For example, several parents said that their children were not being fed all day, even though the child care home received subsidies from a federal nutrition program. Parents also alleged that their infants’ diapers were not changed, resulting in rashes. 
One parent claimed that the operator put her two children, ages 10 and 12, in a closet as punishment. In addition to the alleged mistreatment, several parents told the investigator that the operator attempted to extract additional payment from them by charging cash co-payments in excess of the amount permitted by the state, or by convincing them to buy her groceries using their electronic access cards to the Supplemental Nutrition Assistance Program (SNAP). About a year after they opened their first child care home, the license holder received a license for a second child care home. Like the first facility, the new child care home was run primarily by the operator with assistance from the license holder and provided additional opportunities for fraudulent billing. For example, the second facility was not always open during its regular hours and the two child care facilities did not have enough staff to cover all of the shifts for which they were supposedly open. When questioned by investigators, the operator claimed she was covering both sites by working “24 hours a day and 7 days a week.” The scheme was discovered when a caseworker noted that the operator’s own children were receiving subsidies to attend the child care home where she worked. Furthermore, several clients referred to the operator as the owner of the child care home, even though she was not licensed. In March 2009, the license holder pled guilty to welfare fraud, a class D felony, and the operator made the same plea in July 2009. Both were sentenced to 2 years in a state department of corrections jail. Case 2: The owner of a Missouri child care center fraudulently billed the state for over $112,242 in CCDF funds, neglected some children in her care, and operated a second unlicensed child care center from her home. 
Over 6 years, the child care center’s owner fraudulently obtained CCDF funds by falsifying invoices and attaching altered or forged attendance sheets as support for her claims, according to the plea agreement. For example, the owner forged parent signatures and attendance times, in some cases incorrectly spelling parents’ names on the attendance sheets. Several parents admitted to the investigator that they never signed any attendance sheets for the child care center, but others said that when they first enrolled their children in the child care center, the owner had them sign blank attendance sheets. One of these parents signed a single attendance sheet her first month, and the owner told her she would “take care of everything for (the parent) after that.” Using these falsified attendance sheets, the owner billed full-time hours for children who attended part time and billed for children who never attended her child care center or children who had left the child care center. In addition to financial fraud, investigators uncovered instances of neglect during their interviews with parents. For example, one parent said she went to pick up her children at the child care center, but found them at the owner’s house locked in a car. Another parent said that when she arrived at the facility one day, no one could tell her where her daughter was. The parent found her daughter outside unattended. The investigators also found that the owner illegally billed the state for care provided at an unlicensed child care center that she ran out of her apartment. For example, one parent paid cash for child care provided at the unlicensed home, even though the owner billed for care supposedly provided to the same child at the licensed child care center. The parent confirmed that she was not eligible to receive CCDF funds and that her child was too young to be enrolled in the licensed facility.
Furthermore, a former employee testified that she watched as many as 25 children at one time in the apartment. Investigators noted that the owner moved children from the licensed facility to her apartment in the evenings to avoid violating the required worker-to-child ratio at her licensed facility. The owner attempted to obstruct the investigation into her child care center, according to the investigator’s report. She encouraged parents to lie to investigators about the forged signatures, told one parent to ignore interview requests, and harassed another parent by threatening to have her fired. Despite her attempts, the owner was charged and pled guilty to mail fraud in December 2008. She received 15 months in prison and was required to pay $112,242 in restitution.

Case 3: An Oregon couple created fictitious child care providers using falsified documents and stolen Social Security numbers in order to fraudulently obtain $122,616 in CCDF funds. The scam started when each parent claimed to be living at a separate address with the children, according to the investigator’s report. The husband applied for the CCDF program using a relative’s address in Lake Oswego, Oregon, even though he lived in Portland with his wife and children. The husband began receiving child care benefits in June 1998 and the wife began receiving separate benefits for the same children just one month later. Investigators later determined that the husband was the children’s caregiver, which should have disqualified the family from the program. To collect the CCDF funds, the couple created fictitious child care providers who supposedly cared for their children out of their private residences, according to the investigator’s report. Using a fraudulently obtained Social Security number, a fake name, and the address of a commercial mailbox store in Vancouver, Washington, the husband applied to become a child care provider.
Once the fictitious provider was accepted into the program, the husband submitted a monthly bill showing the number of hours of care provided and used the mailbox to receive checks from the state, which he cashed using the fictitious provider’s identity. The couple created a different fictitious provider to collect the wife’s child care subsidies using the same method. In this case, however, the husband stole the Social Security number and birth date of his half brother, who lived in Washington and had never worked in Oregon. The investigators found that in order to maintain eligibility for the CCDF program, the husband falsified employment records to show that he was working. He provided pay stubs similar to those sold at office supply stores that showed his employer as “Ablazed and Mystifying Women,” which he described as a warehouse. In fact, there was no such business in Vancouver, no government agency had any records of the business, and the investigator was unable to find the building when she drove to the address. Furthermore, the husband’s alleged home address was a 30-minute drive from his supposed employer, but he had a suspended driver’s license and no vehicles registered in his name. However, he did have a driver’s license and three vehicles illegally registered in the name of one of the fictitious child care providers. A caseworker referred the husband for investigation because his employment records looked false, according to the investigator’s report. When the couple became aware of the investigation, they attempted to disrupt it. The wife requested that the investigator not call her anymore, and the husband claimed to have a series of family emergencies that prevented him from meeting with investigators. The father pled guilty to two counts of theft, five counts of identity theft, three counts of aggravated theft, and two counts of theft by deception and was then sentenced to 8 years in prison and ordered to pay $137,215 in restitution.
The mother pled guilty to one count of unlawfully obtaining public funds, one count of forgery, two counts of identity theft, and two counts of theft by deception and was then sentenced to 3.5 years in prison.

Case 4: Five women colluded to fraudulently bill Washington for $8,806 in CCDF funds and, in some cases, assisted others in illegally obtaining child care licenses. For approximately 6 years, the five women received funds from the CCDF program and the food reimbursement program as child care providers, according to the indictment. The five women billed for fictitious children and children who attended school during the hours the women claimed to be caring for them. One woman billed for the same child twice, using the child’s real name and identifying information and also using a false name with the child’s real identifying information. Two of the women assisted others in applying for child care provider licenses using false identities even though the women had reason to believe the individuals were ineligible due to criminal history or immigration status, according to the indictment. One of the women provided one applicant with a fake name and Social Security number, and provided another applicant the name and Social Security number of one of the children under her care. Two other women provided fake references for the ineligible applicants’ false identities. After they became aware of an investigation into their activities, the five women met twice with a potential witness for the grand jury and tried to convince her to lie to investigators. A federal investigation led to an indictment by the U.S. Attorney’s Office in August 2007 charging all five women with conspiracy to make a false statement relating to a health care program and theft of public funds. Four of the women pled guilty to conspiracy to make a false statement and one pled guilty to theft of public funds.
Each woman was sentenced to 3 years of probation along with restitution for the amount stolen from the program.

Case 5: Two providers operating a large child care center in Wisconsin obtained over $360,000 in CCDF funds by colluding with parents to help them fraudulently qualify for child care assistance, and then offered free housing and kickbacks to parents in exchange for enrolling their children in the child care center. One of the women held the child care license from the state, while the other operated the child care center. According to the prosecutor, the operator had previously run licensed child care centers, but these were closed and her license revoked due to improper billing. However, no charges were filed against her. Having lost her license, the operator went into business with her daughter, who obtained a child care license from the state in her name. Court transcripts note that at first, the child care center operated out of various private residences and served a small number of children. Later, it expanded into a new facility that was licensed to serve 64 children per shift, with three shifts each day. According to the complaint filed by the prosecutor, the operator solicited mothers to register their children at the child care center, particularly those with large numbers of children. She then charged the state for full-time care for each child, about $200 per week, even though some children did not attend the child care center at all and others only attended sporadically. To help these women meet the eligibility requirements for the CCDF program, she forged documentation showing that they were employed at her child care center. Some of these women never worked at the child care center, while others worked for a limited period of time. In at least one case, the operator used CCDF funds to pay a kickback to one woman whose children were enrolled at the center.
The prosecutor said that, to generate more income, the operator and license holder used funds from the CCDF program to buy several rental properties, which the operator offered as rent-free housing to parents who enrolled their children in her child care center. As part of this agreement, the operator provided the parents with proof of employment that allowed them to apply for rental assistance from the city of Milwaukee. According to the prosecutor, the operator received the rental subsidy for those families accepted into the program and did not charge them any rent, a violation of the rental subsidy program’s policy. In April 2009, the operator pled guilty to theft by fraud and was sentenced to 5 years in jail followed by 12 years of extended supervision and $300,000 in restitution payments to the state. The license holder pled guilty to a computer crime for giving her mother access to the child care billing system and was sentenced to 30 days at the house of correction followed by probation. During the daughter’s sentencing, the judge noted that the program “was a joke with little if any oversight.”

Not all eligible families that want to receive CCDF assistance are currently able to receive it for a variety of reasons, and fraud and abuse in the program may further reduce the availability of CCDF funds. This lack of child care assistance forces some families to cut back on spending for daily needs, reduce their working hours, or curtail their education. Of the 41 waitlisted parents we interviewed, 16 described multiple hardships: facing financial difficulties, quitting their job or education program, and worrying about negative impacts on their children’s development. Twenty-four parents said they had budget problems, which forced some to cut back on spending for daily essentials such as groceries, clothing, gasoline, electricity, car payments, health insurance, or their child’s lunch money. Some parents told us they had taken out loans or depleted family savings to pay for child care.
Twenty-four parents reported problems maintaining stable employment or enrollment in education, in some cases having to turn down or quit jobs that did not pay enough to cover child care expenses. Nine parents also told us that a lack of child care caused a variety of negative effects on their children, including one parent who reported having to send her child to an unlicensed relative who does not offer the educational activities that a high-quality child care center might. Another parent said that her two developmentally disabled children would receive better care at a child care center than they currently receive from their elderly grandparents. See table 5 for the experiences of 10 applicants we contacted. These applicants expressed issues common to many of the parents we interviewed. We did not attempt to verify the applicants’ statements. According to state program administrators, Recovery Act funds for the CCDF program have enabled some states to reduce or eliminate waiting lists or expand CCDF eligibility to cover additional families, but many eligible families remain without child care assistance. States’ CCDF implementation plans for fiscal years 2008 to 2009 identified 25 states that have processes to maintain some type of waiting list when demand exceeds available funds. We contacted 11 states that had active waiting lists between November 2009 and March 2010. Mississippi used Recovery Act funds to process all of the children on its waiting list, removing 7,000 individuals from the list in April 2009. Arkansas also eliminated its waiting list but later reinstated it due to an increase in new applications. In Florida, officials used Recovery Act funding to allow children to stay in the program for longer periods. In North Carolina, officials told us Recovery Act funds allowed them to respond to the state’s high unemployment rate by expanding eligibility to unemployed parents who needed child care to search for a job.
However, California told us that Recovery Act funds have not had a noticeable impact on the size of its waiting list of 134,880 families. Furthermore, New Hampshire had to institute a waiting list due to the increase in demand, though program officials acknowledged that Recovery Act funds delayed the implementation of the waiting list. In one of the counties we tested in New York, cuts to the child care budget during our undercover tests eliminated 1,091 children from the program. However, due to the low income fraudulently reported by the parents, their fictitious children would have continued to receive assistance if we had not withdrawn them from the program.

Between July 16 and September 10, 2010, we briefed HHS officials and CCDF program officials in Illinois, Michigan, New York, Texas, and Washington on the results of our work. We suggested a number of actions that HHS and states should consider to reduce fraud and abuse in CCDF programs, including:

• Require applicants, household members, and providers to submit Social Security numbers in order to receive child care assistance.

• Evaluate the feasibility of validating applicant and family member identity information with SSA.

• Establish more stringent verification requirements for eligibility, including validating applicant-provided information using state databases of wage, employment, public assistance, and child support information; contacting employers directly to verify employment; and using Internet resources to verify address information.

• Implement a system to alert staff to child care applications previously rejected for fraud to prevent the applicant from resubmitting in the same county or another county.

• Evaluate the feasibility of requiring all providers, including relative providers, to undergo national fingerprint criminal history checks and screenings against the national sex offender registry and state child abuse databases.
• Establish more stringent verification of bills submitted by child care providers, including requiring program staff to verify that the number of reported hours of child care corresponds to the number of hours worked by the parent; denying unsupported claims for extra hours worked; and restricting the number of hours that a provider can bill over the authorized amount without documentation.

Each of the states we tested already had some of these controls in place or planned to implement some of them in the near future. For example, Illinois, Washington, and Michigan conduct some verification of applicant information against state child support and public assistance databases. Several counties in New York use an electronic system to compare hours worked on parents’ pay stubs to the hours billed by providers, and officials told us they plan to put the system into use statewide. Texas officials told us that they intend to implement a new electronic billing system that will include controls to prevent overbilling. However, in some cases, officials cited concerns about the cost and legal implications of increased verification. For example, Texas expressed concern that conducting fingerprint criminal history checks on providers would impose additional costs on the program. Some states noted that they are not permitted to require parents to submit SSNs, while New York state officials told us they do not have the legal authority to verify SSNs submitted by parents or providers. HHS did not respond to issues surrounding the collection and verification of SSNs at the time of our briefing. Recognizing that preventing fraud often involves additional costs, some of our suggested corrective actions allow for HHS and states to evaluate the feasibility of control activities. Responding to our findings, HHS commented that it has recently taken actions to address issues of CCDF program integrity.
For example, HHS officials said that program guidance issued in August 2010 discussed recommended documentation and verification procedures, including data matching with wage and employment databases; data matching with other public assistance databases; and background checks and training for providers. The guidance also highlighted on-site visits to providers to review attendance and enrollment records. Officials noted that an electronic system, the Public Assistance Reporting Information System, uses an SSN match across states to identify red flags of individuals enrolled in benefit programs in multiple states. Only eight states currently use this system for CCDF, but officials said the August 2010 program guidance encouraged more states to join. In addition, officials said that HHS has an ongoing conference call series on program integrity, which has covered promising practices on how to use data mining and automated reports to highlight cases that need further scrutiny. HHS officials also commented on upcoming initiatives related to fraud prevention. For example, officials said that state CCDF applications for fiscal years 2012-2013 will have a stronger focus on integrity, including questions on the verification of eligibility information, and procedures for identifying, investigating, and recovering fraudulent payments. In the coming year, HHS’s Self-Assessment Instrument for Internal Controls & Risk Management will be revised and piloted in more states, and will include fiscal and program evaluation and stricter controls to prevent overbilling. We are sending copies of this report to the Secretary of Health and Human Services and the child care program offices of Illinois, Michigan, New York, Texas, and Washington. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-6722 or kutzg@gao.gov if you have any questions concerning this report.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II.

To proactively test selected states’ fraud prevention controls, we identified 26 states that received more than $100 million from the Child Care and Development Fund (CCDF) for fiscal year 2009. From these states, we identified six that did not require providers to be fingerprinted or undergo site visits and selected the five states receiving the most American Recovery and Reinvestment Act (Recovery Act) funding. We focused on these criteria because fingerprint checks could have identified our investigators as federal employees and site visits would have required us to maintain a physical address for our fictitious child care operator. In addition, we selected counties within the five states that contained large cities, where possible, and did not have waiting lists for assistance to ensure that we did not prevent real families from obtaining assistance. We tested two counties in each of Illinois, Michigan, New York, Texas, and Washington, for a total of 10 undercover tests. Though all types of providers are potentially able to commit fraud, we chose to test cases in which a relative is paid to provide care because these providers are generally subject to less regulation than larger child care centers. Relative providers cared for 12 percent of children in the CCDF program in 2008, while other types of providers accounted for the rest. As such, our results cannot be applied to licensed child care providers, such as child care centers, nor can our results be projected to all state CCDF programs. We used commercially available hardware and software to counterfeit identification and employment documents for bogus parents, children, and providers. Once accepted, we billed the program for care provided to the fictitious children.
We provided fraudulent pay stubs showing the hours worked by parents and reported those hours through automated systems or completed invoices we submitted to the state or county for payment. We received several child care assistance checks, which we did not cash and returned to program officials at the end of our investigation. During our tests, media reports indicated that the child care assistance budget in one county had been reduced. We immediately ended our undercover test in this county and returned the voided check. To select our case studies, we identified criminal convictions of child care assistance fraud nationwide using online databases and Internet resources to identify closed cases. As part of the selection process, we focused on cases involving a high dollar amount of fraud or containing other elements of fraud such as stolen identities. Ultimately, we selected five cases from Indiana, Missouri, Oregon, Washington, and Wisconsin. We reviewed applicable court documents for each case. When possible, we also interviewed investigators and prosecutors in charge of the selected cases to obtain additional case details. To examine the impact of being unable to obtain child care on low-income parents, we contacted officials in states and counties with active child care assistance waiting lists to obtain names of parents currently eligible for child care assistance. States’ CCDF implementation plans for fiscal years 2008 to 2009 identified 25 states that have processes to maintain some type of waiting list when demand exceeds available funds. However, these waiting lists fluctuate over time and we could not identify a centralized list of all states with active waiting lists. We confirmed active waiting lists in 11 of the states from November 2009 through March 2010.
Of these 11 states, Alabama, Indiana, Massachusetts, Minnesota, North Carolina, and Texas provided us with contact information for individuals on state waiting lists, which we used to contact parents directly. Starting from the top of each list, we selected a nonrepresentative sample of 166 parents to contact and interviewed the 41 who responded to our inquiries. We collected demographic information, such as the number and ages of children and the parents’ job title and income. However, we did not attempt to verify the accuracy of the information that they provided to us. Our results are not representative of the entire population of families currently in need of child care assistance. In addition to the individual named above, the following individuals made key contributions to this report: Cindy Brown-Barnes, Assistant Director; Andrew O’Connell, Assistant Director; Joshua Bartzen; Gary Bianchi; Eric Charles; Grant Fleming; Matthew D. Harris; Christine Hodakievic; Aaron Holling; Jason Kelly; Barbara Lewis; Jeffrey McDermott; Andrew McIntosh; Sandra Moore; George Ogilvie; Kimberly Perteet; Gloria Proa; Lerone Reid; Ramon Rodriguez; Timothy Walker; and John Wilbur.
Through the Child Care and Development Fund (CCDF), the U.S. Department of Health and Human Services (HHS) subsidizes child care for low-income families whose parents work or attend education or training programs. In fiscal year 2009, the CCDF budget was $7 billion. States are responsible for determining program priorities and overseeing funds. Providers—who range from child care centers to relatives—bill the state for caring for approved children. Unregulated relatives represent 12 percent of providers in the CCDF program. In response to program fraud and abuse, GAO (1) proactively tested selected states’ fraud prevention controls, (2) examined closed case studies of fraud and abuse, and (3) interviewed parents waitlisted for child care about the effect of this lack of assistance on their families. To do this, GAO investigators posed as parents and unregulated relative providers in 10 scenarios in five states with no waiting lists that each received more than $100 million in CCDF funding for fiscal year 2009. These states did not require fingerprint criminal history checks or site visits. For case studies of past program fraud, GAO reviewed criminal court records and interviewed agency officials. GAO spoke with parents on waiting lists in six states for their perspectives on the effect of being unable to obtain child care. Results cannot be projected beyond these states or unregulated relative providers. The five states GAO tested lacked controls over child care assistance application and billing processes for unregulated relative providers, leaving the program vulnerable to fraud and abuse. Posing as fictitious parents and relative providers, GAO successfully billed for $11,702 in child care assistance for fictitious children and parents. In most cases, states approved GAO's fictitious parents who used Social Security numbers of deceased individuals and claimed to work at nonexistent companies. 
One state also approved a fictitious child care provider with a deceased person's Social Security number, creating the possibility that a criminal using a stolen identity could obtain federal subsidies to care for children. In two other states, GAO successfully billed for hours exceeding those authorized without submitting proof of additional hours worked. One state successfully prevented both fictitious applicants from being accepted, but had weak payment controls. GAO identified five recent closed criminal cases in which parents and providers defrauded the CCDF program. These cases involved parents falsifying eligibility documentation, providers billing states for fictitious children, and collusion between parents and providers to obtain payment for services that were never provided. Fraudulent payments reduce program funds available for eligible parents who depend on child care assistance to maintain employment or attend education programs. In some states, waiting lists are 1 to 2 years long. Parents on waiting lists said that without child care, they contend with multiple hardships--facing financial difficulties, quitting their job or education program, and worrying about negative effects on their children's development. In response, many of the states tested noted that they have plans to implement new controls, but expressed concern about associated cost and legal implications. HHS officials commented they have recently taken actions to address issues of CCDF integrity, including issuing program guidance on verification procedures and conducting conference calls on program integrity.
The federal government and the states share responsibilities for financing and administering Medicaid. As a result of flexibility in the program’s design, Medicaid consists of 56 distinct state-based programs. The challenges inherent in overseeing a program of Medicaid’s size and diversity make the program vulnerable to inappropriate program spending. CMS is responsible for overseeing state Medicaid programs. For example, CMS is responsible for ensuring that states’ capitated managed care payments meet actuarial soundness requirements and that supplemental payments are appropriate, and for supporting and overseeing state program integrity activities—activities intended to address Medicaid fraud, waste, and abuse. Managed care is a significant component of the Medicaid program, with nearly half of all Medicaid enrollees—approximately 20.7 million individuals—enrolled in capitated managed care in 2008. In 2007, combined federal and state spending for managed care totaled more than $62 billion. Under managed care, states use capitation payments to prospectively pay health plans to provide or arrange for services for Medicaid enrollees. Such capitation payments are required by federal law to be actuarially sound. CMS regulations, first issued in 2002, define actuarially sound rates as those that are (1) developed in accordance with generally accepted actuarial principles and practices, (2) appropriate for the populations to be covered and the services to be furnished, and (3) certified as meeting applicable regulatory requirements by qualified actuaries. In order to receive federal funds for their managed care programs, states must submit documentation to CMS regional offices for review, including a description of their rate-setting methodology and data used to set rates. This review, completed by CMS regional office staff, is designed to ensure that a state complies with the regulatory requirements for setting actuarially sound rates. 
Most state Medicaid programs make supplemental payments to certain providers in addition to the standard payments states make to these providers for Medicaid services. For purposes of this testimony, we have grouped supplemental payments into two broad categories: (1) Disproportionate Share Hospital (DSH) payments, which states are required to make to hospitals that treat large numbers of low-income uninsured people and Medicaid patients; and (2) non-DSH supplemental payments, which are not required by statute or regulation. In fiscal year 2010, states made more than $31 billion in supplemental payments; the federal share was more than $19 billion. CMS is responsible for overseeing these payment arrangements to ensure the propriety of expenditures for which states seek federal reimbursement, including whether states were appropriately financing their share. Program integrity activities are designed to prevent, or detect and recover, improper payments throughout the Medicaid program. The Deficit Reduction Act of 2005 expanded CMS’s role regarding Medicaid program integrity, establishing the Medicaid Integrity Program to provide effective federal support and assistance to states to combat fraud, waste, and abuse. CMS’s core program integrity activities include:

- National Provider Audit Program—a program through which separate CMS contractors analyze claims data to identify aberrant claims and potential billing vulnerabilities, and conduct postpayment audits based on data analysis leads in order to identify overpayments to Medicaid providers.

- Comprehensive program integrity reviews—comprehensive management reviews that are conducted every 3 years to assess the effectiveness of each state’s program integrity efforts and determine whether the state’s policies and procedures comply with federal law and regulations.

- State program integrity assessments—annual assessments in which CMS collects data on state Medicaid integrity activities—including program integrity staffing and expenditures, audits, fraud referrals, and recoveries—for the purposes of program evaluation and technical assistance support.

CMS also provides training and technical assistance to states. For example, CMS’s Medicaid Integrity Institute is the first national Medicaid integrity training program and offers state officials training and opportunities to develop relationships with program integrity staff from other states.
Variation in practices across CMS regional offices contributed to these gaps and other inconsistencies in the agency’s oversight of states’ rate setting. For example, regional offices varied in the extent to which they tracked state compliance with the actuarial soundness requirements, their interpretations of how extensive a review of a state’s rate setting was needed, and their determinations regarding sufficient evidence for meeting the actuarial soundness requirements. We also reported in 2010 that CMS’s efforts to ensure the quality of the data used to set rates were generally limited to requiring assurances from states and health plans—efforts that did not provide the agency with enough information to ensure the quality of the data used. CMS regulations require states to describe the data used as the basis for rates and provide assurances from their actuaries that the data were appropriate for rate setting. The regulations do not include requirements for the type, amount, or age of the data used to set rates, and states are not required to report to CMS on the quality of the data. When reviewing states’ descriptions of the data used to set rates, CMS officials focused primarily on the appropriateness of the data rather than their reliability. Additionally, we found that actuarial certification does not ensure that the data used to set rates are reliable. In particular, our review of rate-setting documentation found that some actuaries’ certifications included a disclaimer that if the data used were incomplete or inaccurate then the rates would need to be revised. Furthermore, some actuaries noted that they did not audit or independently verify the data and relied on the state or health plans to ensure that the data were accurate and complete. With limited information on data quality, CMS cannot ensure that states’ managed care rates are appropriate, which places billions of federal and state dollars at risk for misspending. 
States and other sources have information on the quality of data used for rate setting—information that CMS could obtain. In addition, CMS could conduct or require periodic audits of data used to set rates; CMS is required to conduct such audits for the Medicare managed care program. CMS took a number of steps that may address some of the variation that contributed to inconsistent oversight, such as requiring regional office officials to use a detailed checklist when reviewing states’ rate setting; use of the checklist had previously been optional. However, we found variations in CMS oversight even when the checklist was used. Thus, to improve oversight of states’ Medicaid managed care rate setting, we recommended that CMS (1) implement a mechanism for tracking state compliance, including tracking the effective dates of approved rates; (2) clarify guidance for CMS officials on conducting rate-setting reviews, such as identifying what evidence is sufficient to demonstrate state compliance with the actuarial soundness requirements, and how officials should document their reviews; and (3) make use of information on data quality in overseeing states’ rate setting. HHS agreed with these recommendations, and as of June 2011, CMS officials indicated they were investigating ways to create an easily accessible database to help them more closely monitor the status of rate-setting approvals, reviewing and updating its guidance, and looking into incorporating information about data quality into its review and approval of Medicaid managed care rates. In our prior work, we have reported on varied financing arrangements involving supplemental payments that shifted costs from the states to the federal government. In some cases, the providers did not retain the full amount of the payments as some states required providers to return most, or all, of the supplemental payment to the state. 
Our work found that while a variety of federal legislative and CMS actions have helped curb inappropriate financing arrangements, gaps in oversight remain. Because such financing arrangements effectively increased the federal Medicaid share, they could compromise the fiscal integrity of Medicaid’s federal and state partnership. Our most recent reports on supplemental payments underscore these gaps in federal oversight. In May 2008, we reported that CMS had not reviewed all supplemental payment arrangements to ensure that these payments were appropriate and used for Medicaid purposes. In November 2009, we found that ongoing federal oversight of supplemental payments was warranted, in part, because two of the four states reviewed did not comply with federal requirements to account for all Medicaid payments when calculating DSH payment limits for uncompensated hospital care. Recently implemented requirements have the potential to improve oversight of some supplemental payments, but concerns about other payments remain. For example, there are now improved transparency and accountability requirements in place for DSH payments. However, these requirements are not in place for non-DSH supplemental payments, which may be increasing. Specifically, in 2006, states reported making $6.3 billion in non-DSH supplemental Medicaid payments, of which the federal share was $3.7 billion, but not all states were reporting their payments. By 2010, this amount had grown to $14 billion, with a federal share of $9.6 billion. However, according to CMS officials, states’ reporting of non-DSH supplemental payments was likely incomplete. As a result of our prior work, we have made numerous recommendations aimed at improving federal oversight of supplemental payments. Some key recommendations we made have not been implemented by CMS. 
We have recommended that CMS adopt transparency requirements for non-DSH supplemental payments and develop a strategy to ensure all state supplemental payment arrangements have been reviewed by CMS. CMS has taken action to address some of these recommendations, but we continue to believe additional action is warranted. CMS has raised concern that congressional action may be necessary to fully address our concerns. Additionally, given continued concerns associated with Medicaid supplemental payments, we have work under way related to states’ reporting and CMS’s oversight of DSH and non-DSH supplemental payments. See the Medicare Prescription Drug, Improvement, and Modernization Act of 2003, Pub. L. No. 108-173, § 1001(d), 117 Stat. 2066, 2430-2431 (2003) (codified, as amended, at 42 U.S.C. § 1395r-4(j)) and Medicaid Program, Disproportionate Share Hospital Payments, Final Rule, 73 Fed. Reg. 77,904 (Dec. 19, 2008). In December 2011, we testified that the key challenge CMS faced in implementing the statutorily established federal Medicaid Integrity Program was ensuring effective coordination to avoid duplicating state program integrity efforts, particularly in the area of auditing provider claims. At the outset of the Medicaid Integrity Program, CMS stressed the need for effective coordination and acknowledged the potential for duplication with states’ ongoing efforts to identify Medicaid overpayments. However, the National Provider Audit Program results—the largest component of the Medicaid Integrity Program—call into question the effectiveness of CMS’s communication, and its ability to avoid duplication with state audit programs. After examining CMS’s program expenditures, we found that overpayments identified by its audit contractors since fiscal year 2009 were not commensurate with its contractors’ costs. From fiscal years 2009 through 2011, CMS authorized 1,663 provider audits in 44 states. 
However, CMS’s reported return on investment from these audits was negative. While its contractors identified $15.2 million in overpayments in fiscal year 2010, the combined cost of the National Provider Audit Program was about $36 million. In addition, CMS reported in 2011 that it was redesigning the National Provider Audit Program to achieve better results. Data limitations—in particular, the use of summary data that states submit to CMS on a quarterly basis—may have hampered the contractors’ ability to identify improper claims beyond what states already identified. It remains to be seen, however, whether CMS’s redesign of the National Provider Audit Program will result in an increase in identified overpayments. CMS’s other core oversight activities—triennial comprehensive state program integrity reviews and annual assessments—are broad in scope and were conceived to provide a basis for the development of appropriate technical assistance. However, we found that much of the information collected from the annual assessments duplicated information collected during triennial reviews. Further, our review of a sample of assessments revealed missing data and a few implausible measures, such as one state reporting over 38 million managed care enrollees. Improved data collection activities and dialogue with states will help CMS ensure that it has complete and reliable state information on which to direct its training and technical assistance resources appropriately. Finally, we found that the Medicaid Integrity Institute appears to promote effective state coordination and collaboration. We reported that states have uniformly praised the institute, and that a special June 2011 session brought together Medicaid program integrity officials and representatives of Medicaid Fraud Control Units—independent state units responsible for investigating and prosecuting Medicaid fraud—in 39 states to improve working relations between these important partners. 
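The negative return on investment noted above is straightforward arithmetic. As a minimal sketch using the fiscal year 2010 figures cited in this statement (the helper function and its name are our own, not CMS's methodology):

```python
def audit_roi(overpayments_identified, program_cost):
    """Net return per dollar of program cost: (recoveries - cost) / cost."""
    return (overpayments_identified - program_cost) / program_cost

# $15.2 million in identified overpayments against about $36 million in
# combined National Provider Audit Program costs for fiscal year 2010:
roi = audit_roi(15.2e6, 36e6)
print(f"{roi:.2f}")  # negative: program costs exceeded identified overpayments
```

Any value below zero indicates that the program spent more than its contractors identified in overpayments, which is the situation described above.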
As we testified in December 2011, CMS’s expanded role in ensuring Medicaid program integrity has presented both challenges to and opportunities for assisting states with their activities to ensure proper payments. We have ongoing work reviewing CMS’s Medicaid program integrity activities that will provide additional information about CMS’s oversight efforts in this area. Chairmen Gowdy and Jordan, this concludes my prepared statement. I would be happy to answer any questions that you or other Members may have. For further information about this statement, please contact Carolyn L. Yocom at (202) 512-7114 or yocomc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Michelle B. Rosenberg, Assistant Director; Eagan Kemp; Drew Long; Peter Mangano; Christina Ritchie; and Hemi Tewarson were key contributors to this statement. Medicaid Program Integrity: Expanded Federal Role Presents Challenges to and Opportunities for Assisting States. GAO-12-288T. Washington, D.C.: December 7, 2011. Fraud Detection Systems: Additional Actions Needed to Support Program Integrity Efforts at Centers for Medicare and Medicaid Services. GAO-11-822T. Washington, D.C.: July 12, 2011. Fraud Detection Systems: Centers for Medicare and Medicaid Services Needs to Ensure More Widespread Use. GAO-11-475. Washington, D.C.: June 30, 2011. Improper Payments: Recent Efforts to Address Improper Payments and Remaining Challenges. GAO-11-575T. Washington, D.C.: April 15, 2011. Medicare and Medicaid Fraud, Waste, and Abuse: Effective Implementation of Recent Laws and Agency Actions Could Help Reduce Improper Payments. GAO-11-409T. Washington, D.C.: March 9, 2011. Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue. GAO-11-318SP. Washington, D.C.: March 1, 2011. High-Risk Series: An Update. GAO-11-278. Washington, D.C.: February 2011. 
Medicaid Managed Care: CMS’s Oversight of States’ Rate Setting Needs Improvement. GAO-10-810. Washington, D.C.: August 4, 2010. Medicaid: Ongoing Federal Oversight of Payments to Offset Uncompensated Hospital Care Costs Is Warranted. GAO-10-69. Washington, D.C.: November 20, 2009. Medicaid: Fraud and Abuse Related to Controlled Substances Identified in Selected States. GAO-09-1004T. Washington, D.C.: September 30, 2009. Medicaid: Fraud and Abuse Related to Controlled Substances Identified in Selected States. GAO-09-957. Washington, D.C.: September 9, 2009. Improper Payments: Progress Made but Challenges Remain in Estimating and Reducing Improper Payments. GAO-09-628T. Washington, D.C.: April 22, 2009. Medicaid: CMS Needs More Information on the Billions of Dollars Spent on Supplemental Payments. GAO-08-614. Washington, D.C.: May 30, 2008. Medicaid Financing: Long-standing Concerns about Inappropriate State Arrangements Support Need for Improved Federal Oversight. GAO-08-650T. Washington, D.C.: April 3, 2008. Medicaid Demonstration Waivers: Recent HHS Approvals Continue to Raise Cost and Oversight Concerns. GAO-08-87. Washington, D.C.: January 31, 2008. Medicaid Financing: Long-standing Concerns about Inappropriate State Arrangements Support Need for Improved Federal Oversight. GAO-08-255T. Washington, D.C.: November 1, 2007. Medicaid Financing: Federal Oversight Initiative Is Consistent with Medicaid Payment Principles but Needs Greater Transparency. GAO-07-214. Washington, D.C.: March 30, 2007. Medicaid Financial Management: Steps Taken to Improve Federal Oversight but Other Actions Needed to Sustain Efforts. GAO-06-705. Washington, D.C.: June 22, 2006. Medicaid Integrity: Implementation of New Program Provides Opportunities for Federal Leadership to Combat Fraud, Waste, and Abuse. GAO-06-578T. Washington, D.C.: March 28, 2006. Medicaid Financing: States’ Use of Contingency-Fee Consultants to Maximize Federal Reimbursements Highlights Need for Improved Federal Oversight. 
GAO-05-748. Washington, D.C.: June 28, 2005. Medicaid Fraud and Abuse: CMS’s Commitment to Helping States Safeguard Program Dollars Is Limited. GAO-05-855T. Washington, D.C.: June 28, 2005. Medicaid Program Integrity: State and Federal Efforts to Prevent and Detect Improper Payments. GAO-04-707. Washington, D.C.: July 16, 2004. Medicaid: State Efforts to Control Improper Payments. GAO-01-662. Washington, D.C.: June 7, 2001. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Medicaid, a joint federal-state health care program, financed care for about 67 million people at a cost of $401 billion in fiscal year 2010. At the federal level, CMS, an agency within the Department of Health and Human Services, is responsible for overseeing the design and operations of states’ Medicaid programs, while the states administer their respective programs’ day-to-day operations. The shared financing arrangement between the federal government and the states presents challenges for program oversight and Medicaid has been on GAO’s list of high-risk programs since 2003, in part, because of concerns about the fiscal management of the program. Our prior work has shown that CMS continues to face challenges overseeing the Medicaid program. Oversight of managed care rate-setting has been inconsistent. In August 2010, GAO reported that the Centers for Medicare & Medicaid Services (CMS) had not ensured that all states were complying with the managed care actuarial soundness requirements that rates be developed in accordance with actuarial principles, appropriate for the population and services, and certified by actuaries. For example, GAO found significant gaps in CMS’s oversight of 2 of the 26 states reviewed—CMS had not reviewed one state’s rates in multiple years and had not completed a full review of another state’s rates since the actuarial soundness requirements became effective. Variation in practices across CMS regional offices contributed to these gaps and other inconsistencies in the agency’s oversight of states’ rate setting. GAO’s previous work also found that CMS’s efforts to ensure the quality of the data used to set rates were generally limited to requiring assurances from states and health plans—efforts that did not provide the agency with enough information to ensure the quality of the data used. 
With limited information on data quality, CMS cannot ensure that states’ managed care rates are appropriate, which places billions of federal and state dollars at risk for misspending. GAO made recommendations to improve CMS’s oversight. Oversight of supplemental payments needs improvement. GAO has reported on varied financing arrangements involving supplemental payments—disproportionate share hospital (DSH) payments states are required to make to certain hospitals, and other non-DSH supplemental payments—that increase federal funding without a commensurate increase in state funding. GAO’s work has found that while a variety of federal legislative and CMS actions have helped curb inappropriate financing arrangements, gaps in oversight remain. For example, while there are federal requirements designed to improve transparency and accountability for state DSH payments, similar requirements are not in place for non-DSH supplemental payments, which may be increasing. From 2006 to 2010, state-reported non-DSH supplemental payments increased from $6.3 billion to $14 billion; however, according to CMS officials, reporting was likely incomplete. GAO made numerous recommendations aimed at improving oversight of supplemental payments. Challenges exist related to CMS’s role ensuring program integrity. In December 2011, GAO testified that the key challenge CMS faced in implementing the statutorily established federal Medicaid Integrity Program was ensuring effective coordination to avoid duplicating state program integrity efforts, particularly in the area of auditing provider claims. GAO found that overpayments identified by its audit contractors since fiscal year 2009 were not commensurate with its contractors’ costs, and CMS reported in 2011 that it was redesigning its audit program to achieve better results. Data limitations may have hampered the contractors’ ability to identify improper claims beyond what states had already identified. 
With regard to CMS’s other core oversight activities—annual assessments and triennial comprehensive state program integrity reviews—GAO found that much of the information collected from the annual assessments duplicated information collected during triennial reviews. Finally, CMS’s Medicaid Integrity Institute, a national training program, appears to promote effective state coordination and collaboration.
Medicare is the federal program that helps pay for a variety of health care services for about 44 million elderly and disabled beneficiaries. Most Medicare beneficiaries participate in Medicare Part B, which helps pay for certain physician, outpatient hospital, laboratory, and other services; medical equipment and supplies, such as oxygen, wheelchairs, hospital beds, walkers, orthotics, prosthetics, and surgical dressings; and certain outpatient drugs. Medicare Part B pays for most medical equipment and supplies using a series of fee schedules. Generally, Medicare has a separate fee schedule for each state that includes most items, and there are upper and lower limits on the allowable amounts that can be paid in different states to reduce variation in what Medicare pays for similar items in different parts of the country. Medicare pays 80 percent of the lesser of the actual charge or the fee schedule amount for the item, and the beneficiary pays the balance. Beneficiaries typically obtain medical equipment and supplies from suppliers, who submit claims to Medicare on beneficiaries’ behalf. Suppliers include medical equipment retail establishments and outpatient providers, such as physicians, home health agencies, and physical therapists. To handle claims processing for medical equipment and supplies, CMS contracts with durable medical equipment (DME) Medicare administrative contractors. Using its authority under the Balanced Budget Act of 1997 (BBA), CMS conducted a competitive bidding demonstration to set Medicare Part B payment rates for groups of selected medical equipment and supplies. CMS contracted with Palmetto Government Benefits Administrators (Palmetto) to administer the competitive bidding demonstration, which was implemented in two locations—the Polk County, Florida, metropolitan statistical area and parts of the San Antonio, Texas, metropolitan statistical area. 
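The 80/20 split of the allowed amount described above can be sketched in a few lines. This is an illustrative simplification (the function name and two-decimal rounding are our own assumptions, not Medicare's claims-processing rules):

```python
def part_b_payment(actual_charge, fee_schedule_amount):
    """Split the Part B allowed amount for an equipment or supply item.

    Medicare pays 80 percent of the lesser of the supplier's actual
    charge or the fee schedule amount; the beneficiary pays the balance.
    """
    allowed = min(actual_charge, fee_schedule_amount)
    medicare_share = round(0.80 * allowed, 2)
    beneficiary_share = round(allowed - medicare_share, 2)
    return medicare_share, beneficiary_share

# A $120 charge against a $100 fee schedule amount: the allowed amount
# is the lesser ($100), so Medicare pays $80 and the beneficiary $20.
```

Under competitive bidding, only the source of the fee changes: the competitively set single payment amount replaces the fee schedule amount in this calculation.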
Two cycles of bidding took place in Polk County, with competitively set fees effective from October 1, 1999, to September 30, 2001, and from October 1, 2001, to September 30, 2002. One cycle of bidding took place in San Antonio, and competitively set fees were effective from February 1, 2001, to December 31, 2002. Bidding and implementation processes were similar at both locations. The demonstration ended on December 31, 2002. In December 2003, the Medicare Prescription Drug, Improvement, and Modernization Act of 2003 (MMA) required CMS to conduct competitive bidding for DME, supplies, off-the-shelf orthotics, and enteral nutrients and related equipment and supplies on a large scale. The MMA required that competition under the program begin in 10 of the largest metropolitan statistical areas in 2007, in 80 of the largest metropolitan statistical areas in 2009, and in other areas after 2009. The law established a new accreditation requirement for all Medicare suppliers of medical equipment and supplies and required CMS to develop financial and quality standards to use in selecting suppliers for the competitive bidding program. The law required CMS to take appropriate steps to ensure that small suppliers have an opportunity to be considered for participation in the competitive bidding program. CMS was required to establish a methodology for selecting bids from suppliers so that enough suppliers were selected to meet demand for competitively bid items within a given area. The law specified that at least two suppliers would be selected in each competitive area. The law also precluded judicial or administrative review of CMS’s decisions to establish payment amounts, award contracts, designate areas for competition, select items and services, phase in implementation, and determine the bidding structure and number of suppliers selected under the competitive bidding program. The MMA required that an advisory committee be established to assist in carrying out the program. 
To help implement the competitive bidding program, CMS published its notice of proposed rulemaking on May 1, 2006, and its final rule on April 10, 2007. CMS's final rule provided more detail on the agency's implementation steps. For example, the law specified that the agency could not award a contract to an entity unless it met applicable financial standards specified by the Secretary of HHS. In its regulation, CMS specified the financial documents that had to be submitted by suppliers to be considered as potential bidders. Similarly, while the law indicated that the agency needed to ensure that small suppliers had an opportunity to participate, the regulation set out a process to include a certain number of small suppliers based on the percentage of those who bid and met all applicable requirements. CMS established the initial round of bidding in 10 metropolitan statistical areas that included Charlotte, N.C.; Cincinnati, Ohio; Cleveland, Ohio; Dallas, Tex.; Kansas City, Mo.; Miami, Fla.; Orlando, Fla.; Pittsburgh, Pa.; Riverside, Calif.; and San Juan, P.R. On April 9, 2007, CMS opened the initial registration of suppliers for the first round of bidding, and the bid period opened on May 15, 2007. As part of its program implementation for the first round, CMS conducted a supplier-education campaign, which included meetings, listserv announcements, a dedicated Web site, and a toll-free help desk. The bid period closed on September 25, 2007. CMS concluded bid evaluations and began the contracting process in March 2008, and the agency plans to announce the first round of winning suppliers in May 2008. Suppliers whose bids were disqualified for not meeting program and bidding requirements will receive a letter informing them of the reason or reasons for their disqualification. 
After the program begins, suppliers whose bids were not chosen generally cannot receive Medicare payment for the competitively bid items in the metropolitan statistical areas included in the competitive bidding program. However, suppliers of certain rental items or oxygen that did not become suppliers in the competitive bidding program could continue to serve their existing Medicare customers. Suppliers that did not have bids chosen in the first round of the program may bid in future rounds of competition. CMS said it plans to conduct a beneficiary-education campaign before the program goes into effect on July 1, 2008. Competitive bidding could reduce Medicare program payments by providing an incentive for suppliers to accept lower payment amounts for items and services to retain their ability to serve beneficiaries and potentially increase their market share. Using competition to obtain market prices in order to set payments for medical equipment and supplies is a new approach for Medicare that is fundamentally different from relying on fee schedules based on suppliers' historical charges to Medicare. Competitive bidding allows the market to provide information to CMS on what amounts suppliers will accept as payment to serve beneficiaries. In its demonstration, CMS used a competitive bidding process to determine which suppliers would be included and the competitively set fees that they would be paid. From among the bidders, the agency and Palmetto selected multiple demonstration suppliers to provide items in each group of related products. Suppliers could submit bids and have winning bids for one or more groups of items. These suppliers were not guaranteed that they would increase their business or serve a specific number of Medicare beneficiaries. Instead, the demonstration suppliers had to compete for beneficiaries' business. 
All demonstration suppliers were reimbursed for each competitively bid item provided to beneficiaries at the demonstration fee schedule amounts. The new fee schedules were based on the winning suppliers' bids for items included in the demonstration. Any Medicare supplier that served demonstration locations could provide items not included in the demonstration to beneficiaries. Evidence from the demonstration suggests that, for the items selected, competition helped set lower payment amounts and resulted in estimated program savings of $7.5 million. The demonstration's independent evaluators also estimated that beneficiaries saved $1.9 million. The demonstration provided evidence to health policy experts, including us and the Medicare Payment Advisory Commission, that competitive bidding for medical equipment and supplies could be a viable way for the program to use market forces to set lower payments without significantly affecting beneficiary access. About a year after the demonstration ended, the MMA required CMS to implement competitive bidding on a large scale and added requirements that suppliers would have to meet to participate in the competitive bidding program. The MMA also required the agency to develop quality standards and required that suppliers be assessed on those standards by accreditation organizations. In addition, the agency had to include a financial and quality assessment of suppliers as part of competitive bidding. The competitive bidding program was structured to operate much like the demonstration. Suppliers submitted bids, along with other materials specified by CMS. The application required suppliers to submit 3 years of financial documents, including income statements, credit reports, and balance sheets. The review of the financial documents was used as part of the criteria for determining which bids to consider. The bidders had to have a valid Medicare supplier billing number and be accredited. 
Suppliers had to submit bids for one or more groups of items. CMS then evaluated the bids based on demand, capacity, and price and chose bids that were at or under a certain amount. CMS estimates that the first round of its competitive bidding program will result in payment amounts that overall average 26 percent less than the current fee schedule amounts for the groups of items included, leading to savings for the Medicare program and its beneficiaries. CMS based its estimate on the price points suppliers submitted with their bids, weighted by market area and past utilization of items in each group. The estimated savings differed by groups of items, with the largest savings of 43 percent estimated for mail-order diabetic supplies. Competitive bidding changes Medicare’s relationship with suppliers. Competitive bidding is designed to reduce payments by allowing CMS to choose suppliers based on their bids—a change from the long-standing policy that any qualified provider can participate in the program. The competitive bidding process was designed to limit the number of suppliers to those whose bids were at or under a certain amount while ensuring that enough suppliers were included to meet beneficiary demand. In the demonstration, 50 percent to 55 percent of the suppliers’ bids were selected. With few exceptions, only the suppliers whose bids were chosen could be reimbursed by Medicare for competitively bid items provided to beneficiaries residing in the demonstration area. Furthermore, competitive bidding could help reduce improper payments because it provides CMS with the authority to select suppliers, based in part on new scrutiny of their financial documents and other application materials. In November 2007, CMS estimated that 10.3 percent of Medicare payments made to suppliers of medical equipment and supplies were improper—more than double the percentage of improper payments to other Medicare providers. 
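The testimony notes that CMS evaluated bids on demand, capacity, and price, chose bids at or under a certain amount, and had to ensure that enough suppliers were selected to meet beneficiary demand. The sketch below illustrates one simple way such a cutoff could arise. It is an assumption-laden simplification, not CMS's actual evaluation methodology, and the supplier names, capacities, bid prices, and demand figure are invented.

```python
def select_winning_bids(bids, demand):
    """Illustrative bid selection: accept the lowest-priced bids until
    the accepted suppliers' combined capacity covers projected demand.

    `bids` is a list of (supplier, capacity, bid_price) tuples. Returns
    the accepted bids and the cutoff price (the highest accepted bid),
    so every winner is at or under that amount.
    """
    winners, covered = [], 0
    for supplier, capacity, price in sorted(bids, key=lambda b: b[2]):
        if covered >= demand:
            break
        winners.append((supplier, capacity, price))
        covered += capacity
    cutoff = winners[-1][2] if winners else None
    return winners, cutoff

# Invented example: four suppliers bidding to serve 100 beneficiaries.
bids = [("A", 40, 90.0), ("B", 50, 80.0), ("C", 30, 95.0), ("D", 60, 85.0)]
winners, cutoff = select_winning_bids(bids, demand=100)
print([s for s, _, _ in winners], cutoff)  # ['B', 'D'] 85.0
```

In this toy example, the two cheapest bids together cover demand, so higher-priced suppliers are excluded; the statutory requirement that at least two suppliers be selected in each area is satisfied here only by construction of the data.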
Providing additional scrutiny of suppliers gives CMS the opportunity to screen out those whose finances do not indicate that they are stable, legitimate businesses. Because of concerns that competitive bidding may prompt suppliers to cut their costs by providing lower-quality items and curtailing services, ensuring quality and access through adequate oversight is critical. Limiting the number of suppliers could potentially affect beneficiaries' access to quality items and services if too few suppliers remain to meet their needs. For some beneficiaries, having a choice of suppliers for some items and services could be important. In our September 2004 report, we evaluated CMS's competitive bidding demonstration and recommended implementation actions for CMS to consider, including how to ensure access to quality items and services for beneficiaries. We indicated that quality assurance steps could include monitoring beneficiary satisfaction, setting standards for suppliers, providing beneficiaries with a choice of suppliers, and selecting winning bidders based on quality, in addition to the dollar amounts of bids. The demonstration projects used several approaches for ensuring quality and services for beneficiaries, including monitoring beneficiary satisfaction and applying quality measures as criteria to select winning suppliers. During the demonstration, CMS and Palmetto used full-time, onsite ombudsmen to respond to complaints, concerns, and questions from beneficiaries, suppliers, and others. In addition, to gauge beneficiary satisfaction, independent evaluators of the demonstration fielded two beneficiary surveys by mail—one for oxygen users and another for users of other products in the demonstration. These surveys contained measures of beneficiaries' assessments of their overall satisfaction, access to equipment, and quality of training and service provided by suppliers. 
Evaluators reported survey results indicating that beneficiaries generally remained satisfied both with the products provided and with their suppliers. The independent evaluators identified some areas for concern, including a decline in the use of portable oxygen and a possible shift away from home deliveries by suppliers, which may have indicated that suppliers were visiting new medical equipment users less frequently to provide routine maintenance. Because we considered careful monitoring of beneficiaries' experiences essential to ensure that any quality or access problems were identified quickly, we recommended that CMS monitor beneficiary satisfaction with the items and services provided under the new competitive bidding program. As competitive bidding expands and affects larger numbers of beneficiaries, problems such as those identified in the evaluations of the demonstration projects could become magnified. Therefore, continued monitoring of beneficiary satisfaction will be critical to identifying problems with suppliers or with items provided to beneficiaries. When such problems are identified in a timely manner, CMS may develop steps to address them. Such monitoring is important, not just when required by statute, but as part of an ongoing effort to ensure that the Medicare program is serving its beneficiaries effectively. CMS agreed with our recommendation and stated that the agency would monitor beneficiary satisfaction with the quality and services provided under the competitive bidding process. CMS also stated in the preamble of its final rule on accreditation of suppliers, published August 18, 2006, that it expects that implementing medical equipment and supplies quality standards and accreditation will lead to increased quality of items and services throughout the industry. 
Furthermore, CMS stated that it plans to provide education to Medicare beneficiaries on the competitive bidding process using approaches such as press releases, fact sheets, and notices. We will be assessing CMS’s implementation of the competitive bidding program. As part of the MMA, we are required to review and report on the program’s impact on suppliers and manufacturers and on quality and access of items and services provided to beneficiaries. As part of this review, we have been specifically requested to assess CMS’s implementation of the program. We believe that competitive bidding could reduce payments for both the Medicare program and beneficiaries. The independent evaluators estimated savings achieved in the demonstration, and CMS has projected reductions in payment amounts in its competitive bidding program for both Medicare and its beneficiaries. In addition, the new financial standards and accreditation process being implemented in conjunction with the competitive bidding program should help improve the financial viability and quality of medical suppliers providing services to Medicare beneficiaries. But competitive bidding also provides incentives that could affect access to services and lower quality of items and services provided to beneficiaries, which need to be monitored carefully. Mr. Chairman, this concludes my prepared statement. I will be happy to answer any questions that you or members of the Subcommittee may have. For further information regarding this testimony, please contact me at (202) 512-7114 or kingk@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Sheila Avruch, Assistant Director; Catina Bradley; Kelli Jones; Kevin Milne; Lisa Rogers; and Timothy Walker made contributions to this statement. Medicare: Improvements Needed to Address Improper Payments for Medical Equipment and Supplies. GAO-07-59. Washington, D.C.: January 31, 2007. 
Medicare Durable Medical Equipment: Class III Devices Do Not Warrant a Distinct Annual Payment Update. GAO-06-62. Washington, D.C.: March 1, 2006. Medicare: More Effective Screening and Stronger Enrollment Standards Needed for Medical Equipment Suppliers. GAO-05-656. Washington, D.C.: September 22, 2005. Medicare: CMS’s Program Safeguards Did Not Deter Growth in Spending for Power Wheelchairs. GAO-05-43. Washington, D.C.: November 17, 2004. Medicare: Past Experience Can Guide Future Competitive Bidding for Medical Equipment and Supplies. GAO-04-765. Washington, D.C.: September 7, 2004. Medicare: CMS Did Not Control Rising Power Wheelchair Spending. GAO-04-716T. Washington, D.C.: April 28, 2004. Medicare: Challenges Remain in Setting Payments for Medical Equipment and Supplies and Covered Drugs. GAO-02-833T. Washington, D.C.: June 12, 2002. Medicare Payments: Use of Revised “Inherent Reasonableness” Process Generally Appropriate. GAO/HEHS-00-79. Washington, D.C.: July 5, 2000. Medicare: Access to Home Oxygen Largely Unchanged; Closer HCFA Monitoring Needed. GAO/HEHS-99-56. Washington, D.C.: April 5, 1999. Medicare: Need to Overhaul Costly Payment System for Medical Equipment and Supplies. GAO/HEHS-98-102. Washington, D.C.: May 12, 1998. Medicare: Home Oxygen Program Warrants Continued HCFA Attention. GAO/HEHS-98-17. Washington, D.C.: November 7, 1997. Medicare: Excessive Payments for Medical Supplies Continue Despite Improvements. GAO/HEHS-95-171. Washington, D.C.: August 8, 1995. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
For more than a decade, GAO has reported that Medicare has paid higher than market rates for medical equipment and supplies provided to beneficiaries under Medicare Part B. Since 1989, Medicare has used fee schedules primarily based on historical charges to set payment amounts. But this approach lacks flexibility to keep pace with market changes and increases costs to the federal government and Medicare's 44 million elderly and disabled beneficiaries. The Balanced Budget Act of 1997 required the Centers for Medicare & Medicaid Services (CMS)--the agency that administers Medicare--to test competitive bidding as a new way to set payments. CMS did this through a demonstration in two locations in which suppliers could compete on the basis of price and other factors for the right to provide their products. The Medicare Prescription Drug, Improvement, and Modernization Act of 2003 (MMA) required CMS to conduct competitive bidding on a large scale and suppliers to obtain accreditation. GAO was asked to describe the effects that competitive bidding could have on Medicare program payments and suppliers and the need for adequate oversight to ensure quality and access for beneficiaries in a competitive bidding environment. This testimony is based primarily on GAO work conducted from May 1994 to January 2007, which GAO updated by interviewing CMS officials and reviewing agency documents. Competitive bidding could reduce Medicare program payments by providing an incentive for suppliers to accept lower payments for items and services to retain their ability to serve beneficiaries and potentially increase their market share. Fundamentally different from fee schedules based on historical charges to Medicare, competitive bidding allows the market to help CMS determine payment amounts. In the demonstration, the new fee schedule amounts were based on the winning suppliers' bids for items included and 50 percent to 55 percent of the bids from suppliers were selected. 
Evidence from CMS's competitive bidding demonstration suggests that competition saved Medicare $7.5 million and saved beneficiaries $1.9 million--without significantly affecting beneficiary access. For the competitive bidding program, CMS required suppliers to obtain accreditation based on quality standards and provide financial documents to participate. This added scrutiny gives CMS the chance to screen out suppliers that may not be stable, legitimate businesses, which could contribute to lower rates of improper payment. CMS also evaluated the bids based on demand, capacity, and price and chose suppliers whose bids were at or under a certain amount. CMS estimates that the first round of its competitive bidding program will result in payment amounts that average 26 percent less than the current fee schedule amounts. Competitive bidding also changes Medicare's relationship with suppliers and departs from Medicare's practice of doing business with any qualified provider, because it is designed to limit the number of suppliers to those whose bids are at or under a certain amount. Because of concerns that competitive bidding may prompt suppliers to cut their costs by providing lower-quality items and curtailing services, ensuring quality and access through adequate oversight is critical for the success of the competitive bidding program. In September 2004, GAO indicated that quality assurance steps could include monitoring beneficiary satisfaction, setting standards for suppliers, giving beneficiaries a choice of suppliers, and selecting winning bidders based on quality and the dollar amount of the bids. As competitive bidding expands, problems that beneficiaries might experience could be magnified. Therefore, continued monitoring of beneficiary satisfaction will be critical to identify problems with suppliers or with items provided to beneficiaries. 
As required in the MMA, GAO will review and report on the competitive bidding program's impact on suppliers and manufacturers and its effect on quality and access for beneficiaries.
Since World War II, many employers have voluntarily sponsored health insurance as a benefit to employees for purposes of recruitment and retention, and many have also extended these benefits to their retirees. The federal tax code gives employers incentives to subsidize health benefits because their contributions can be deducted as a business expense, and these contributions are also not considered taxable income for employees. Employer-sponsored health benefits are regulated under the Employee Retirement Income Security Act of 1974 (ERISA), which gives employers considerable flexibility to manage the cost, design, and extent of health care benefits they provide. Working adults and retirees aged 55 to 64 rely on employer-sponsored coverage as their primary source of health insurance. In 1999, according to the Bureau of the Census' Current Population Survey, employers provided coverage to 78 percent of all working adults aged 55 to 64 and to 57 percent of the 4 million retirees aged 55 to 64. Other retirees in this age group purchased individual (nongroup) health insurance or relied on Medicaid or other public insurance, and a significant portion—17 percent—were uninsured. (See fig. 1.) Retirees aged 65 or older typically rely on Medicare as their primary source of coverage. However, Medicare, which helps pay for hospital and physician expenses for acute care, has gaps in coverage that leave Medicare beneficiaries facing significant out-of-pocket costs. For example, Medicare does not cover most outpatient prescription drugs nor does it cover potentially catastrophic expenses associated with long-term stays in hospitals or skilled nursing facilities. As a result, most Medicare beneficiaries obtain supplemental insurance to cover some of these out-of-pocket costs. In 1999, according to the Current Population Survey, nearly one-third of the 23 million retirees aged 65 or older had Medicare with employer-sponsored supplemental coverage. 
Slightly more than one-third had Medicare with other sources of supplemental coverage. Most often, these beneficiaries had individually purchased supplemental coverage, known as Medigap, but some received assistance from Medicaid. The remaining portion of retirees had Medicare without supplemental coverage. However, many of these are enrolled in Medicare+Choice plans, which provide beneficiaries an alternative to traditional fee-for-service Medicare and typically have nominal cost-sharing requirements and often cover additional services, such as prescription drugs. Data from the 1998 Medicare Current Beneficiary Survey indicate that half of Medicare beneficiaries with Medicare-only coverage were enrolled in a Medicare+Choice plan. (Notes to fig. 1: Of the 23.4 million Americans aged 55 to 64 in 1999, 4.0 million (17 percent) were retired. For these retirees, "public" coverage includes Medicaid, Medicare (for eligible disabled individuals), and health care through the Departments of Defense or Veterans Affairs. Of the 32.6 million Americans aged 65 or older in 1999, 23.4 million (72 percent) were retired, with the remainder either still working or not working for reasons other than retirement. "Medicare without supplemental coverage" includes both traditional fee-for-service Medicare and Medicare+Choice plans because the Current Population Survey does not distinguish between these types of Medicare coverage. "Medicare with other supplemental coverage" includes those with individually purchased Medigap and Medicaid. "Other" includes those without Medicare but receiving employer-sponsored health insurance, Medicaid, or health care through the Departments of Defense or Veterans Affairs.) The health care needs and costs of retired Americans are likely to grow significantly as the baby boom generation nears retirement age. 
As shown in figure 2, the number of individuals aged 55 to 64 will increase by 75 percent by 2020, and the number of people aged 65 or older will double by 2030. The sheer numbers of baby boomers and greater numbers of people reaching age 85 and beyond are expected to have a dramatic effect on the number of people needing long-term and other health care services because the prevalence of disabilities and dependency increases with age. Projections of the number of disabled elderly individuals who will need such care range from 2 to 4 times the current number. Insurance coverage, and access to effective preventive, acute, and long-term care, is particularly important for maintaining the health of older adults. For those individuals needing nursing home or other extensive continuing care, the costs can be substantial. On average, nursing home care costs an individual about $55,000 annually. Individuals needing care and their families pay a significant portion of long-term care costs out-of-pocket. Employer sponsorship of retiree health benefits continues to erode, with about one-third of large employers and few small employers currently offering health benefits to their retirees. Even when employers continue to offer insurance, many have reduced coverage by tightening eligibility requirements, increasing the share of premiums retirees pay for health benefits, or increasing copayments and deductibles. Increasing cost pressures on employers, such as rising premiums and a weakening economy, suggest that erosion in retiree health benefits may continue. The availability of employer-sponsored retiree health benefits has declined during the last decade. Two widely cited surveys—by William M. 
Mercer, Incorporated, and the Kaiser Family Foundation and Health Research and Educational Trust (Kaiser/HRET)—indicated that nearly half of large employers offered retiree health benefits in the early 1990s, but their most recent surveys reported that this proportion has declined to about one-third of large employers. (See fig. 3.) The decline in large employers offering retiree health benefits has continued in recent years, despite several years during the latter part of the 1990s experiencing a strong economy and relatively small premium increases. Large employers are less likely to offer these benefits to Medicare-eligible retirees than to retirees under age 65. These surveys also found that large employers are more likely to sponsor health insurance for retirees than are small firms, with fewer than 10 percent of the latter doing so. While fewer employers sponsor retiree health benefits now, the percentage of retirees obtaining health benefits through an employer has remained relatively stable in recent years. According to our analysis of the Current Population Survey, over half of retirees aged 55 to 64 and about one-third of retirees 65 or older had employer-sponsored coverage in 1999. (See fig. 4.) Since 1994, the percentage of both retirees aged 55 to 64 and those 65 or older with employer-sponsored coverage has varied from year to year by only 1 or 2 percentage points. This stability in coverage may exist in part because employers tend to reduce coverage for future rather than current retirees. Some employers that continue to offer retiree health coverage have adopted several strategies to limit their liability for these costs. These strategies include the following: Restricting eligibility. According to Mercer’s data, among the 36 percent of large employers sponsoring health benefits for retirees younger than 65 in 2000, about 5 percent did so for only selected employees. 
The remaining 31 percent offered retiree health benefits to most retirees. Increasing retirees' share of premiums. The Mercer survey found that as many as one-fourth of employers increased retirees' share of premium contributions within the past 2 years. About 40 percent of large employers that offer health benefits to retirees younger than 65 require those retirees to pay the entire premium—an increase of about 8 percentage points since 1997. Increasing retirees' out-of-pocket costs. Both the Mercer and Kaiser/HRET surveys found that more than 10 percent of employers recently increased retirees' potential out-of-pocket costs for deductibles, coinsurance, and copayments. In particular, the Kaiser/HRET survey reported that one-third of employers have increased the amount that retirees pay for prescription drugs within the past 2 years. Limiting future commitments. The 1999 Kaiser/HRET survey found that in the previous 2 years, 35 percent of large firms offering retiree health benefits limited their future financial commitment by implementing a cap on projected contributions for these benefits. Benefit consultants we interviewed stated that employers typically set their cap prospectively at a level higher than current spending, and if spending approaches the cap, they can either reduce benefits to stay within the cap or raise the cap. Some employers are considering, but few have implemented, a more fundamental change that would shift retiree health benefits to a defined contribution plan. Under a defined contribution plan, an employer directly provides each retiree with a fixed amount of money to purchase insurance coverage, either in the individual market or through a choice of plans offered by the employer. The individual is then responsible for the difference between the employer's contribution and the selected plan's total premium. Benefit consultants have reported that many employers would prefer to move toward a defined contribution approach. 
However, several issues, such as retirees' readiness to assume responsibility for managing their own health benefits and contractual bargaining agreements with union plans, could limit employers' ability to make such a fundamental change. Increasing economic pressures and evolving demographic trends could lead employers to reevaluate their provision of retiree health benefits and could result in further erosion of benefits. The following are contributing factors: Health insurance premium increases, which were less than the general inflation rate from 1995 to 1997, began to rise faster than general inflation in 1998 and were about 6 to 8 percentage points above the general inflation rate in 2001. The weakening economy may lead employers to reevaluate employee salary and benefit levels. Specifically, the nation's gross domestic product increased at an annual rate of 2.4 percent in the second quarter of 2001, slower than the 4.2 percent and 5.0 percent growth in 1999 and 2000. Also, the nation's unemployment rate has gradually but steadily increased to 4.9 percent as of September 2001 after reaching a historic low of 3.9 percent 1 year earlier. Many economists expect a further weakening of the economy, at least in the short term, as a result of the September 11 terrorist attacks. The aging of the baby boom generation will increase the proportion and number of Americans of retirement age, leading some employers to have a larger number of retirees for whom they provide coverage but comparatively fewer active workers to subsidize these benefits. Other factors have increased employers' uncertainty about their future role in providing retiree health benefits, but their implications are less clear. For example, if a proposed outpatient prescription drug benefit were added to Medicare, some employers could redesign their coverage to supplement the Medicare benefit, while others could choose to reduce or eliminate drug coverage. 
General workforce trends could also affect the availability of retiree health benefits. While some anecdotal information suggests increasing mobility of the workforce with fewer long-term job attachments, the data on this trend are mixed. Nonetheless, the percentage of workers with 20 or more years with a current employer has declined in recent decades and could indicate that fewer employees are likely to be eligible for retiree benefits that are often based on longevity with an employer. In addition, a March 2001 ruling in the Third U.S. Circuit Court of Appeals found an employer—Erie County, Pennsylvania—in violation of the Age Discrimination in Employment Act (ADEA) because it offered a benefit for Medicare-eligible retirees that the District Court found to be inferior to the benefit offered retirees not yet eligible for Medicare. To what extent the decision will lead to limitations on employers’ flexibility in designing their retiree health benefits, and therefore discourage employers from offering such benefits, remains uncertain. This will depend, in part, on whether other circuit courts adopt similar interpretations of ADEA and which differences in benefits employers provide to non-Medicare-eligible and Medicare-eligible retirees are regarded as potential age-discrimination violations. The Equal Employment Opportunity Commission (EEOC) had initially said it would consider employers’ reducing or eliminating retiree health benefits on the basis of a person’s age or Medicare eligibility an ADEA violation. However, recognizing concerns raised by employers and unions that this decision could have adverse consequences on the availability of retiree health benefits, EEOC rescinded this policy statement on August 17, 2001. It is considering alternative policies to ensure that health benefits provided to Medicare-eligible retirees are consistent with ADEA without adversely affecting employers’ sponsorship of retiree health benefits. 
At an age when their health care needs are likely to grow, retirees who lose access to employer-sponsored coverage may face limited coverage alternatives, and those who are unable to obtain coverage may do without or begin to rely on public programs. Some federal laws guarantee access to alternative sources of coverage to both retirees under 65 and those eligible for Medicare, but these options may be costly or limited, particularly for individuals in poor health. Apart from whether employer-provided retiree health coverage is available, the potential financial burden of long-term care poses a separate problem. Medicare and the private insurance available to most retirees do not typically cover costs of long-term care services that are increasingly needed as the prevalence of disability grows with advancing age. Thus, paying for these services may present a significant and growing financial burden for many individuals and for public health care programs. Employers have been the predominant source of health coverage for most working adults. Although more than half of retirees report that they intend to continue working, the jobs they take are often part-time, or they are self-employed, and neither situation is likely to offer health benefits. Some individuals retire because of declining health—more than one-fifth of retirees aged 55 to 64 report being in fair or poor health—which further highlights their need for health insurance coverage. Therefore, even in retirement, over half of those aged 55 to 64 in 1999 continued to rely on health insurance either from their former employer or their spouse’s employer. However, retirees without access to employer-sponsored coverage must either seek an alternative source of health insurance or become uninsured. Individuals whose jobs provided health benefits that ended at retirement may continue temporary coverage through their employer for up to 18 months under provisions enacted as part of COBRA. 
But COBRA coverage may be an expensive alternative because the employer is not required to pay any portion of the premium and may charge the enrollee up to 102 percent of the group rate. The individual insurance market may be an option for some retirees until they become eligible for Medicare, but this alternative can be costly as well. Unlike the employer-sponsored market, where the price for coverage is based on risk characteristics of the entire group, premium prices in the individual insurance market in most states are based on the characteristics of each applicant, such as age, gender, geographic area, tobacco use, and health status. For example, premiums charged a 60-year-old man may be 2-1/2 times to nearly 4 times higher than those charged a 30-year-old man. For eligible individuals leaving group coverage, the Health Insurance Portability and Accountability Act of 1996 (HIPAA) guarantees access to at least two individual insurance policies or an alternative such as a state high-risk pool, regardless of health status and without exclusions. Nevertheless, the premiums faced by retirees eligible for HIPAA protections, as well as by other retirees who must rely on the individual insurance market for coverage, may be substantially higher than those charged to healthier or younger individuals and may be cost-prohibitive. This is because retirees are more likely than working adults of the same age to be in fair or poor health. Unless they are guaranteed coverage by HIPAA, individuals with serious health conditions such as heart disease are virtually always denied coverage, and those with other, non-life-threatening conditions such as chronic back pain also may be excluded from coverage. Under a group plan, these individuals cannot be denied coverage, nor can they be required to pay a higher premium than others in the plan, and specific conditions can only be temporarily excluded from coverage. 
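The premium rules just described amount to simple arithmetic. The sketch below is illustrative only: the $400 monthly group rate and the base premium are hypothetical figures chosen to show the 102-percent COBRA cap and the 2-1/2-to-4-fold age-rating spread cited in the testimony.

```python
def cobra_max_premium(group_rate):
    """Maximum monthly COBRA premium: up to 102 percent of the group rate,
    since the employer need not pay any portion for a COBRA enrollee."""
    return group_rate * 1.02

def individual_market_range(base_premium, low_mult=2.5, high_mult=4.0):
    """Hypothetical individual-market premium range for an older applicant,
    using the 2-1/2x to 4x spread cited for a 60- vs. 30-year-old man."""
    return (base_premium * low_mult, base_premium * high_mult)

# Hypothetical $400/month group rate and $200/month base premium.
print(cobra_max_premium(400))
print(individual_market_range(200))
```

A retiree weighing COBRA against the individual market would compare these figures directly, since neither option carries an employer subsidy.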
Although Medicare is the primary source of coverage for retirees 65 years or older, gaps in Medicare coverage mean this population may have high out-of-pocket costs for health care. For example, Medicare does not typically cover outpatient prescription drugs, and it primarily covers acute care but not long-term hospital and skilled nursing facility stays. Most Medicare-eligible retirees obtain supplemental coverage to pay some of the costs not covered by Medicare. Nearly one-third of Medicare-eligible retirees obtain this supplemental coverage from an employer, and most other Medicare beneficiaries seek other sources of supplemental coverage, such as Medigap or Medicaid, or participate in Medicare+Choice plans, which typically have low cost-sharing requirements and cover services such as prescription drugs that traditional Medicare does not cover. Retirees can purchase private individual Medigap coverage, but this coverage may cost more or be less comprehensive than typical employer-sponsored health coverage. Medigap policies are widely available to 65-year-old Medicare beneficiaries during an initial 6-month open-enrollment period guaranteed by federal law. Beneficiaries can select from among 10 standard policy types. Most purchasers buy mid-level policies that cover Medicare’s cost-sharing requirements and selected other benefits, but not prescriptions. Relatively few Medigap purchasers (8 percent of those with a standardized Medigap policy) have bought the standardized plans that include prescription drug coverage. Whether or not they include prescription drug coverage, Medigap policies can be expensive—the average annual Medigap premium per covered life was more than $1,300 in 1999—and still leave retirees with significant out-of-pocket costs. Medigap policies that provide prescription drug coverage average more than $1,600 compared with about $1,150 for standardized plans without prescription drug coverage. 
However, even the standardized coverage for prescription drugs pays less than half of beneficiaries’ drug costs, and catastrophic prescription drug expenses are not covered. Access to Medigap policies may be more limited for beneficiaries who are not in the initial open-enrollment period or otherwise eligible for federally guaranteed access under certain other circumstances. For example, federal law provides certain guarantees to ensure an individual has access to Medigap insurance if an employer eliminates or reduces coverage. In these cases, the individuals are guaranteed access to 4 of the 10 standardized Medigap policies, regardless of their health status, but none of these 4 guaranteed plans includes prescription drug coverage. Although long-term care is a growing need for the retiree population, Medicare and private insurance (through employers or purchased individually) play a small role in financing this care. Public programs, primarily Medicaid, and individuals’ out-of-pocket payments are the primary funding sources for nursing home and home and community-based care for those needing long-term care. In 1999, spending for nursing home and home health care was about $134 billion. Medicaid, which is generally only available after individuals have become nearly impoverished by spending down their assets, paid the largest share of these costs—nearly 44 percent. Individuals needing care and their families paid for almost 25 percent of these expenditures out-of-pocket. Medicare has traditionally covered primarily acute care, but during the 1990s it increasingly covered some long-term home health care services. In 1999, Medicare paid nearly 14 percent of nursing home and home health care costs. (See fig. 5.) 
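The financing shares behind figure 5 translate into rough dollar amounts as follows. This is a back-of-the-envelope sketch using the approximate percentages from the testimony (nearly 44 percent Medicaid, almost 25 percent out-of-pocket, nearly 14 percent Medicare, and the 10 percent private-insurance share the testimony also cites); the unlisted remainder reflects other public and private sources.

```python
total_1999 = 134  # billions of dollars: 1999 nursing home and home health care spending

# Approximate shares of long-term care financing, as cited in the testimony.
shares = {
    "Medicaid": 0.44,
    "Out-of-pocket": 0.25,
    "Medicare": 0.14,
    "Private insurance": 0.10,
}

for source, share in shares.items():
    print(f"{source}: about ${total_1999 * share:.0f} billion")

# The remaining share covers other public and private sources.
print(f"Other sources: about {1 - sum(shares.values()):.0%}")
```

The arithmetic makes concrete why Medicaid, at roughly $59 billion, dwarfed private insurance as a payer for long-term care in 1999.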
While private long-term care insurance is viewed as a possible way to reduce catastrophic financial risk for the elderly and relieve some of the financing burden now shouldered by public programs, private insurance (through both long-term care insurance and traditional health insurance) accounted for a small share—10 percent in 1999—of long-term care spending. Most long-term care insurance is purchased individually, with premiums depending on the beneficiary’s age at purchase. Premiums for a 65-year-old are typically about $1,000 per year and may be much higher for more generous coverage or older buyers. The private long-term care insurance market remains small, and few employers offer this insurance as a benefit to employees. Less than 10 percent of individuals 65 or older and an even lower percentage of those younger than 65 have purchased long-term care insurance. Most private long-term care insurance is bought by individuals, but some employers offer employees a voluntary group policy option for long-term care insurance. Only about one-fourth of long-term care insurance policies sold as of 2000 were group offerings, according to the American Council of Life Insurers. Even when employers offer long-term care insurance, they usually do not subsidize any of the costs. In 2000, the Congress passed legislation to offer optional group long-term care insurance to federal employees, retirees, and their relatives beginning by fiscal year 2003, with eligible individuals paying the full premium for the insurance. This initiative will likely establish the largest group offering of long-term care insurance and could encourage further expansion of this market. Mr. Chairman, this concludes my prepared statement. I will be happy to answer any questions that you or Members of the Subcommittee may have. For more information regarding this testimony, please contact Kathryn G. Allen at (202) 512-7118 or John Dicken at (202) 512-7043. 
Susan Anthony and Carmen Rivera-Lowitt also made key contributions to this statement.
In 1999, about 10 million Americans aged 55 and older relied on employer-sponsored health benefits, either as their primary coverage until they became eligible for Medicare or to help pay for out-of-pocket expenses not covered by Medicare. However, the number of employers offering these benefits has declined considerably during the past decade. Despite the recent strong economy and the relatively low increases in health insurance premiums during the late 1990s, the availability of employer-sponsored health benefits for retirees has declined. Two widely cited surveys found that only about one-third of large employers and less than 10 percent of small employers offer such benefits. Alternative sources of health care coverage for retirees may be costly, limited, or unavailable. Retirees not yet 65 may be eligible for coverage from a spouse's employer or from their former employer. Other retirees not yet 65 may seek coverage in the individual insurance market, but these policies can be expensive or may offer more limited coverage, especially for those with existing health problems. Nearly one-third of retirees eligible for Medicare have employer-sponsored supplemental coverage, but many others buy private supplemental coverage known as "Medigap." Medigap premiums averaged more than $1,300 per year in 1999, and policies that include prescription drug coverage cost considerably more. Neither Medicare nor private insurance covers a significant share of long-term care expenses.
The TANF block grant was created by the Personal Responsibility and Work Opportunity Reconciliation Act of 1996 (PRWORA), which changed the federal role in financing welfare programs in states. PRWORA ended families’ entitlement to cash assistance by replacing the Aid to Families with Dependent Children (AFDC) program—essentially a federal-state matching grant—with the TANF block grant, a $16.5 billion per year fixed federal funding stream to states. PRWORA coupled the block grant with an MOE provision, which requires states to maintain a significant portion of their own historic financial commitment to their welfare programs as a condition of receiving their full TANF allotments. This helped to ensure that states remained strong fiscal partners. PRWORA provided states greater flexibility and responsibility for administering and implementing their welfare programs. Importantly, with the fixed federal funding stream, states must assume the fiscal risks in the event of a recession or increased program costs. In addition to increased flexibility and the new fiscal structure, PRWORA charged HHS with oversight of states’ TANF programs and gave HHS new responsibilities for tracking state performance. PRWORA also set federal requirements that states must impose on many families receiving cash or other ongoing assistance, including time limits and work requirements for adults. At the same time, the law restricted HHS’s authority to regulate states’ programs and reduced the number of federal employees involved in the program. TANF and MOE spending is one component of federal, state, and local spending on a range of programs aimed at serving low-income and needy populations, which in this report we will refer to as welfare-related spending. 
In state fiscal year (SFY) 2004, among the nine states in our study, TANF and MOE spending represented from 12 to 28 percent of all federal, state, and local spending flowing through the state budgets for welfare-related services outside of the health spending captured in our survey. (See app. II.) Outside of TANF and MOE, welfare-related spending provides a wide range of services and comes from a variety of federal, state, and local sources. Transportation subsidies, rental assistance, child care subsidies, heating and energy assistance, and low-income tax preferences, among others, can all serve low-income and needy populations and are funded through multiple federal agencies, such as the Department of Housing and Urban Development, HHS, and the Department of Transportation, as well as by state and local governments. In 2001, we examined welfare-related spending in 10 selected states before and after the passage of welfare reform, from SFY 1995 to SFY 2000. We reported that because both the flexible federal TANF funds and the required MOE remain fixed regardless of the number of people served with these funds, and because cash assistance caseloads had declined dramatically since the mid-1990s, states had additional budgetary resources available for a variety of welfare-related purposes. From SFY 1995 to SFY 2000, while total spending levels for all welfare-related services generally increased, states began using these additional budgetary resources to enhance spending for noncash services, such as training, education, and a range of other welfare-related spending—an allowable practice under TANF. As state TANF programs and welfare-related spending have evolved since welfare reform, the nation’s welfare system now looks quite different than it did under AFDC. Our previous findings focused on a period of sustained economic growth and increasing tax collections in states. 
From 1995 to 2000, state government tax collections grew in inflation-adjusted terms, and unemployment and poverty rates were generally falling, although there was some variation among the nine states we studied. Overall, these circumstances suggest that states were generally faced with declining spending demands from low-income populations and increasing fiscal resources to meet those demands. In 2001, however, the nation experienced a recession from March through November, and a contrasting set of economic and fiscal circumstances developed. A period of rising unemployment and declining state tax collections ensued. In seven of the nine states, poverty rates that fell from 1995 to 2000 increased from 2000 to 2004, as shown in figure 2. These shifts suggest that, in general, states were faced with an increased demand for services aimed at low-income populations at a time when fewer fiscal resources were available to meet these demands after the recession. According to data provided by the states, total welfare-related spending rose over the decade in each of the nine states. Health spending accelerated as the decade progressed, increasing faster over the decade than nonhealth spending, which varied somewhat by state and period. Health and nonhealth spending from both federal and state sources increased over the decade, a reflection of the strong fiscal partnership between the federal government and states in supporting low-income individuals. However, while the federal share of health care spending remained fairly consistent over the decade, the federal share of nonhealth spending varied over time. In the nine states, spending for low-income people for health and nonhealth services increased over the decade since welfare reform. These spending levels, shown in figure 3 for each of the three points in time we examined, include federal and state funds that flowed through state budgets for programs targeting low-income and at-risk individuals. 
The figure excludes spending for the elderly, for those in institutions, and for long-term care. In general, health spending accelerated over the decade. The median growth rate increased from 11 percent in the first period (1995 to 2000) to 40 percent in the second period (2000 to 2004), as shown in table 1. Colorado and Oregon were exceptions, with larger increases during the strong economy of the late 1990s. States often cited increases in eligible populations and rising pharmaceutical and service delivery costs as the primary reasons for the rapid spending growth in this area. The health spending we examined in this report included state spending from federal and state sources for any health care program for working age adults and children, excluding long-term and institutional care. While this spending included such services as public health initiatives (outreach, prevention, diagnosis, care, and children’s vaccines), most funds were spent on the State Children’s Health Insurance Program (SCHIP) and the Medicaid program. Medicaid is a complex program that serves many different low-income populations. Nationwide, children and their families constitute 75 percent of those served but only account for 30 percent of expenditures, while those with disabilities represent 16 percent of beneficiaries and 45 percent of expenditures. Between 1995 and 1997, the number of able-bodied adults and children on Medicaid fell, which may be due in part to changes in the relationship between TANF and Medicaid triggered by the 1996 welfare legislation. At the same time, states were starting to enroll low-income children in SCHIP, a new federal-state partnership created by Congress in 1997. SCHIP extends health insurance to low-income children whose families earn too much to be eligible for Medicaid but are unable to obtain insurance another way, either through an employer or outright purchase of private insurance. 
Nationwide, enrollments in Medicaid and SCHIP generally increased from 2000 to 2004. Even so, not all low-income individuals are eligible for Medicaid or SCHIP, and some of those who are eligible are not enrolled for a variety of reasons, including lack of information about the program or choosing not to enroll. Because health spending grew faster than nonhealth spending since 1995, it now consumes a greater share of welfare-related spending in the state budgets we examined, as shown in table 2. In eight of our nine states, health care accounted for at least 45 percent of welfare-related spending for low-income programs from federal and state sources by 2004. This mirrors a nationwide trend of rising health costs, raising concerns about growing government expenses for health programs. Nonhealth spending also generally increased after 1995, although at a slower rate and with more variation among the states and time periods, as shown in table 3. Nonhealth spending includes the following categories: cash assistance, employment services and training, work and other supports, and aid for the at-risk. Spending in these combined categories occurs through a wide variety of federal and state programs that can serve low-income and needy populations. While we found that spending increased overall when looking at all these programs combined, some differences emerged when compared with health spending. Since 1995, median nonhealth spending increased 17 percent, in contrast to the 61 percent median growth rate for health. Because nonhealth spending includes so many different federal and state programs and services, it is difficult to clearly identify factors that explain spending changes overall. However, our previous work and our discussions with state officials show that the spending outcomes reflect a multitude of factors, including changes in the numbers and needs of eligible populations and in federal and state policy and fiscal situations. 
We provide more information on the factors affecting spending changes in this area in the next section. Federal and state governments are important fiscal partners when it comes to providing many types of assistance to low-income and at-risk individuals. Our analysis of state expenditures showed that the spending increases evident since 1995 were substantially supported by both federal and state funds in the health and nonhealth areas in both time periods. (For more details on federal and state spending, see app. III.) The state contribution is particularly noteworthy during the second time period, when states experienced declining revenues. States generally are required to balance their operating budgets and may need to raise revenues or reduce spending to do so. At the same time, many of the key federal programs for low-income individuals are structured in a way that helps ensure that states maintain their financial commitment to these programs in order to receive continued federal support. In the health area, federal and state funds spent on health services grew at roughly the same rate over the decade, resulting in a fairly stable split in federal and state shares of spending over time. As shown in table 4, in 2004, the median federal share of health spending totaled 58 percent, which would correspond to a state share of 42 percent. The higher federal shares in some states, such as Louisiana, may be explained in part by the greater role the federal government plays in funding Medicaid costs in states with lower per capita incomes. At the same time, because the health spending data include services other than Medicaid, the federal share will not correspond directly to the share under Medicaid. In the nonhealth spending area, we also found spending increases generally supported by both federal and state funds, although the federal share showed more variation over the two time periods for nonhealth than for health spending. 
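The observation that the federal government shoulders more of Medicaid's costs in lower-income states reflects the federal medical assistance percentage (FMAP). As a sketch, the statutory formula ties a state's federal share to the square of its per capita income (PCI) relative to the national average, with a 50-percent floor and an 83-percent ceiling; the income figures below are hypothetical and indexed to a national average of 100.

```python
def fmap(state_pci, national_pci, floor=0.50, ceiling=0.83):
    """Federal medical assistance percentage: the federal share of a state's
    Medicaid spending falls as state per capita income rises, bounded by a
    statutory floor and ceiling."""
    share = 1.0 - 0.45 * (state_pci / national_pci) ** 2
    return min(max(share, floor), ceiling)

# Hypothetical per capita incomes, national average indexed to 100.
print(fmap(100, 100))  # average-income state: roughly a 55 percent federal share
print(fmap(130, 100))  # high-income state is clamped to the 50 percent floor
print(fmap(75, 100))   # lower-income state receives a higher federal share
```

This is why a lower-income state such as Louisiana can show a federal share well above the 58 percent median reported in table 4, even though the reported figures also include non-Medicaid health spending.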
As shown in table 5, the median federal share fell in 2000 (from 50 percent to 44 percent), possibly as states responded to higher state revenues during the late 1990s. In 2004, the median federal share rose to 49 percent, possibly as a reflection of the tighter fiscal conditions states faced in this time period. In addition, the federal share of nonhealth spending became more uniform among the states over the decade. The federal share ranged from 33 to 73 percent in 1995, tightening to a range of 43 to 61 percent by 2004. It is important to highlight the distinction between the health and nonhealth areas again when discussing the federal and state shares of spending. In contrast to the health area, where much of federal and state financial participation is guided by federal Medicaid statute and regulations, nonhealth spending—comprising numerous federal and state programs—is guided by an array of different laws and rules about federal and state financial participation. Specifically, supports for low-income people vary in terms of whether they are funded with federal funds, state-local funds, or a combination. While several key funding sources, such as the TANF block grant, foster care, and food stamp administrative costs, carry state matching or MOE requirements, others do not. In these cases, funding decisions are left entirely up to states. The overall increases in spending for nonhealth services in the nine states mask some substantial shifts over the decade in how states spent federal and state funds for low-income people. Two trends emerged. First, spending shifted away from cash assistance programs toward other types of aid and services (excluding health). Second, this expansion in noncash spending was strongest from 1995 to 2000, and spending increased further—but more slowly—from 2000 to 2004. 
Spending for work and other supports, particularly child care and development, was a key growth area in several states, reflecting state efforts to support welfare reforms that focused on employment. Spending on the various nonhealth services varied among the states, reflecting to some extent different state spending priorities. In general, states reported that increases in these areas were driven by policy changes to welfare and other social programs, increased program costs and demand, and increases in federal grants. By 2004, the nonhealth portion of state spending (from federal and state sources) for low-income services looked substantially different than it did in 1995. In all of the nine states, the total portfolio of nonhealth services shifted away from cash assistance toward other programs, as demonstrated in figure 4. For example, in New York, 33 percent of total nonhealth spending was devoted to cash assistance in 1995, compared with 13 percent in 2004. Other shifts among the noncash assistance categories varied by state and period, reflecting differing spending priorities. For example, work and other supports increased from 39 percent to 58 percent of the welfare-related budget in Wisconsin over the decade, while in Louisiana, the same category declined from 37 percent to 31 percent. Figure 4 also shows the relative size of the nonhealth categories. Employment services and training remained the smallest category in most of these states over the decade. Although cash assistance began the decade as a larger category in many states, by 2004 it was generally the second smallest category. Over the decade, work and other supports grew to become the second largest category in most states, and aid for the at-risk generally remained or became the largest category. The aid for the at-risk category includes spending for child welfare, juvenile justice, mental health, and other related services. 
Cash assistance spending declined dramatically from 1995 to 2000 in all case study states and varied from 2000 to 2004, as shown in table 6. Although some states increased spending after 2000, all nine states experienced at least a 50 percent decline in cash assistance spending over the decade. In all of the states, a dramatic decrease in cash assistance caseloads led to the decline in spending in this area, particularly from 1995 to 2000. In our previous work, we found that several factors have been cited to explain the large reductions in cash assistance caseloads. These include changes in welfare programs; the strong economy of the late 1990s; and other policy changes, such as expansions of the federal earned income credit (EIC) and increased federal spending for child care subsidies. One state attributed the more recent caseload increases to the economy. Many state officials also noted changes in the characteristics of those who remained on the welfare rolls. They told us that after the shift to a work-first approach, the caseloads stabilized as the most employable recipients transitioned into the workforce. They said that the remaining cash assistance recipients tend to have multiple barriers to employment and require a wider and costlier range of services to enable them to become self-sufficient. In general, spending for other noncash categories combined (employment services and training, work and other supports, and aid for the at-risk) increased significantly after welfare reform, but slowed from 2000 to 2004 in most of these states, as shown in table 7. While most states increased spending after 2000, some states cited challenges in maintaining their initial rate of growth as their fiscal situations tightened. In contrast to cash assistance spending, which declined sharply during the first period, noncash expenditures rose dramatically in the first period and generally continued to rise during the second period, but at a slower rate. 
State spending patterns for training and education varied, although one trend related to welfare reform was evident. As shown in table 8, six states expanded employment services and training spending after 1995, in part to meet the increased employment focus of their TANF programs. Then five of these states cut this spending back as state revenues declined after 2000. For example, as cash assistance caseloads declined in Wisconsin from 1995 to 2000, the state more than doubled spending for employment services and training. However, as cash assistance caseloads increased after 2000, spending for employment services and training was reduced 44 percent. Even so, spending for employment services and training ended the decade more than 30 percent higher than it began. In addition, in California, a large amount of TANF funds was moved into the training and education area from 1995 to 2000, but some of these funds were removed after 2000. In contrast, two states reduced their training and education spending during the period immediately following welfare reform, but expanded this spending after 2000. Over the decade, state spending generally increased by a higher percentage for work and other supports than for any other nonhealth category. In most case study states, this was the second largest nonhealth category in 2004. In each period, most of these states increased spending in this area, although the median increase was much smaller after 2000, as shown in table 9. These expansions are consistent with our previous work, which found that many states expanded the availability of supports that promote employment and economic independence for low-income families. Louisiana was the only state to experience a substantial spending decline in this category from 1995 to 2000. Louisiana did increase spending in two areas in this category, including child care and development, as discussed below. 
However, these increases were more than offset by other spending areas that decreased, including administrative costs for food stamps, associated with declining food stamp caseloads in the state. After 2000, spending for work and other supports increased as Louisiana invested TANF funds in additional programs, particularly prekindergarten. In contrast, Colorado increased spending on refundable tax credits for working families during the robust economic growth of the 1995-2000 period, but decreased spending slightly from 2000 to 2004. The child care and development area was the main driver of spending changes in this category in many of these states, with high rates of growth as shown in table 10. In five states, more than half of all growth in the category was due to increased spending for child care and development. Several states reported that child care continued to be in demand, even as TANF caseloads fell, because many working parents relied on subsidized child care to help them keep their jobs. While most spending in this area is focused on child care subsidy programs, some states also increased spending for prekindergarten and other child development programs. As part of the 1996 welfare reform, the federal government increased funding to states through the Child Care and Development Fund (CCDF) to subsidize child care assistance for low-income families who were working or preparing for work through education and training, with a special emphasis on families working their way off welfare. In addition to CCDF, funds allocated by the nine states for child care or development included TANF, MOE, and other funds. Table 10 shows that substantial investments of these funds for child care and development accompanied welfare reforms in the first period and continued, in almost all of these states at a slower rate of increase, in the second period. 
Other areas of expansion included some entitlement or federal grant programs, such as tax credits, housing, or food assistance. Four states (Colorado, Maryland, New York, and Wisconsin) began or expanded state EIC programs to complement the federal EIC program, which offers work incentives in the form of a tax credit based on income. Food assistance spending increased in most states due to increased administrative costs related to expanding food stamp benefit rolls, although the benefit costs are not reflected here. Two states told us they had engaged in publicity campaigns to encourage eligible recipients to sign up for federally funded programs such as food stamps or EIC. Spending on aid for the at-risk, generally the largest nonhealth category, increased over the decade, although growth slowed considerably in most states after 2000, as shown in table 11. This category includes spending for child welfare, mental health, developmental disabilities, juvenile justice, substance abuse prevention and treatment, and related spending. Among these, the largest areas of spending were child welfare, mental health, and developmental disabilities. Officials in several states told us that there were increases in the costs of providing services for these three areas, as well as increased demand for child welfare and other services. However, several states had growth rates under 10 percent after 2000, because of decreases in spending areas such as juvenile justice, substance abuse, and developmental disabilities. Child welfare spending increased considerably in most of the nine states over the decade, primarily from 1995 to 2000, as shown in table 12. This includes spending for key federal/state partnership programs such as foster care, adoption assistance, and other child welfare services. Nationwide, child welfare systems investigate abuse and neglect, provide placements to children outside their homes, and deliver services to help keep families together. 
TANF and MOE funds played an important role in four states, which increased TANF-related spending until it accounted for 19 to 32 percent of child welfare spending by 2004. The combination of a substantial decline in traditional cash assistance caseloads, new flexibilities under PRWORA, and states' implementation of their welfare reforms resulted in a changing role for TANF and MOE dollars across state budgets. The change from the previous welfare program—with its open-ended federal funding that matched state expenditures for monthly cash assistance—to the federal TANF block grant—with fixed federal funding and a specified level of state spending—gave states broader discretion over the types of services and activities to fund toward welfare reform goals. This change also gave states broader discretion over the amount of federal TANF and state MOE funds to spend in a given year, subject to minimum levels required under the MOE provisions. Under this new fiscal framework, the landscape of spending for traditional welfare funds has changed substantially since welfare reform. TANF and MOE dollars played an increasing role in state budgets outside of traditional cash assistance payments, funding programs to encourage work, help former welfare recipients keep their jobs, and provide services to needy families that did not necessarily ever receive welfare payments. However, with this shift, gaps arose in the information gathered at the federal level to ensure state accountability. Existing oversight mechanisms focus on cash assistance, which no longer accounts for the majority of TANF and MOE spending. As a result, there is little information on the numbers of people served by TANF-funded programs, meaning there is no real measure of workload or of how services supported by TANF and MOE funds meet the goals of welfare reform. Since welfare reform, states have increasingly spent TANF and MOE funds for aid and services outside of traditional cash assistance payments.
Before welfare reform each of our study states spent some federal and state AFDC-related funds in spending categories other than cash assistance. However, by 2004, most of the states had significantly increased their use of TANF and MOE funds in these noncash categories compared with the level of spending in 1995, as shown in figure 5. The TANF block grant played a critical role in this shift in spending priorities. Under the block grant structure, states’ fixed annual TANF allotments did not change as cash assistance caseloads fell. In addition, states still had to meet maintenance of effort requirements by spending at least 75 percent of the amount they had spent in the past when caseloads were much higher. States faced choices about how to use these funds, including whether to leave some amount of their annual grant in reserve at the U.S. Treasury to help them meet any future increases in welfare costs. TANF funds not spent by states accumulate as balances in the U.S. Treasury. Our previous work showed that several trends emerged in this new welfare environment. First, many states increased their efforts to engage more welfare families in work or work-related activities in keeping with key TANF program requirements. More specifically, to avoid financial penalties, states were to meet specified work participation rates by engaging parents receiving cash assistance in work-related activities. States generally met these rates, in part because of adjustments made in the target rates due to the drop in caseloads and other provisions that allowed states to serve some families without work requirements. In strengthening their welfare-to-work programs, states emphasized the importance of work to TANF recipients and paid more attention to case management services, child care and transportation assistance, and other services to help individuals, including those who faced some barriers to employment, become job ready. 
Second, many states took steps to help parents who had left the welfare rolls for employment, often by continuing to provide child care assistance, sometimes using TANF funds to supplement other federal funds used for child care subsidies for low-income parents. Our work has shown that many former welfare recipients work in low-wage jobs with limited benefits and that continued assistance, such as child care subsidies, can help them maintain their jobs. Third, states also used TANF and MOE funds to provide a range of services to families that had not previously received cash welfare payments. These services can include onetime payments to families in need, such as for rent payments that might help keep them off the welfare rolls. Some states increased efforts to promote healthy marriages and two-parent families. All of these uses of TANF and MOE funds are generally considered in keeping with the broad goals established in the legislation. As specified by law, the purpose of TANF is to provide assistance to needy families so that children may be cared for in their own homes or in the homes of relatives; end the dependence of needy families on government benefits by promoting job preparation, work, and marriage; prevent and reduce the incidence of out-of-wedlock pregnancies; and encourage the formation and maintenance of two-parent families. This shift to aid and services other than cash assistance is mirrored in our analysis of states’ spending patterns for TANF and MOE funds. Figure 6 shows the percentage of TANF and MOE funds (combined) that each state spent in each spending category in 1995, 2000, and 2004. (This figure only includes TANF and MOE spending, in contrast to figure 4, which showed the percentage of total federal and state low-income spending that each state spent in each category.) 
For example, figure 6 shows that California spent more than 90 percent of its federal and state AFDC-related funds on cash assistance in 1995 compared with 68 percent of its federal and state TANF-related funds in 2004. As the share of funds devoted to cash assistance declined in that state, the portion devoted to employment services and training, in particular, increased. In seven of the nine states, by 2004, cash assistance spending accounted for 40 percent or less of total TANF and MOE spending. States varied in how their TANF and MOE funds were distributed among the noncash categories. This shift to noncash assistance was curtailed somewhat from 2000 to 2004, when cash assistance caseloads and related spending increased in several of the states, associated with a contraction of spending for other forms of aid and services, as shown in figure 6. During this period, state officials generally had to make different choices about what services and programs they could support with TANF and MOE funds to ensure they had enough funds to support the core cash assistance program. Some state officials told us that they drew down their TANF balances or reserves to help them maintain service levels. Regarding these balances, most of the nine states followed a pattern of initially building up their TANF balances and then drawing them down in the 2000-2004 time period to help them maintain services, as shown in figure 7. Over the decade, we found that the states used their federal and state TANF-related funds throughout their budgets for low-income individuals, supporting a wide range of state priorities, such as refundable state EICs for the working poor, prekindergarten, child welfare services, mental health, and substance abuse services, among others.
While some of this spending, such as that for child care assistance, relates directly to helping cash assistance recipients leave and stay off the welfare rolls, other spending is directed to a broader population and set of state needs. The flexibility afforded states under TANF allows them to use these funds toward their state priorities. Some examples include the following: Oregon—home to a large refugee resettlement population—spent TANF funds on cash benefits and other refugee services. Oregon also spent TANF and MOE funds on emergency assistance for survivors of domestic abuse. New York and Wisconsin use federal TANF or state MOE funds for refundable tax credits. New York has increased the extent to which it counts state spending for the refundable portion of its EIC and dependent care tax credit to help it meet its MOE requirement. Wisconsin has used federal TANF funds to finance the refundable portion of its state EIC that previously had been financed with state funds, as we reported in our earlier report on these states’ use of funds. Michigan uses TANF funds for emergency homeless shelters and programs for runaways. TANF funds are also used for individual development accounts, which provide funds to eligible families to match their own funds to encourage them to save for educational purposes. According to state officials, Texas used MOE funds for prekindergarten for low-income children with low English proficiency. Texas also used TANF funds for an employment retention and advancement program for working people. California counts state funds used for the California Food Assistance Program toward its MOE requirement and uses TANF funds for juvenile probation services and fraud prevention incentive grants to counties. Maryland spent TANF funds through the state Department of Education for the Children At Risk program. 
According to the Governor’s Budget, this program provides services for pregnant and parenting teenagers and provides funds to reduce the number of students who drop out of school each year, prevent youth suicides, reduce the incidence of child alcohol and drug abuse, and reduce AIDS among students. According to state officials, Louisiana, after initially building up a large TANF balance, took steps from 2002 to 2004 to spend down these funds, in some cases through short-term initiatives to be supported only until funding ran out. Some of these spending initiatives included prekindergarten, which state officials noted is a priority of the governor; funds to address teen pregnancy; and support for child welfare advocates. While current mechanisms in place at the federal level to hold states accountable for their use of federal TANF and state MOE funds provide useful information, these reporting mechanisms still leave significant gaps that hamper oversight. The new federal welfare program goals and fiscal structure established in 1996 entailed substantial changes in federal oversight and reporting mechanisms. At the federal level, HHS is responsible for oversight of the TANF block grant, and states provide several types of information for oversight purposes. 
Key oversight and reporting mechanisms are expenditure reports on the amount and type of federal TANF and state MOE spending; state plans that each state must file with HHS to outline its TANF programs and goals, among other things, for reducing out-of-wedlock pregnancies; annual reports that each state must file with HHS to supplement its state plan; aggregate caseload and individual reporting on demographic and economic circumstances and work activities of individuals receiving TANF cash assistance; single audit reports conducted as part of governmentwide audits of federal aid to nonfederal entities; performance bonuses related to measures of job entry, job retention, and wage growth for TANF recipients and also for reducing out-of-wedlock births; and financial penalties in 14 specified areas, including failure to meet the state MOE requirement and the minimum work participation rates. In addition, HHS funding supports a range of research activities that provide additional information on TANF recipients and other low-income populations. These reporting mechanisms and information sources generally provide useful information on states' use of TANF and MOE funds, although key information gaps remain. One such gap exists because the key measure of the number of people served through the block grant remains focused on families receiving TANF assistance, defined in TANF regulations as benefits designed to meet a family's ongoing basic needs, which most typically occurs through receipt of monthly cash assistance. This measure does not provide a complete picture of the number of people receiving other forms of aid or services funded with TANF and MOE funds. In 2002, we estimated that in the 25 states we studied, at least 46 percent more families than are counted in the TANF caseload are provided aid or services with TANF and MOE dollars.
In addition, we reported in June 2005 that the lack of information on the numbers of children and families receiving child care subsidies funded by TANF and the types of care received leads to an incomplete picture of the federal role in providing child care subsidies to low-income parents. In that report, we suggested that Congress may wish to require HHS to find cost-effective ways to address this specific gap to provide additional information of value to policymakers and program managers in ensuring the efficiency, effectiveness, and accountability of federal supports for child care. Additional information on the full range of people served by TANF and MOE funds is essential for a better understanding of the true workload of the grant. Caseload or workload information is important for oversight and policy-making purposes, particularly those related to the amount of and needs associated with the block grant. For example, as the cash assistance caseload declined by more than half nationwide, questions arose as to whether adjustments were needed to the block grant funding levels. At the same time, because the amount of the block grant has not been adjusted for inflation since its creation in 1996, concerns have been raised about its declining value and the possible impact on meeting needs. Better information could inform these discussions. While having more information on the numbers served is important, it is also critical to make a distinction between those receiving cash assistance and other types of assistance, because different program requirements apply to families in different situations. More specifically, under TANF, families receiving ongoing cash assistance are generally subject to work requirements, time limits, and other requirements, in part to emphasize the transitional nature of assistance and to help ensure that recipients take steps to prepare for work.
Those receiving other forms of aid outside of a state's TANF program through a separate state program, such as working parents receiving child care subsidies, are not subject to requirements such as time limits on aid. Another information gap relates to what services are funded and how those services fit into a strategy or approach for meeting TANF goals. This would include information about intended target populations and the strategy or approach for using the funds to further welfare reform goals. For example, additional information on the extent to which TANF and MOE funds were used to support work requirements for cash assistance recipients is important to understanding the costs of supporting a state's core TANF program. It is also important to have additional information to better understand the costs involved in providing aid to those transitioning off of welfare and to a more general population, such as for prekindergarten services or to supplement a state's refundable EIC program. Such information would be useful to congressional policymakers in considering changes to TANF work requirements and implications for the provision of other services, a key issue in TANF reauthorization deliberations. In creating the TANF block grant, Congress emphasized the importance of state flexibility, and to that end, the legislation restricted HHS regulatory authority over the states except to the extent expressly provided in the law. Regarding collecting additional information about services beyond cash assistance, while HHS has acknowledged the value of having additional information, it has said that it will not collect this information without legislative changes directing it to do so. In any effort to get more information or to increase or revise program and fiscal reporting requirements, important considerations should be taken into account.
In our report on the current undercounting of those served by TANF, some state officials raised concerns about the possibility of additional TANF reporting requirements being imposed on states to collect information on families not included in the TANF caseload. These concerns included that (1) states lack the information systems needed to fulfill additional requirements, (2) fulfilling additional requirements will increase administrative costs, (3) additional data collection requirements could deter states and service providers from offering services because they would not want the administrative burden associated with them, and (4) requiring all service recipients to provide personal identifying information for every service may deter some people from accessing services because of the stigma associated with welfare. While many of these concerns are legitimate, they do not necessarily outweigh the importance of getting needed information for oversight and policy making and can be considered in addressing any changes. In addition, there may be a variety of ways to get needed information, some more cost-effective than others, including relying on existing data sources or special studies. Moreover, opportunities may exist to streamline or eliminate some reporting requirements to make way for more relevant ones, as determined by Congress, HHS, and the states. In the past, Congress has included in legislation a requirement that HHS cooperate with states—key stakeholders in welfare reform—in considering aspects of monitoring state programs and performance. HHS has worked with state and human services professional organizations to discuss and receive input on information requirements and performance standards in the past. National-level data show that the trend away from cash assistance spending has occurred nationwide. 
States are using substantial portions of their block grants and MOE funds as large, flexible funding streams to meet their priorities in many areas of their budgets for low-income families, yet much remains unknown at the national level about how these federal TANF and state MOE funds are used to meet the overall goals of welfare reform. Ten years after Congress passed sweeping welfare reforms, much has changed in how federal and state dollars support programs for low-income and at-risk individuals. Some trends raise issues for the future. Overall, spending is up, but state budgets for low-income individuals are increasingly dominated by health care spending. To the extent that this trend continues or becomes more pronounced, it warrants attention as to its effect on state spending to meet other needs of low-income individuals. Another key trend was the shift in nonhealth spending priorities away from cash assistance to greater emphasis on supporting low-income individuals’ work efforts. However, the greatest increases came right after welfare reform during the strong economy, while some contraction in spending was apparent in the latter period. This raises questions about the sustainability of this shift. In addition, in the new welfare environment, too much remains unknown about how TANF block grant funds are spent to meet welfare goals. A natural tension exists with block grants that is not easily addressed. A key challenge is to strike an appropriate balance between flexibility for states and accountability for federal goals. This is particularly important given the large dollar amount of the TANF block grant—over $16 billion in federal funds annually. With the current accountability and reporting structure for TANF, the information gaps hamper decision makers in making informed choices about how best to spend federal funds to assist vulnerable populations cost effectively. 
At the same time, consideration needs to be given to collecting needed information in a way that minimizes reporting burden and acknowledges the importance of flexibility in addressing state and local needs. To better inform its oversight and decision-making process, Congress should consider ways to address two key information gaps for the TANF block grant: (1) insufficient information on the numbers served by TANF funds and (2) limited information on how funds are used—for example, on which target populations and as part of what strategies and approaches—to meet TANF goals. Efforts to obtain more information must take into account how to do so in the most cost-effective and least burdensome way. Some options include Congress directing the Secretary of HHS to require states to include more information in state TANF plans filed with HHS on their strategies and approaches for using funds; to require states to include more information on all aspects of TANF spending in the annual reports they must file with HHS; and to revise other reporting requirements regarding the uses and recipients of TANF-related funds. Congress may wish to require the Secretary to consult with key welfare reform stakeholders in assessing and revising reporting requirements or information-gathering strategies. We provided a draft of this report to HHS for review. In its written comments, which appear in appendix VI, HHS agreed that additional information on states' use of TANF funds would be valuable and that expanded data collection requirements should be done in a cost-effective manner and in consultation with stakeholders. HHS also provided technical comments that we incorporated where appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date.
At that time, we will send copies of this report to the Secretary of Health and Human Services, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. In addition, the report is available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact David D. Bellis at (415) 904-2272 or Stanley J. Czerwinski at (202) 512-6520. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII. In order to provide information on welfare-related spending over the decade since welfare reform, we designed our study to (1) examine changes in the overall level of welfare-related spending for nonhealth and health services in the periods before and after the recession in 2001 and over the decade since 1995, (2) examine changes in spending priorities for nonhealth welfare-related services during the same time periods, and (3) review the contribution of Temporary Assistance for Needy Families (TANF) funds to states' spending for welfare-related services. To address these objectives, we used a survey instrument to collect state spending data from state budget and program officials in nine states examined in our prior reports; conducted site visits in these nine states; and reviewed information available from prior GAO work, relevant federal agencies, and other organizations. The nine states in our study—California, Colorado, Louisiana, Maryland, Michigan, New York, Oregon, Texas, and Wisconsin—represent a diverse set of socioeconomic characteristics, geographic regions, population sizes, and experiences with state welfare initiatives. For the purposes of this report, we focused on spending for working-age adults and children and excluded spending for the elderly, long-term care, and institutional care.
The term welfare-related refers to spending for low-income and at-risk individuals, including TANF-eligible and non-TANF eligible individuals. Because our focus was on states' budgetary decisions, we excluded federal program spending about which states do not make key budget decisions, such as food stamp benefits, the Earned Income Tax Credit, Supplemental Security Income, and other programs; as a result, our data do not capture all federal spending for low-income individuals. To obtain data on welfare-related spending over the decade since welfare reform, we asked state budget and program officials from each state's central budget office and relevant state agencies to identify welfare-related spending data using the same survey instrument and criteria used in our prior report. (See app. V.) We worked closely with state officials to complete the survey during our site visits and through numerous telephone and e-mail contacts. Because parts of the survey were completed by different state officials, we also provided the states with the data we compiled for their review as well as data summaries of our analysis. We collected budget data and program information for three points in time based on state fiscal years: for 1995 before the passage of federal welfare reform legislation; for 2000; and for 2004, the most recent year for which data were available. Consistent with our prior methodology, we used the survey to take a comprehensive look at state social service program budgets by encouraging states to provide spending data on a broad array of programs, rather than just those programs that received federal TANF funding. Our study includes federal, state, and local spending associated with Medicaid, TANF, housing assistance, child care and welfare, and a myriad of other programs aimed at needy populations and for which states make key budgetary decisions. State budget structures differ across states.
Some states in our analysis used biennial budgets, while others used annual budgets. Some states place employment and training programs primarily in their social services departments; others place these programs in their economic development departments. Some states place responsibility for welfare programs with county governments. These differences make comparisons of state budgets and spending difficult. By asking states to report spending on individual programs, regardless of which state agency oversaw these programs, and then aggregating the spending into the same categories for each state, we were able to compare state spending trends across all of the states. As figure 8 shows, we classified spending data in several key ways, including nonhealth spending—cash assistance (Category 1), employment services and training (Category 2), work and other supports (Category 3), and aid for the at-risk (Category 4)—and health spending (Category 5), which we generally separated from nonhealth spending in our analysis. Our first spending category includes state spending for ongoing cash assistance payments with federal or state moneys under the Aid to Families with Dependent Children (AFDC), TANF, or other state programs. This category corresponds most closely with traditional monthly cash assistance payments under the AFDC program. Our second spending category includes spending for job and training programs that seek to prepare people for employment. Our third spending category includes programs that seek to support low-income people with other forms of aid or services, including helping families move from welfare to work or avoid welfare altogether. For example, child care subsidies and rental assistance payments can help parents remain employed even if they are working in low-wage jobs. Our fourth spending category recognizes the range of programs that states can use to develop strategies to achieve TANF's goals.
These spending areas include child welfare programs, substance abuse programs, mental health programs, and programs that help the developmentally disabled attain a level of self-sufficiency, and exclude spending for any individuals in institutions. While many of these state spending areas may not have income standards to determine eligibility, a state can claim TANF funds for expenditures in these areas if the state is able to certify that participants in these programs meet the eligibility requirements set forth in the state's TANF plan. Our fifth spending category includes spending for health services aimed at low-income people but excludes spending for the elderly, long-term care, and institutional care. Analyzing health care spending helps capture a state's true and substantial investment in supporting these low-income and needy populations. In general, our spending categories were designed to cover all areas of a state's budget associated with the TANF-eligible population and allowable expenses under TANF as well as for other low-income children and individuals of working age. We analyzed state spending of both federal and state funds on a wide array of programs aimed at providing services to the needy and that flowed through the state budget. In this analysis, federal spending is not defined by the level of a federal grant allocated to a state, but rather by how much of the grant the state chooses—or in some cases is required—to spend on a particular activity. For this reason we did not consider a number of 100 percent federally funded programs that do not flow through the state budget. For example, the food stamp program is administered by the state and the shared administrative costs are included in the survey, but the value of the food stamp coupons disbursed in the fiscal year, borne 100 percent by the federal government, is not.
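The five-category classification described above can be sketched as a simple mapping-and-aggregation step. In the sketch below, every program name, amount, and category assignment is invented for illustration; none of it is data from the nine states.

```python
# Hypothetical sketch of the report's five-category aggregation scheme.
# Each state-reported program is assigned to one category; category totals
# are then comparable across states regardless of which agency ran a program.
from collections import defaultdict

CATEGORIES = {
    1: "Cash assistance",
    2: "Employment services and training",
    3: "Work and other supports",
    4: "Aid for the at-risk",
    5: "Health",
}

# (program, category, spending in millions of 2004 dollars) -- illustrative only
programs = [
    ("TANF monthly cash payments", 1, 420.0),
    ("Job readiness and training", 2, 85.0),
    ("Child care subsidies", 3, 210.0),
    ("Foster care and adoption assistance", 4, 150.0),
    ("Medicaid services for the nonelderly", 5, 980.0),
]

totals = defaultdict(float)
for name, category, amount in programs:
    totals[CATEGORIES[category]] += amount

# Health spending is reported separately from the four nonhealth categories.
nonhealth = sum(v for k, v in totals.items() if k != "Health")
for cat, amount in totals.items():
    print(f"{cat}: ${amount:.0f} million")
print(f"Total nonhealth spending: ${nonhealth:.0f} million")
```

Aggregating at the program level, rather than by agency, is what makes the cross-state comparison possible despite differing budget structures.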
Likewise, if a state budget action prompted local spending in these areas, through incentives like a state-local match, then local spending was included in our analysis. We converted state spending data to real 2004 dollars to make spending more comparable over time. To obtain program description and recipient eligibility information on the spending data we collected, we also spoke with budget and program officials in these nine states knowledgeable about state TANF programs, Medicaid programs, and other state programs supporting the spending we captured in our survey. We also gathered information about the fiscal and economic environment in each state since state fiscal year (SFY) 2000, the last data year in our prior report, and a period that included a national recession in 2001. We worked closely with state officials to complete the survey. Once the state program and budget officials identified the program spending to include in the survey, we verified through program documentation and discussions with these state officials that the program descriptions, targeted beneficiaries, and program goals met the survey criteria. To obtain information about policy and program developments for welfare and other related program spending data collected in our survey, we reviewed reports and information readily available from our prior work, relevant federal agencies, state governments, and local advocacy groups. We took several steps to determine the completeness and accuracy of data obtained from states. We reviewed related documentation and examined the data for obvious omissions and errors and to have reasonable assurance that the spending data were comparable over the three years in our analysis. We also collected information and audit reports on the systems state officials used to provide state spending data. We did not test the data systems ourselves. In some cases, state auditors found weaknesses with relevant agency data systems or internal controls.
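The conversion to real 2004 dollars mentioned above is the standard deflator calculation: nominal spending times the ratio of the base-year price index to the spending-year index. The sketch below uses placeholder index values, since the report does not name the deflator it used.

```python
# Minimal sketch of converting nominal state spending to real 2004 dollars.
# DEFLATOR values are assumed placeholders (2004 = 100), not the index GAO used.

DEFLATOR = {1995: 81.5, 2000: 91.0, 2004: 100.0}

def to_real_2004(nominal: float, year: int) -> float:
    """Convert a nominal-dollar amount to real 2004 dollars."""
    return nominal * DEFLATOR[2004] / DEFLATOR[year]

# $100 million spent in 1995, expressed in 2004 dollars:
print(f"${to_real_2004(100.0, 1995):.1f} million (2004 dollars)")  # $122.7 million
```

Without this adjustment, 1995 and 2004 amounts would not be directly comparable, and nominal growth would overstate real growth.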
However, for the purposes of examining aggregate welfare-related spending across state budgets, and identifying the purposes of spending within these aggregates, we found the survey data we collected to be sufficiently reliable for use in this report. We determined the completeness and accuracy of data obtained from the Department of Health and Human Services (HHS) based on interviews and related documentation and determined that the data were sufficiently reliable for use in this report.

1. Provide expenditures for two state fiscal years (not the federal fiscal year): 1999-2000 and 2003-2004. For most states these would be the fiscal years that ended June 30, 2000, and June 30, 2004; for Texas they ended August 31, 2000, and August 31, 2004; for New York, March 31, 2000, and March 31, 2004. Please note that there are four tabs at the bottom of the spreadsheet that identify four separate worksheets to be filled out. There are instructions for the childcare and healthcare surveys on their spreadsheets. Instructions for the social services survey are attached.

2. Identify all state programs serving social service needs that are targeted towards reducing dependence on public assistance. Except where noted, include programs that serve both TANF-eligible and non-TANF-eligible clients.

3. Distribute a copy of the survey to all agencies that oversee these programs. Please explain to these agencies what MOE means and what funds should be shown in the MOE column. You may wish to refer them to the spending guide at http://www.acf.dhhs.gov/programs/ofa/funds2.htm.

4. Include all federal, state, and local expenditures that are incorporated in the state budget. For local expenditures, include local spending of locally raised revenue that is incorporated in the state budget, such as a local match. Include expenditures or estimated expenditures only (not amounts budgeted or authorized).

5. Include all expenditures for each program serving social service needs, including but not limited to TANF and MOE expenditures. Include all TANF spending; if some TANF expenditures do not fit into one of the specific program categories, include them in one of the lines labeled “other.” If TANF funds are transferred to SSBG or the CCDF, please place them in those columns (if possible, label them separately from other SSBG or CCDF funds by inserting a row or a note).

6. Please be careful to count expenditures only once!

7. Do not include capital expenditures.

8. Do not include indirect administrative costs or management information systems (MIS) expenditures, but do include direct administrative costs such as case management expenditures in relevant program line items. If it is impossible to break out these direct costs by program, include them in the “other” lines under the most relevant sub-category. If necessary, estimate the percentage of direct costs that apply to the programs eligible for our survey. Include the costs of fringe benefits for state personnel. (A rough estimate of fringe benefit costs is all that is necessary.)

9. Identify funding streams included in the columns labeled “Other” on a separate worksheet.

10. For columns labeled “SSBG” (Social Services Block Grant): If state officials cannot isolate spending on individual programs, obtain either (1) estimates for these amounts or (2) totals with an explanation of the general areas in which SSBG funds are spent. For TANF funds transferred from TANF to SSBG, document them as SSBG expenditures with a note on the level attributable to the transfer.

11. Compile and provide copies of all supporting documentation for the data entered in the survey, e.g., expenditure reports, annual financial statements.

12. When possible, identify the caseload and eligibility criteria for each program and provide supporting documentation.
Suggested format for data submission: For example, Line 2b, Work Preparation, could include expenditures from several programs across two or more agencies. Each of these agencies would complete the survey as well as provide the supporting documentation. The various agencies’ contributions could be compiled and summarized.

Line 1b: child support payments. Include all child support collections from non-custodial parents that are passed on to custodial parents who are receiving cash assistance through TANF, in excess of $50 per monthly payment.

Line 1c: emergency assistance. Include all expenditures for emergency assistance, including prevention of eviction, utility cut-off, etc. Document, to the extent possible, how emergency assistance funds are allocated.

Line 1d: food assistance. Include expenditures on programs designed to provide food or nutritional assistance to low-income people. Do not include any 100% federally funded program such as free or reduced-price school breakfast or lunch programs. Include, however, the state and federal expenditures on administrative expenses for those programs and any state supplemental programs.

Line 1e: housing assistance. Include expenditures on programs designed to provide housing assistance to low-income people, such as vouchers, state low-income housing tax credits, or any other state support for low-income housing efforts.

Line 1f: SSI supplements. Include expenditures on state supplementation of the federal Supplemental Security Income program. Do not include federal expenditures.

Line 1g: other. Include expenditures on any other programs related to poverty relief that are not included above. Describe such programs on an attached sheet.

Line 2: Work Preparation and Education. Include expenditures in this category on lines 2a-2c below.

Line 2a: education and training. In this instance, limit spending to TANF-eligible people.

Line 2b: work preparation.
Include expenditures on programs to prepare low-income people who are not yet working with skills to make them employable. Examples include skills development programs, community service placements, and Workforce Investment Act programs. Do not include expenditures on people who are in the paid workforce.

Line 2c: other. Include expenditures on any other programs related to work preparation and support that are not included above. Describe such programs on an attached sheet.

Line 3: Employment Support. Include expenditures in this category on lines 3a-3f below.

Line 3a: post-employment services. Include expenditures on programs designed to keep people employed after they have found employment. Examples include coaching to ensure that individuals arrive at work on time, counseling to address problems that may arise in the workplace, and any other case management services for this working population. If known, include spending for on-the-job training.

Lines 3b and 3c: state EITC. Include expenditures on state earned income tax credits paid to families. Include state and local tax credits that are designed to defray the costs of employment for low-income families. On line 3c, do not include foregone state revenues as an expenditure.

Line 3f: other. Include expenditures on any other programs related to employment support that are not included above. Describe such programs on an attached sheet.

Line 4: Poverty Prevention. Include expenditures in this category on lines 4a-4c below.

Line 4c: other. Include expenditures on any other programs related to poverty prevention that are not included above. Describe such programs on an attached sheet.

Line 5: Child Protection/Juvenile Justice. Include expenditures in this category on lines 5a-5c below.

Line 5b: juvenile justice programs. Include expenditures on social services programs for youth who have violated the state juvenile code. Do not include institutional spending.

Line 5c: other.
Include expenditures on any other programs related to child protection/juvenile justice that are not included above. Describe such programs on an attached sheet.

Line 6: Other. Include expenditures in this category on lines 6a-6d below.

Line 6a: substance abuse prevention and treatment. Include expenditures on programs aimed to prevent alcohol, drug, and tobacco abuse and to provide intervention services to individuals with alcohol, drug, and/or tobacco dependency in their families. Examples of prevention programs are media campaigns, educational programs, and community-based planning programs. Examples of expenditures on treatment include counseling, short-term inpatient treatment facilities, and outpatient medical care.

Line 6b: developmental disabilities. Include expenditures on programs that provide services to individuals with developmental disabilities and their families, including outpatient care and public education, but excluding institutional facilities.

Line 6c: mental health services. Include expenditures on programs that provide prevention and/or intervention services to the mentally ill and their families, including community-based treatment facilities, outpatient care, and public education. Exclude all expenditures provided at/through mental health institutions.

Line 6d: other. Include expenditures on any other programs that are not included above. Describe such programs on an attached sheet.

TANF: Temporary Assistance for Needy Families
SSBG: Social Services Block Grant, Title XX of the Social Security Act
TANF-MOE: TANF Maintenance of Effort. See your state TANF director or http://www.acf.dhhs.gov/programs/ofa/funds2.htm

Instructions for Healthcare Coverage Spending Survey: Include expenditures on any healthcare program, in-home or out-of-home, aimed at low-income working or non-working people and their children, excluding long-term care. Include programs for both the TANF-eligible and non-TANF-eligible population, but exclude all programs for seniors.
Identify each program in the spaces below and their funding streams. Identify eligibility criteria for these programs, as well as caseloads, on an attached sheet. For Medicaid-funded programs, identify target populations (e.g., "transitional assistance," "expansion population") where possible. State expenditures should capture local spending if it flows through the state budget (e.g., a local match).

Instructions for Child Care/Child Development: Include expenditures on any child care or child development program, either custodial or educational, in-home or out-of-home, aimed at low-income working or non-working people, including pre-K programs, after-school programs, vouchers for child care, state expenditures on Head Start, subsidies to child care centers, and child care tax credits (if available). Include programs for both TANF-eligible and non-TANF-eligible people. Please identify each child care/child development program in the spaces below and identify their funding streams. Please identify eligibility criteria for these programs, as well as caseloads (numbers of children, not families, if possible), on an attached sheet.

In addition to the contacts named above, Paul Posner, Gale Harris, Tom James, Sandra Beattie, Rebecca Hargreaves, Cheri Harrington, Dorian Herring, Brittni Milam, and Keith Slade made key contributions to this report. In addition, Gregory Dybalski and Jerry Fastrup provided key analytical and technical support; Wesley Dunn provided legal support; and Katherine Bittinger, Allen Chan, Reid Jones, Tahra Nichols, Rudy Payan, John Rose, and Suzanne Sterling-Olivieri assisted with fieldwork in states.
Under the Temporary Assistance for Needy Families (TANF) block grant created as part of the 1996 welfare reforms, states have the authority to make key decisions about how to allocate federal and state funds to assist low-income families. States also make key decisions, through their budget processes, about federal and state funds associated with other programs providing assistance for the low-income population. States' increased flexibility under TANF as well as the budgetary stresses they experienced after a recession draw attention to the fiscal partnership between the federal government and states. To update GAO's previous work, this report examines (1) changes in the overall level of welfare-related spending; (2) changes in spending priorities for welfare-related nonhealth services; and (3) the contribution of TANF funds to states' spending for welfare-related services. GAO reviewed spending in nine states for state fiscal years 1995, 2000, and 2004 and focused on spending for working-age adults and children, excluding the elderly, long-term and institutional care. GAO found that spending for low-income people for health and nonhealth services in nine states generally increased in real terms from 1995 to 2000 and from 2000 to 2004. Health spending, excluding spending for the elderly, outpaced nonhealth spending over the decade and now consumes an even greater share of total spending for low-income people, mirroring a nationwide expansion in health care costs. Spending increases were substantially supported by both federal and state funds in the health and nonhealth areas in each time period, reflecting the important federal-state partnership supporting these low-income programs. Overall, spending increases reflected changes in eligible populations and needs, increasing costs, as well as policy changes. 
While nonhealth spending increased in real terms, spending priorities shifted away from cash assistance to other forms of aid, particularly work supports, in keeping with welfare reform goals. The largest increases for noncash services occurred from 1995 to 2000, with smaller increases from 2000 to 2004, when some state officials cited challenges in maintaining services. By 2004, states used federal and state TANF funds to support a broad range of services, in contrast to 1995, when spending priorities focused more on cash assistance. However, reporting and oversight mechanisms have not kept pace with the evolving role of TANF funds in state budgets, leaving national-level information gaps about the numbers of people served and how states use funds to meet welfare reform goals; these gaps hamper oversight. Any efforts to address these gaps should strike an appropriate balance between flexibility for state grantees and accountability for federal funds and goals.
The Social Security Act of 1935 authorized SSA to establish a record-keeping system to help manage the Social Security program, and this resulted in the creation of the SSN. Through a process known as enumeration, SSA creates a unique number for each person to serve as a work and retirement benefit record for the Social Security program. SSA generally issues SSNs to most U.S. citizens, and SSNs are also available to noncitizens lawfully admitted to the United States with permission to work. SSA estimates that approximately 277 million individuals currently have SSNs. The SSN has become the identifier of choice for government agencies and private businesses, and thus it is used for a myriad of non-Social Security purposes. The growth in the use of SSNs is important to individual SSN holders because these numbers, along with names and birth certificates, are among the three personal identifiers most often sought by identity thieves. In addition, SSNs are used as breeder information to create additional false identification documents, such as drivers’ licenses. Recent statistics collected by federal agencies and CRAs indicate that the incidence of identity theft appears to be growing. The Federal Trade Commission (FTC), the agency responsible for tracking identity theft, reported that consumer fraud and identity theft complaints grew from 404,000 in 2002 to 516,740 in 2003. In 2003, consumers also reported losses from fraud of more than $437 million, up from $343 million in 2002. In addition, identity crimes account for over 80 percent of SSN misuse allegations, according to SSA. Also, officials from two of the three national CRAs report an increase in the number of 7-year fraud alerts placed on consumer credit files, which they consider to be a reliable indicator of the incidence of identity theft.
Law enforcement entities report that identity theft is almost always a component of other crimes, such as bank fraud or credit card fraud, and may be prosecuted under the statutes covering those crimes. Private sector entities such as information resellers, CRAs, and health care organizations routinely obtain and use SSNs. Such entities obtain the SSNs from various public sources and from business clients wishing to use their services. We found that these entities use SSNs for various purposes, such as to build tools that verify an individual’s identity or match existing records. Certain federal laws have limited the disclosures private sector entities are allowed to make to their customers, and some states have also enacted laws to restrict the private sector’s use of SSNs. Private sector entities such as information resellers, CRAs, and health care organizations generally obtain SSNs from various public and private sources and use SSNs to help identify individuals. Of the various public sources available, large information resellers told us they obtain SSNs from various records displayed to the public, such as records of bankruptcies, tax liens, civil judgments, criminal histories, deaths, real estate ownership, driving histories, voter registrations, and professional licenses. Large information resellers said that they try to obtain SSNs from public sources where possible, and to the extent public record information is provided on the Internet, they are likely to obtain it from such sources. Some of these officials also told us that they have staff who go to courthouses or other repositories to obtain hard copies of public records. Additionally, they obtain batch files of electronic copies of all public records from some jurisdictions. Given the varied nature of SSN data found in public records, some reseller officials said they are more likely to rely on receiving SSNs from their business clients than on obtaining SSNs from public records.
These entities obtain SSNs from their business clients, who provide SSNs in order to obtain a reseller’s services or products, such as background checks, employee screening, determining criminal histories, or searching for individuals. Large information resellers also obtain SSN information from private sources. In many cases, such information comes from data that customers have voluntarily supplied about themselves. In addition, large reseller officials said they also use their clients’ records in instances where the client has provided them with information. We also found that Internet-based resellers rely extensively on public sources and records displayed to the public. These resellers listed on their Web sites public information sources, such as newspapers, and various kinds of public record sources at the county, state, and national levels. During our investigation, we determined that once Internet-based resellers obtained an individual’s SSN, they relied on information in public records to help verify the individual’s identity and amass information around the individual’s SSN. Like information resellers, CRAs also obtain SSNs from public and private sources as well as from their customers or the businesses that furnish data to them. CRA officials said that they obtain SSNs from public sources, such as bankruptcy records, a fact that is especially important in determining that the correct individual has declared bankruptcy. CRA officials also told us that they obtain SSNs from other information resellers, especially those that specialize in obtaining information from public records. However, SSNs are more likely to be obtained from businesses that subscribe to their services, such as banks, insurance companies, mortgage companies, debt collection agencies, child support enforcement agencies, credit grantors, and employment screening companies.
Individuals provide these businesses with their SSNs for reasons such as applying for credit, and these businesses voluntarily report consumers’ charge and payment transactions, accompanied by SSNs, to CRAs. We found that health care organizations were less likely to rely on public sources for SSN data. Health care organizations obtain SSNs from individuals themselves and from companies that offer health care plans. For example, subscribers or policyholders provide health care plans with their SSNs through their company or employer group when they enroll in health care plans. In addition to health care plans, health care organizations include health care providers, such as hospitals. Such entities often collect SSNs as part of the process of obtaining information on insured people. However, health care officials said that, particularly with hospitals, the medical record number rather than the SSN is the primary identifier. Information resellers, CRAs, and health care organization officials all said that they use SSNs to verify an individual’s identity. Most of the officials we spoke to said that the SSN is the single most important identifier available, mainly because it is truly unique to an individual, unlike an individual’s name and address, which can often change over an individual’s lifetime. Large information resellers said that they generally use the SSN as an identity verification tool. Some of these entities have incorporated SSNs into their information technology, while others have incorporated SSNs into their clients’ databases used for identity verification. For example, one large information reseller that specializes in information technology solutions has developed a customer verification data model that aids financial institutions in their compliance with some federal laws regarding “knowing your customer.” We also found that Internet-based information resellers use the SSN as a factor in determining an individual’s identity. 
We found these types of resellers to be more dependent on SSNs than the large information resellers, primarily because their focus is more related to providing investigative or background-type services to anyone willing to pay a fee. Most of the large information reseller officials we spoke to said that although they obtain the SSN from their business clients, the information they provide back to their customers rarely contains the SSN. Almost all of the officials we spoke to said that they provide their clients with a truncated SSN, an example of which would be xxx-xx-6789. CRAs use SSNs as the primary identifier of individuals, which enables them to match the information they receive from their business clients with the information stored in their databases on individuals. Because these companies have various commercial, financial, and government agencies furnishing data to them, the SSN is the primary factor that ensures that incoming data is matched correctly with an individual’s information on file. For example, CRA officials said they use several factors to match incoming data with existing data, such as name, address, and financial account information. If all of the incoming data except the SSN match existing data, then the SSN will determine the correct person’s credit file. Given that people move, get married, and open new financial accounts, these officials said that it is hard to distinguish among individuals. Because the SSN is the one piece of information that remains constant, they said that it is the primary identifier that they use to match data. Health care organizations also use the SSN to help verify the identity of individuals. These organizations use SSNs, along with other information, such as name, address, and date of birth, as a factor in determining a member’s identity.
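The matching logic the CRA officials describe, in which mutable fields such as name and address may disagree while the constant SSN anchors the match, can be illustrated with a minimal sketch. The field names, threshold, and rule below are hypothetical simplifications, not the proprietary matching systems CRAs actually use.

```python
# Hypothetical sketch of SSN-anchored record matching. The SSN, which
# stays constant over a lifetime, decides which credit file incoming
# data belongs to; mutable fields merely corroborate the match.

def matches_file(incoming: dict, on_file: dict) -> bool:
    """Return True if incoming data should be merged into this file."""
    if incoming["ssn"] != on_file["ssn"]:
        return False  # per CRA officials, the SSN determines the file
    # People move, marry, and open new accounts, so with the SSN
    # anchored we require only one mutable field to agree (an
    # illustrative threshold, not an actual CRA rule).
    corroborating = sum(incoming[f] == on_file[f] for f in ("name", "address"))
    return corroborating >= 1

on_file = {"ssn": "123-45-6789", "name": "J. Doe", "address": "12 Elm St"}
moved = {"ssn": "123-45-6789", "name": "J. Doe", "address": "99 Oak Ave"}
assert matches_file(moved, on_file)  # SSN anchors the match after a move
```

The sketch mirrors the officials' point: when everything but the SSN agrees, the SSN still decides, because names and addresses cannot reliably distinguish individuals.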
Health care officials said that health care plans, in particular, use the SSN as the primary identifier of an individual, and it often becomes the customer’s insurance number. Health care officials said that they use SSNs for identification purposes, such as linking an individual’s name to an SSN to determine if premium payments have been made. They also use the SSN as an online services identifier, as an alternative policy identifier, and for phone-in identity verification. Health care organizations also use SSNs to tie family members together where family coverage is used, to coordinate member benefits, and as a cross-check for pharmacy transactions. Health care industry association officials also said that SSNs are used for claims processing, especially with regard to Medicare. According to these officials, under some Medicare programs, SSNs are how Medicare identifies benefits provided to an individual. Certain federal and state laws have placed restrictions on certain private sector entities’ use and disclosure of consumers’ personal information, including SSNs. Such laws include the Fair Credit Reporting Act (FCRA), the Gramm-Leach-Bliley Act (GLBA), the Driver’s Privacy Protection Act (DPPA), and the Health Insurance Portability and Accountability Act (HIPAA). As shown in table 1, the laws either restrict the disclosures that entities such as information resellers, CRAs, and health care organizations are allowed to make to specific purposes or restrict whom they are allowed to give the information to. Moreover, as shown in table 1, these laws focus on limiting or restricting access to certain personal information and are not specifically focused on information resellers. See appendix I for more information on these laws. We reviewed selected legislative documents of 18 states and found that at least 6 states have enacted their own legislation to restrict either the display or use of SSNs by the private sector.
Notably, in 2001, California enacted Senate Bill (SB) 168, restricting private sector use of SSNs. Specifically, this law generally prohibits companies and persons from certain uses, such as posting or publicly displaying SSNs and printing SSNs on cards required to access the company’s products or services. Furthermore, in 2002, shortly after the enactment of SB 168, California’s Office of Privacy Protection published recommended practices for protecting the confidentiality of SSNs. These practices were to serve as guidelines to assist private and public sector organizations in handling SSNs. Similar to California’s law, Missouri’s law (2003 Mo. SB 61), which is not effective until July 1, 2006, bars companies from requiring individuals to transmit SSNs over the Internet without certain safety measures, such as encryption and passwords. However, while SB 61 prohibits a person or private entity from publicly posting or displaying an individual’s SSN “in any manner,” unlike California’s law, it does not specifically prohibit printing the SSN on cards required to gain access to products or services. In addition, Arizona’s law (2003 Ariz. Sess. Laws 137), effective January 1, 2005, restricts the use of SSNs in ways very similar to California’s law. However, in addition to the private sector restrictions, it adds certain restrictions for state agencies and political subdivisions. For example, state agencies and political subdivisions are prohibited from printing an individual’s SSN on cards and certain mailings to the individual. Last, Texas prohibits the display of SSNs on all cards, while Georgia’s and Utah’s laws are directed at health insurers and, therefore, pertain primarily to insurance identification cards. None of these three laws contain the provisions mentioned above relating to Internet safety measures and mailing restrictions. Table 2 lists states that have enacted legislation and related provisions.
Agencies at all levels of government frequently obtain and use SSNs. A number of federal laws require government agencies to obtain SSNs, and these agencies use SSNs to administer their programs, verify applicants’ eligibility for services and benefits, and do research and evaluation. In addition, given the open nature of certain government records, SSNs appear in some records displayed to the public. Given the potential for misuse, some government agencies are taking steps to limit their use and display of SSNs and prevent the proliferation of false identities. Government agencies obtain SSNs because a number of federal laws and regulations require certain programs and federally funded activities to use the SSN for administrative purposes. Such laws and regulations require the use of the SSN as an individual’s identifier to facilitate automated exchanges that help administrators enforce compliance with federal laws, determine eligibility for benefits, or both. For example, the Internal Revenue Code and regulations, which govern the administration of the federal personal income tax program, require that individuals’ SSNs serve as taxpayer identification numbers. A number of other federal laws require program administrators to use SSNs in determining applicants’ eligibility for federally funded benefits. The Social Security Act requires individuals to provide their SSNs in order to receive benefits under the SSI, Food Stamp, Temporary Assistance for Needy Families, and Medicaid programs. In addition, the Commercial Motor Vehicle Safety Act of 1986 requires the use of SSNs to identify individuals and established the Commercial Driver’s License Information System, a nationwide database where states may use individuals’ SSNs to search the database for other state-issued licenses commercial drivers may hold. 
Federal law also requires the use of SSNs in state child support programs to help states locate noncustodial parents, establish and enforce support orders, and recoup state welfare payments from parents. The law also requires states to record SSNs on many other state documents, such as professional, occupational, and marriage licenses; divorce decrees; paternity determinations; and death certificates. Government agencies use SSNs for a variety of reasons. We found that most of these agencies use SSNs to administer their programs, such as to identify, retrieve, and update their records. In addition, many agencies also use SSNs to share information with other entities to bolster the integrity of the programs they administer. As unique identifiers, SSNs help ensure that the agency is obtaining or matching information on the correct person. Government agencies also share information containing SSNs for the purpose of verifying an applicant’s eligibility for services or benefits, such as matching records with state and local correctional facilities to identify individuals for whom the agency should terminate benefit payments. SSNs are also used to ensure program integrity. Agencies use SSNs to collect delinquent debts and even share information for this purpose. In addition, SSNs are used for statistics, research, and evaluation. Agencies responsible for collecting and maintaining data for statistical programs that are required by statute make use of SSNs. In some cases, these data are compiled using information provided for another purpose. For example, the Bureau of the Census prepares annual population estimates for states and counties using individual income tax return data linked over time by SSN to determine immigration rates between localities. SSNs also provide government agencies and others with an effective mechanism for linking data on program participation with data from other sources to help evaluate the outcomes or effectiveness of government programs.
Records containing SSNs are sometimes matched across multiple agency or program databases. Government agencies also use employees’ SSNs to fulfill some of their responsibilities as employers. For example, personnel departments of these agencies use SSNs to help them maintain internal records and provide employee benefits. In addition, employers are required by law to use employees’ SSNs when reporting wages. Wages are reported to SSA, and the agency uses this information to update earnings records it maintains for each individual. The Internal Revenue Service (IRS) also uses SSNs to match the employer wage reports with amounts individuals report on personal income tax returns. Federal law also requires that states maintain employers’ reports of newly hired employees, identified by SSNs. States must forward this information to a national database that is used by state child support agencies to locate parents who are delinquent in child support payments. Finally, SSNs appear in some government records that are open to the public. For example, SSNs may already be a part of a document that is submitted to a recorder for official preservation, such as veterans’ discharge papers. Documents that record financial transactions, such as tax liens and property settlements, also contain SSNs to help identify the correct individual. Government officials are also required by law to collect SSNs in numerous instances, and some state laws allow government entities to collect SSNs on voter registries to help avoid duplicate registrations. In addition, courts at all three levels of government also collect and maintain records that are routinely made available to the public.
SSNs appear in court documents for a variety of reasons. They may appear on documents that government officials create, such as criminal summonses, and in many cases they are already part of documents submitted by attorneys or individuals as evidence for a proceeding or as part of a petition for an action. In some cases, federal law requires that SSNs be placed in certain records that courts maintain, such as child support orders. Despite the widespread use of SSNs at all levels of government, not all agencies use SSNs. We found that some agencies do not obtain, receive, or use SSNs of program participants, service recipients, or individual members of the public. Moreover, not all agencies use the SSN as their primary identification number for record-keeping purposes. These agencies maintain an alternative number that is used in addition to or in lieu of SSNs for certain activities. Some agencies are also taking steps to limit the display of SSNs on documents that may be seen by others who have no need for this personal information. For example, the Social Security Administration has truncated individuals' SSNs that appear on the approximately 120 million benefits statements it mails each year. Some states have also passed laws prohibiting the use of SSNs as a student identification number. Almost all states have modified their policies on placing SSNs on state drivers' licenses. At the federal level, SSA has taken steps in its enumeration process and verification service to help prevent SSNs from being used to proliferate false identities. SSA has formed a task force to address weaknesses in its enumeration process and has (1) increased document verifications and developed new initiatives to prevent the inappropriate assignment of SSNs to noncitizens, and (2) undertaken initiatives to shift the burden of processing noncitizen applications from its field offices. 
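SSA's truncation of SSNs on benefit statements, described above, amounts to masking all but the last four digits. The sketch below illustrates the idea; the exact mask format shown is an assumption for illustration, not SSA's published convention.

```python
def truncate_ssn(ssn):
    """Mask all but the last four digits of a nine-digit SSN.

    The 'XXX-XX-' mask shown here is an assumed convention for
    illustration only, not SSA's exact display format.
    """
    digits = ssn.replace("-", "")
    if len(digits) != 9 or not digits.isdigit():
        raise ValueError("expected a nine-digit SSN")
    return "XXX-XX-" + digits[-4:]

print(truncate_ssn("123-45-6789"))  # XXX-XX-6789
```

Truncation preserves enough of the number for an individual to recognize a record as theirs while withholding the digits most useful to an identity thief.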
SSA also helps prevent the proliferation of false identities through its verification service, which allows state driver licensing agencies to verify the SSN, name, and date of birth of customers with SSA’s master file of Social Security records. Finally, SSA has also acted to correct deficiencies in its information systems’ internal controls. These changes were made in response to the findings of an independent audit that found that SSA’s systems were exposed to both internal and external intrusion, increasing the possibility that sensitive information such as SSNs could be subject to unauthorized access, modification, and disclosure, as well as the risk of fraud. With regard to the courts, in a prior report we suggested that Congress consider addressing SSN security and display issues in state and local government and in public records, including those maintained by the judicial branch of government at all levels. We proposed that Congress convene a representative group of officials from all levels of government to develop a unified approach to safeguard SSNs used in all levels of government and particularly those displayed in public records. Public and private entities use SSNs for many legitimate and publicly beneficial purposes. However, the more frequently SSNs are obtained and used, the more likely they are to be misused. Individuals may voluntarily provide their SSNs to the private and public sectors to obtain services, but they should be able to be confident that their personal information is safe and secure. As we continue to learn more about the entities that obtain SSNs and the purposes for which they obtain them, policy makers will be able to determine if there are ways to limit access to this valuable piece of information and prevent it from being misused. However, restrictions on access or use may make it more difficult for businesses and government agencies to verify an individual’s identity. 
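The verification service described above succeeds only when the SSN, name, and date of birth all agree with SSA's records. A hypothetical sketch of that all-fields-must-match check follows; the record layout and field names are invented for illustration.

```python
def verify(applicant, master_file):
    """Return True only if the applicant's SSN exists in the master file
    and the name and date of birth on record both match.

    The master_file layout (dict keyed by SSN) is a hypothetical stand-in
    for SSA's actual record system.
    """
    record = master_file.get(applicant["ssn"])
    return (record is not None
            and record["name"] == applicant["name"]
            and record["dob"] == applicant["dob"])

master_file = {"123-45-6789": {"name": "Jane Doe", "dob": "1970-01-01"}}

print(verify({"ssn": "123-45-6789", "name": "Jane Doe", "dob": "1970-01-01"},
             master_file))  # True
print(verify({"ssn": "123-45-6789", "name": "Jane Roe", "dob": "1970-01-01"},
             master_file))  # False
```

Requiring all three fields to match makes a stolen SSN alone insufficient to pass verification.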
Accordingly, policy makers will have to balance the potential benefits of restrictions on the use of SSNs on the one hand with the impact on legitimate needs for the use of SSNs on the other. We are continuing our work on protecting the privacy of SSNs in the private and public sectors, and we are pleased that this Subcommittee is considering this important policy issue. That concludes my testimony, and I would be pleased to respond to any questions the subcommittee has. For further information regarding this testimony, please contact Barbara D. Bovbjerg, Director, or Tamara Cross, Assistant Director, at (202) 512-7215. GLBA requires companies to give consumers privacy notices that explain the institutions' information-sharing practices. In turn, consumers have the right to limit some, but not all, sharing of their nonpublic personal information. Financial institutions are permitted to disclose consumers' nonpublic personal information without offering them an opt-out right in the following circumstances: to effect a transaction requested by the consumer in connection with a financial product or service requested by the consumer; maintaining or servicing the consumer's account with the financial institution or another entity as part of a private label credit card program or other extension of credit; or a proposed or actual securitization, secondary market sale, or similar transaction; with the consent or at the direction of the consumer; to protect the confidentiality or security of the consumer's records; to prevent actual or potential fraud, for required institutional risk control, or for resolving customer disputes or inquiries; to persons holding a legal or beneficial interest relating to the consumer, or to the consumer's fiduciary; to provide information to insurance rate advisory organizations, guaranty funds or agencies, rating agencies, industry standards agencies, and the institution's attorneys, accountants, and auditors; to the extent specifically 
permitted or required under other provisions of law and in accordance with the Right to Financial Privacy Act of 1978, to law enforcement agencies, self-regulatory organizations, or for an investigation on a matter related to public safety; to a consumer reporting agency in accordance with the Fair Credit Reporting Act or from a consumer report reported by a consumer reporting agency; in connection with a proposed or actual sale, merger, transfer, or exchange of all or a portion of a business if the disclosure concerns solely consumers of such business; to comply with federal, state, or local laws, or with an investigation or subpoena; or to respond to judicial process or government regulatory authorities. Financial institutions are required by GLBA to disclose to consumers at the initiation of a customer relationship, and annually thereafter, their privacy policies, including their policies with respect to sharing information with affiliates and non-affiliated third parties. Provisions under GLBA place limitations on financial institutions' disclosure of customer data, thus affecting some CRAs and information resellers. We found that some CRAs consider themselves to be financial institutions under GLBA. These entities are therefore directly governed by GLBA's restrictions on disclosing nonpublic personal information to non-affiliated third parties. We also found that some of the information resellers we spoke to did not consider their companies to be financial institutions under GLBA. However, because they have financial institutions as their business clients, they complied with GLBA's provisions in order to better serve their clients and ensure that their clients remain in compliance with GLBA. For example, if information resellers received information from financial institutions, they could resell the information only to the extent consistent with the privacy policy of the originating financial institution. 
Information resellers and CRAs also said that they protect the use of nonpublic personal information and do not provide such information to individuals or unauthorized third parties. In addition to imposing obligations with respect to the disclosures of personal information, GLBA also requires federal agencies responsible for overseeing financial institutions to adopt appropriate standards for financial institutions relating to safeguarding customer records and information. Information resellers and CRA officials said that they adhere to GLBA's standards in order to secure financial institutions' information. The DPPA specifies a list of exceptions under which personal information contained in a state motor vehicle record may be obtained and used (18 U.S.C. § 2721(b)). These permissible uses include: for use by any government agency in carrying out its functions; for use in connection with matters of motor vehicle or driver safety and theft; motor vehicle emissions; motor vehicle product alterations, recalls, or advisories; motor vehicle market research activities, including survey research; for use in the normal course of business by a legitimate business, but only to verify the accuracy of personal information submitted by the individual to the business and, if such information is not correct, to obtain the correct information, but only for purposes of preventing fraud by pursuing legal remedies against, or recovering on a debt or security interest against, the individual; for use in connection with any civil, criminal, administrative, or arbitral proceeding in any federal, state, or local court or agency; for use in research activities; for use by any insurer or insurance support organization in connection with claims investigation activities; for use in providing notice to the owners of towed or impounded vehicles; for use by a private investigative agency for any purpose permitted under the DPPA; for use by an employer or its agent or insurer to obtain information relating to 
the holder of a commercial driver's license; for use in connection with the operation of private toll transportation facilities; for any other use, if the state has obtained the express consent of the person to whom a request for personal information pertains; for bulk distribution of surveys, marketing, or solicitations, if the state has obtained the express consent of the person to whom such personal information pertains; for use by any requester, if the requester demonstrates that it has obtained the written consent of the individual to whom the information pertains; for any other use specifically authorized under a state law, if such use is related to the operation of a motor vehicle or public safety. As a result of DPPA, information resellers said they were restricted in their ability to obtain SSNs and other driver license information from state motor vehicle offices unless they were doing so for a permissible purpose under the law. These officials also said that any use of information obtained from a consumer's motor vehicle record must comply with DPPA's permissible purposes, which restricts their ability to resell motor vehicle information to individuals or entities not allowed to receive such information under the law. Furthermore, because DPPA restricts state motor vehicle offices' ability to disclose driver license information, which includes SSN data, information resellers said they no longer try to obtain SSNs from state motor vehicle offices, except for permissible purposes. The HIPAA privacy rule also defines some rights and obligations for both covered entities and individual patients and health plan members. Some of the highlights are: Individuals must give specific authorization before health care providers can use or disclose protected information in most nonroutine circumstances, such as releasing information to an employer or for use in marketing activities. 
Covered entities will need to provide individuals with written notice of their privacy practices and patients' privacy rights. The notice will contain information that could be useful to individuals choosing a health plan, doctor, or other service provider. Patients will generally be asked to sign or otherwise acknowledge receipt of the privacy notice. Covered entities must obtain an individual's specific authorization before sending them marketing materials. Health care organizations, including health care providers and health plan insurers, are subject to HIPAA's requirements. In addition to providing individuals with privacy notices, health care organizations are also restricted from disclosing a patient's health information without the patient's consent, except for purposes of treatment, payment, or other health care operations. Information resellers and CRAs did not consider themselves to be “covered entities” under HIPAA, although some information resellers said that their customers are considered to be business associates under HIPAA. As a result, they said they are obligated to operate under HIPAA's standards for privacy protection and therefore could not resell medical information without having made sure HIPAA's privacy standards were met. Congress has limited the use of consumer reports to protect consumers' privacy. All users must have a permissible purpose under the FCRA to obtain a consumer report (15 U.S.C. § 1681b). 
These permissible purposes are: as ordered by a court or a federal grand jury subpoena; as instructed by the consumer in writing; for the extension of credit as a result of an application from a consumer, or the review or collection of a consumer's account; for employment purposes, including hiring and promotion decisions, where the consumer has given written permission; for the underwriting of insurance as a result of an application from a consumer; when there is a legitimate business need, in connection with a business transaction that is initiated by the consumer; to review a consumer's account to determine whether the consumer continues to meet the terms of the account; to determine a consumer's eligibility for a license or other benefit granted by a governmental instrumentality required by law to consider an applicant's financial responsibility or status; for use by a potential investor or servicer, or current insurer, in a valuation or assessment of the credit or prepayment risks associated with an existing credit obligation; and for use by state and local officials in connection with the determination of child support payments, or modifications and enforcement thereof. FCRA thus limits access to credit data to those who have a legally permissible purpose for using the data, such as the extension of credit, employment purposes, or underwriting insurance; these limits, however, are not specific to SSNs. All of the CRAs that we spoke to said that they are considered consumer reporting agencies under FCRA. In addition, some of the information resellers we spoke to who handle or maintain consumer reports are classified as CRAs under FCRA. Both CRAs and information resellers said that as a result of FCRA's restrictions they are limited to providing credit data to those customers that have a permissible purpose under FCRA. 
Consequently, they are restricted by law from providing such information to the general public. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In 1936, the Social Security Administration (SSA) established the Social Security number (SSN) to track workers' earnings for Social Security benefit purposes. Today, private and public sector entities frequently ask individuals for SSNs in order to conduct their businesses and sometimes to comply with federal laws. Although uses of SSNs can be beneficial to the public, SSNs are also a key piece of information in creating false identities, either for financial misuse or for assuming an individual's identity. The retention of SSNs in the public and private sectors can create opportunities for identity theft. In addition, the aggregation of personal information, such as SSNs, in large corporate databases, as well as the public display of SSNs in various records accessed by the public, may provide criminals the opportunity to easily obtain this personal information. Given the heightened awareness of identity crimes, this testimony focuses on describing (1) how private sector entities obtain, use, and protect SSNs, and (2) public sector uses and protections of SSNs. Private sector entities rely extensively on SSNs. We reported earlier this year that entities such as information resellers, consumer reporting agencies, and health care organizations routinely obtain SSNs from their business clients and public sources, such as government records that can be displayed to the public. These entities then use SSNs for various purposes, such as to verify an individual's identity or to match existing records, and have come to rely on the SSN as an identifier, which helps them determine a person's identity for the purpose of providing the services they offer. There is no single federal law that regulates the overall use or restricts the disclosure of SSNs by private sector entities. 
However, certain federal laws have helped to place restrictions on the disclosures of personal information private sector entities are allowed to make to their customers, and certain states have enacted laws to restrict the private sector's use of SSNs. Public sector entities also extensively use SSNs. All three levels of government use the SSN to comply with certain federal laws and regulations, as well as for their own purposes. These agencies rely on the SSN to manage records, verify benefit eligibility, collect outstanding debt, and conduct research and program evaluations. In addition, given the open nature of certain government records, SSNs appear in records displayed to the public such as documents that record financial transactions or court documents. Despite the widespread reliance on and use of SSNs, government agencies are taking steps to safeguard the SSN. For example, some agencies are not using the SSN as the primary identification number. In a previous report, we proposed that Congress consider developing a unified approach to safeguarding SSNs used in all levels of government and particularly those displayed in public records, and we continue to believe that this approach has merit. The use of SSNs by both private and public sector entities is likely to continue, but the more frequently SSNs are used, the more likely they are to be misused given the continued rise in identity crimes. In considering restrictions to SSN use, policy makers will have to balance the protections that could occur from such restrictions with legitimate business needs for the use of SSNs.
Legislative and executive branch action has led to a variety of governmentwide and agency-specific initiatives, both completed and ongoing, to enhance homeland security. Establishment of an Office of Homeland Security and the office's planned national strategy represent important governmentwide initiatives to address homeland security concerns. The planned production of new vaccines and expansion of existing vaccine supplies, additional intergovernmental planning and consequence management efforts, and enhancements to aviation, seaport, and border security suggest progress in enhancing homeland security. Moreover, Congress appropriated about $19.5 billion in fiscal year 2002, plus about another $9.8 billion through a $40 billion emergency supplemental appropriation enacted after September 11, to help address homeland security concerns. The president has requested about $37.7 billion for fiscal year 2003 for homeland security. In October 2001, the president established a single focal point to coordinate efforts to secure the United States from terrorist attacks—the Office of Homeland Security. This is consistent with a recommendation that we had previously made. The office is charged with broad responsibilities including, but not limited to, (1) working with federal agencies, state and local governments, and private entities to develop a national strategy and to coordinate implementation of the strategy; (2) overseeing prevention, crisis-management, and consequence-management activities; (3) coordinating threat and intelligence information; (4) reviewing governmentwide budgets for homeland security as well as providing advice to agencies and the Office of Management and Budget on appropriate levels of funding; and (5) coordinating critical infrastructure protection. The office plans to issue its national strategy in July 2002. 
The strategy is to be “national” in scope, not only by including states, localities, and private-sector entities as well as federal agencies, but also by setting clear objectives for homeland security with performance measures to gauge progress. Also, the plan is to be supported by a crosscutting federal budget plan. In previous work on combating terrorism, we had also recommended that the Federal Bureau of Investigation work with appropriate agencies to develop a national-level threat assessment on terrorist use of weapons of mass destruction. The bureau concurred in July 1999 but never issued the assessment and has now suspended the effort. We continue to believe that the threat assessment is needed. Progress has been made and efforts are continuing to enhance U.S. capability to respond to biological terrorism. Research is underway to enable the rapid identification of biological agents in a variety of settings; develop new or improved vaccines, antibiotics, and antivirals to improve treatment and vaccination for infectious diseases caused by biological agents; and develop and test emergency response equipment such as respiratory and other personal protective equipment. Another initiative includes the production of 155 million doses of smallpox vaccine to bring the total number of doses in the nation's stockpile to 286 million by the end of 2002, which is enough to protect every U.S. citizen. In addition, the National Institutes of Health plans to award a contract to accelerate development of new vaccines against anthrax. The number of “push packages” in the National Pharmaceutical Stockpile will increase from 8 to 12. Each push package has quantities of several different antidotes and antibiotics that can treat and protect persons exposed to different biological and chemical agents. 
The push packages are planned to have enough pharmaceuticals to treat 12 million persons for inhalation anthrax, as compared to the 2 million that could be treated before the project started. Finally, Mr. Chairman, the concerns you raised prior to September 11, 2001, about accountability over medical supplies, including items from the National Pharmaceutical Stockpile, put responsible agencies on alert, and they have subsequently improved their internal controls for these items so they are current, accounted for, and ready to use. As you know, Mr. Chairman, federal, state, and local governments share a responsibility to prepare for a terrorist incident. The first responders to a terrorist incident usually belong to local governments and local emergency response organizations, which include local police and fire departments, emergency medical personnel, and public health agencies. Historically, the federal government has primarily provided leadership, training, and funding assistance. The president's First Responder Initiative was announced in his State of the Union address of January 29, 2002. The initiative will be led by the Federal Emergency Management Agency, and the president's proposed fiscal year 2003 budget includes $3.5 billion to provide the first responder community with funds to conduct important planning and exercises, purchase equipment, and train their personnel. At the request of the Subcommittee on Government Efficiency, Financial Management, and Intergovernmental Relations, House Committee on Government Reform, we have begun to examine the preparedness issues confronting state and local governments and will report back to the subcommittee later this year. Progress has been made in addressing aviation security concerns, but significant challenges will need to be confronted later this year to meet established goals and time frames. 
The Congress passed the Aviation and Transportation Security Act in November 2001, which created the Transportation Security Administration with broad new responsibilities for aviation security. The administration faces the daunting challenge of creating this new organizational structure, which must implement more than two dozen specific actions by the end of 2002. All actions due to date have been completed, but formidable tasks remain. For example, the administration is required to have sufficient explosive detection systems in place to screen all checked baggage at more than 400 airports nationwide by December 31, 2002. As of January 2002, fewer than 170 of these machines had been installed. The administration estimates that about 2,000 additional machines will need to be produced and installed by the end of the year. Concerns have been raised that the vendors will not be able to produce a sufficient number of machines to meet the deadline. The administration continues to work to identify ways to fill the gap between the requirement and the production capability, including considering the use of noncertified equipment as an interim measure. Also, the administration needs to hire about 40,000 employees, including more than 30,000 screeners, federal air marshals, and other officials. Achieving this goal presents a big challenge because a significant number of the current screening workforce may not qualify for screening positions. Airport screeners must now be U.S. citizens and be able to speak and read English. For example, currently up to 80 percent of the personnel in these positions at Dulles International Airport in Washington, D.C., do not qualify for employment. While not currently as high-profile as airport security, the vulnerability of major commercial seaports to criminal and terrorist activity has caused concern for many years, and the terrorist attacks on September 11, 2001, elevated those concerns again. 
Even prior to the attacks, this subcommittee expressed concerns about seaport security and the potential consequences of a terrorist attack on the successful deployment of our military forces. Because of these concerns, you asked us to examine the effectiveness of Department of Defense force protection measures at critical seaports located within the United States and at overseas locations, and we will issue our report to you later this year. As part of our work, some of which I can highlight today, we have observed efforts by the Coast Guard to improve seaport security since the attacks. In order to establish a clear indication of how Coast Guard units and personnel should respond to various threat levels at seaports, the Coast Guard is developing three new maritime security levels. The first level, “new normal,” will encompass a greater level of security effort in the ports, including increased emphasis on security patrols, improved awareness of all activity in and around seaports, and better information about inbound vessels and their cargo. The other two security levels will contain increasingly heightened security measures to be taken if threat conditions escalate. The Coast Guard has also initiated the “sea marshal” program, whereby armed Coast Guard teams are placed aboard select commercial vessels navigating the waters of some of our major ports. A third Coast Guard initiative underway is the development of a vulnerability assessment methodology that the Coast Guard plans to use at more than 50 major U.S. seaports to identify vulnerabilities of critical infrastructure at each port. Congress is considering legislation to enhance seaport security. The port and maritime security legislation, which passed the Senate in December, contains a number of provisions aimed at further improving the state of seaport security. 
Among these provisions are establishing local port security committees, composed of a broad range of federal, state, and local government as well as commercial representatives; requiring vulnerability assessments at major U.S. seaports; developing comprehensive security plans for all waterfront facilities; improving collection and coordination of intelligence; improving training for maritime security professionals; making federal grants for security infrastructure improvements; and preparing a national maritime transportation security plan. Moreover, for fiscal year 2002, Congress appropriated $93.3 million to the Transportation Security Administration for port security assessment and improvements. The Immigration and Naturalization Service (INS) has a number of efforts underway designed to increase border security to prevent terrorists or other undesirable aliens from entering the United States. The service proposes to spend nearly $3 billion on border enforcement in fiscal year 2003, about 75 percent of its total enforcement budget of $4.1 billion. I will describe some of the service's efforts to increase security at the nation's ports of entry and between the ports, as well as to coordinate efforts with Canadian authorities to deter illegal entry into Canada or the United States. Currently, the United States has neither a system for identifying visitors who have overstayed their visas nor a sufficient ability to identify and locate visitors who may pose a security threat. Consequently, INS is developing an entry and exit system to create records for aliens arriving in the United States and match them with those aliens' departure records. The Immigration and Naturalization Service Data Management Improvement Act of 2000 requires the attorney general to implement such a system at all airports and seaports by the end of 2003, at the 50 land border ports with the greatest numbers of arriving and departing aliens by the end of 2004, and at all ports by the end of 2005. 
The USA Patriot Act, passed in October 2001, instructs the attorney general and the secretary of state to focus on two new elements in designing an entry and exit system—the development of tamper-resistant documents readable at ports of entry, and the utilization of biometric technology. Legislation now before Congress would go further by making the use of biometrics a requirement in the proposed entry and exit system. Implementing such a system within the mandated deadlines represents a major challenge for the INS. According to INS officials, important policy decisions significantly affecting the development, cost, schedule, and operation of an entry and exit system have yet to be made. For example, it has not been decided whether arrival and departure data for Canadian citizens will be recorded in the new system. Currently, Canadian citizens are not required to present documents to enter the United States. The particular biometric identifier to be used, such as a fingerprint or facial recognition, has not been determined. Nor has a decision been made on whether a traveler's biometric would be checked only upon entry, or at departure, too. The INS' proposed fiscal year 2003 budget states that INS seeks to spend $380 million on the proposed system in fiscal year 2003. To increase the detection and apprehension of inadmissible aliens, including terrorists, at the nation's ports of entry, the service seeks to add nearly 1,200 inspectors in fiscal year 2003 to operate more inspection lanes at land and air ports of entry and examine information on arriving passengers in order to identify high-risk travelers. To deter illegal entry between the ports of entry and make our borders more secure, the INS seeks to add an additional 570 Border Patrol agents in fiscal year 2003. In response to the September 11 attacks, INS now seeks to deploy 285 of the 570 new Border Patrol agents to the northern border, thereby accelerating a staffing buildup there. 
The remaining 285 agents will be deployed to the southwest border. This represents a departure from previous decisions to deploy most new agent positions to the southwest border. Along the northern border, the service plans to maintain an air surveillance program capable of responding 24 hours a day, 7 days a week. It also plans to complete the installation of 67 automated surveillance systems and begin construction of 44 new systems. In addition, the INS has signed a memorandum of agreement with the Department of Defense allowing about 700 National Guard troops and equipment, such as helicopters, to assist in border enforcement duties for up to 6 months. The agreement allows the use of the troops for such activities as assisting in surveillance, transporting Border Patrol agents, and managing traffic at ports of entry. In December 2001, the United States and Canada signed a Smart Border Declaration calling for increased coordination to create a border that facilitates the free flow of people and commerce while maintaining homeland security. The declaration calls for such actions as (1) implementing collaborative systems to identify security risks while expediting the flow of low-risk travelers, (2) identifying persons who pose a security threat before they arrive at North American airports or seaports through collaborative approaches such as reviewing crew and passenger manifests, and (3) establishing a secure system to allow low-risk frequent travelers between the two countries to cross the border more efficiently. The INS and other U.S. and Canadian agencies are in the initial stages of developing plans and initiatives to implement the declaration’s objectives. Congress has also acted and provided significant homeland security funds. According to documents supporting the president’s fiscal year 2003 budget request, about $19.5 billion in federal funding for homeland security was enacted in fiscal year 2002.
Congress added about $9.8 billion more in an emergency supplemental appropriation of $40 billion following the September 11 attacks. The funds were to be used for a variety of homeland security needs, including supporting first responders, defending against biological terrorism, securing U.S. borders, enhancing aviation security, and funding Department of Defense support to homeland security. The president has now requested about $37.7 billion for homeland security in his fiscal year 2003 budget request. Our ongoing work indicates that federal agencies, state and local governments, and the private sector are looking for guidance from the Office of Homeland Security on how to better integrate their missions and more effectively contribute to the overarching homeland security effort. In interviews with officials at more than a dozen federal agencies, we found that a broadly accepted definition of homeland security did not exist. Some of these officials believed that it was essential that the concept and related terms be defined, particularly because homeland security initiatives are crosscutting, and a clear definition promotes a common understanding of operational plans and requirements and can help avoid duplication of effort and gaps in coverage. Common definitions promote more effective agency and intergovernmental operations and permit more accurate monitoring of homeland security expenditures at all levels of government. The Office of Homeland Security may establish such a definition. The Office of Management and Budget believes a single definition of homeland security can be used to enforce budget discipline. Although some agencies are looking to the Office of Homeland Security for guidance on how their agencies should be integrated into the overall security effort and to explain what else they should be doing beyond their traditional missions, they also want their viewpoints incorporated as this guidance evolves.
For example, an official at the Centers for Disease Control and Prevention saw the Office of Homeland Security as both providing leadership and getting “everyone to the table” to facilitate a common understanding of roles and responsibilities. State officials told us that they also seek additional clarity on how they can best participate in the planned national strategy for homeland security. The planned national strategy should identify additional roles for state and local governments, but the National Governors Association made clear to us that state governments oppose mandated participation and prefer broad guidelines or benchmarks. State officials were also concerned about the cost of assuming additional responsibilities, and they plan to rely on the federal government for funding assistance. The National Governors Association estimates fiscal year 2002 state budget shortfalls of between $40 billion and $50 billion, making it increasingly difficult for the states to take on expensive new homeland security initiatives without federal assistance. As we address the state fiscal issues through grants and other tools, we must (1) consider targeting the funds to states and localities with the greatest need, (2) discourage the replacement of state and local funds with federal funds, and (3) strike a balance between accountability and flexibility. State and local governments believe that to function as partners in homeland security they need better access to threat information. Officials at the National Emergency Management Association, which represents state and local emergency management personnel, stated that such personnel experienced problems receiving critical intelligence information and that this hampered their ability to help pre-empt terrorists before they strike.
According to these officials, certain state or local emergency management personnel, emergency management directors, and certain fire and police chiefs hold security clearances granted by the Federal Emergency Management Agency; however, other federal agencies, such as the Federal Bureau of Investigation, do not recognize these clearances. Moreover, the National Governors Association said that intelligence sharing is a problem between the federal government and the states. The association explained that most governors do not have a security clearance and, therefore, do not receive classified threat information, potentially affecting their ability to effectively use the National Guard and hampering their emergency preparedness capability. On the other hand, we were told that local Federal Bureau of Investigation offices in most states have a good relationship with the emergency management community and at times share sensitive information under certain circumstances. The private sector is also concerned about costs, but in the context of new regulations to promote security. In our discussions, officials from associations representing the banking, electrical energy, and transportation sectors expressed the conviction that their member companies desire to fully participate as partners in homeland security programs. These associations represent major companies that own infrastructure critical to the functioning of our nation’s economy. For example, the North American Electric Reliability Council is the primary point of contact with the federal government on issues relating to the security of the nation’s electrical infrastructure. It has partnered with the Federal Bureau of Investigation and the Department of Energy to establish threat levels that it in turn shares with utility companies within its organization.
Such partnerships are essential, but the private sector may be reluctant to embrace them because of concern over new and excessive regulation, even though their assets might be better protected. According to National Industrial Transportation League officials, for example, transport companies express a willingness to adopt prudent security measures such as increased security checks in loading areas and security checks for carrier drivers. However, the league is concerned that the cost of additional layers of security could cripple member companies’ ability to conduct business and believes that a line has to be drawn between security and the openness needed to conduct business. If it is to be comprehensive, a national strategy should address many of these issues. Once the homeland security strategy is developed, participating public and private sector organizations will need to understand and prepare for their defined roles under the strategy. In that connection, Y2K-style partnerships can be helpful. While the federal government can assign roles to federal agencies under the strategy, it will need to reach consensus with the other levels of government and with the private sector on their roles. As you know, Mr. Chairman, the world was concerned about the potential for computer failures at the start of the year 2000, known as Y2K. The recognition of the interconnectedness of critical information systems led to the conclusion that a coordinated effort was needed to address the problem. Consequently, Congress, the administration, federal agencies, state and local governments, and private sector organizations collaborated to address Y2K issues and prevent the potential disruption that could have resulted from widespread computer failure. Similarly, the homeland security strategy is intended to include federal, state, and local government agencies and private sector entities working collaboratively, as they did in addressing Y2K issues.
The Y2K task force approach may offer a model for developing the public-private partnerships necessary under a comprehensive homeland security strategy. A massive mobilization with federal government leadership was undertaken in connection with Y2K, which included partnerships with state, local, and international governments and the private sector and effective communication to address critical issues. Government actions went beyond the boundaries of individual programs or agencies and involved governmentwide oversight, interagency cooperation, and cooperation among federal, state, and local governments as well as with private sector entities and even foreign countries. These broad efforts can be grouped into five categories: (1) congressional oversight of agencies to hold them accountable for demonstrating progress and to heighten public awareness of the problem; (2) central leadership and coordination to ensure that federal systems were ready for the date change, to coordinate efforts primarily with the states, and to promote private-sector and foreign-government action; (3) partnerships within the intergovernmental system and with private entities, divided into key economic sectors, to address such issues as contingency planning; (4) communications to share information on the status of systems, products, and services, and to share recommended solutions; and (5) human capital and budget initiatives to help ensure that the government could recruit and retain the technical expertise needed to convert systems, communicate with the other partners, and fund conversion operations. As we reported in September 2000, the value of federal leadership, oversight, and partnerships was repeatedly cited as a key to success in addressing Y2K issues at a Lessons Learned summit that was broadly attended by representatives from public and private sector entities.
Developing a homeland security plan may require a similar level of leadership, oversight, and partnerships with state and local governments and the private sector. In addition, as in the case of Y2K efforts, congressional oversight will be very important in connection with the design and implementation of the homeland security strategy. Mr. Chairman, this concludes my prepared statement. I would be happy to answer any questions you or members of the subcommittee may have. Please contact me at (202) 512-4300 for more information. Raymond J. Decker, Brian J. Lepore, Stephen L. Caldwell, Lorelei St. James, Patricia Sari-Spear, Kim Seay, William J. Rigazio, Matthew W. Ullengren, Deborah Colantonio, and Susan Woodward made key contributions to this statement. Homeland Security: Challenges and Strategies in Addressing Short- and Long-Term National Needs (GAO-02-160T, November 7, 2001). Homeland Security: A Risk Management Approach Can Guide Preparedness Efforts (GAO-02-208T, October 31, 2001). Homeland Security: Need to Consider VA’s Role in Strengthening Federal Preparedness (GAO-02-145T, October 15, 2001). Homeland Security: Key Elements of a Risk Management Approach (GAO-02-150T, October 12, 2001). Homeland Security: A Framework for Addressing the Nation’s Issues (GAO-01-1158T, September 21, 2001). Combating Terrorism: Key Aspects of a National Strategy to Enhance State and Local Preparedness (GAO-02-483T, March 1, 2002). Combating Terrorism: Considerations For Investing Resources in Chemical and Biological Preparedness (GAO-01-162T, October 17, 2001). Combating Terrorism: Selected Challenges and Related Recommendations (GAO-01-822, September 20, 2001). Combating Terrorism: Actions Needed to Improve DOD’s Antiterrorism Program Implementation and Management (GAO-01-909, September 19, 2001). Combating Terrorism: Comments on H.R. 525 to Create a President’s Council on Domestic Preparedness (GAO-01-555T, May 9, 2001).
Combating Terrorism: Observations on Options to Improve the Federal Response (GAO-01-660T, April 24, 2001). Combating Terrorism: Comments on Counterterrorism Leadership and National Strategy (GAO-01-556T, March 27, 2001). Combating Terrorism: FEMA Continues to Make Progress in Coordinating Preparedness and Response (GAO-01-15, March 20, 2001). Combating Terrorism: Federal Response Teams Provide Varied Capabilities: Opportunities Remain to Improve Coordination (GAO-01-14, November 30, 2000). Combating Terrorism: Need to Eliminate Duplicate Federal Weapons of Mass Destruction Training (GAO/NSIAD-00-64, March 21, 2000). Combating Terrorism: Observations on the Threat of Chemical and Biological Terrorism (GAO/T-NSIAD-00-50, October 20, 1999). Combating Terrorism: Need for Comprehensive Threat and Risk Assessments of Chemical and Biological Attack (GAO/NSIAD-99-163, September 7, 1999). Combating Terrorism: Observations on Growth in Federal Programs (GAO/T-NSIAD-99-181, June 9, 1999). Combating Terrorism: Analysis of Potential Emergency Response Equipment and Sustainment Costs (GAO/NSIAD-99-151, June 9, 1999). Combating Terrorism: Use of National Guard Response Teams Is Unclear (GAO/NSIAD-99-110, May 21, 1999). Combating Terrorism: Observations on Federal Spending to Combat Terrorism (GAO/T-NSIAD/GGD-99-107, March 11, 1999). Combating Terrorism: Opportunities to Improve Domestic Preparedness Program Focus and Efficiency (GAO/NSIAD-99-3, November 12, 1998). Combating Terrorism: Observations on the Nunn-Lugar-Domenici Domestic Preparedness Program (GAO/T-NSIAD-99-16, October 2, 1998). Combating Terrorism: Threat and Risk Assessments Can Help Prioritize and Target Program Investments (GAO/NSIAD-98-74, April 9, 1998). Combating Terrorism: Spending on Governmentwide Programs Requires Better Management and Coordination (GAO/NSIAD-98-39, December 1, 1997).
Bioterrorism: The Centers for Disease Control and Prevention’s Role in Public Health Protection (GAO-02-235T, November 15, 2001). Bioterrorism: Review of Public Health and Medical Preparedness (GAO-02-149T, October 10, 2001). Bioterrorism: Public Health and Medical Preparedness (GAO-02-141T, October 10, 2001). Bioterrorism: Coordination and Preparedness (GAO-02-129T, October 5, 2001). Bioterrorism: Federal Research and Preparedness Activities (GAO-01-915, September 28, 2001). Chemical and Biological Defense: Improved Risk Assessments and Inventory Management Are Needed (GAO-01-667, September 28, 2001). West Nile Virus Outbreak: Lessons for Public Health Preparedness (GAO/HEHS-00-180, September 11, 2000). Need for Comprehensive Threat and Risk Assessments of Chemical and Biological Attacks (GAO/NSIAD-99-163, September 7, 1999). Chemical and Biological Defense: Program Planning and Evaluation Should Follow Results Act Framework (GAO/NSIAD-99-159, August 16, 1999). Combating Terrorism: Observations on Biological Terrorism and Public Health Initiatives (GAO/T-NSIAD-99-112, March 16, 1999). Disaster Assistance: Improvement Needed in Disaster Declaration Criteria and Eligibility Assurance Procedures (GAO-01-837, August 31, 2001). Federal Emergency Management Agency: Status of Achieving Key Outcomes and Addressing Major Management Challenges (GAO-01-832, July 9, 2001). FEMA and Army Must Be Proactive in Preparing States for Emergencies (GAO-01-850, August 13, 2001). Results-Oriented Budget Practices in Federal Agencies (GAO-01-1084SP, August 2001).
Enhancing homeland security is a complex effort that involves all 50 states, the District of Columbia, and the territories; thousands of municipalities; and countless private entities. Since September 11, the nation has taken many actions to combat terrorism and enhance homeland security. It is well known that the U.S. military is conducting operations in Afghanistan. Various legislative and executive branch actions to enhance homeland security have been taken or were underway prior to and since September 11. Government and nongovernment entities are looking to the Office of Homeland Security for further guidance on how to better integrate their missions and more effectively contribute to the overarching homeland security effort. Having a common definition of homeland security can help avoid duplication of effort and gaps in coverage by identifying agency roles and responsibilities. Although the agencies are looking for guidance, they also want to ensure that their unique missions are factored in as guidance is developed. At the same time, some agencies are unsure what they should be doing beyond their traditional missions. Once the national strategy is issued, federal, state, and local government agencies and private sector groups will need to work together to achieve its goals and objectives. Public-private partnerships used to address Y2K concerns can also be used to promote the national strategy.
The small size, portability, and potential value of sealed radiological sources make them vulnerable to misuse, improper disposal, and theft. According to IAEA, confirmed reports of illicit trafficking in radiological materials have increased since 2002. For example, in 2004, about 60 percent of the cases involved radiological materials, some of which are considered by the U.S. government and IAEA to be attractive for the development of a dirty bomb. Although experts generally believe that a dirty bomb could result in a limited number of deaths, it could nonetheless have severe economic consequences. Depending on its type, amount, and form, the dispersed radiological material could cause radiation sickness for people nearby and produce serious economic, psychological, and social disruption associated with the evacuation and subsequent cleanup of the contaminated area. Although no dirty bombs have been detonated, in the mid-1990s, Chechen separatists placed a canister containing cesium-137 in a Moscow park. While the device was not detonated and no radiological material was dispersed, the incident demonstrated that terrorists have the capability and willingness to use radiological sources as weapons of terror. A 2004 study by the National Defense University noted that the economic impact on a major populated area from a successful dirty bomb attack is likely to equal, and perhaps exceed, that of the September 11, 2001, attacks on New York City and Washington, D.C. According to another study, the economic consequences of detonating a series of dirty bombs at U.S. ports, for example, would result in an estimated $58 billion in losses to the U.S. economy. A dirty bomb attack could also produce significant health consequences.
In 2002, the Federation of American Scientists concluded that an americium radiological source combined with one pound of explosives would require medical supervision and monitoring for the entire population of an area 10 times larger than the initial blast zone. The consequences resulting from the improper use of radiological sources are not theoretical. Some actual incidents involving sources can provide a measure of understanding of what could happen in the case of a dirty bomb attack. In 1987, an accident involving a teletherapy machine containing about 1,400 curies of cesium-137, which is generally in the form of a powder similar to talc and highly dispersible, killed four people in Brazil and injured many more. The accident and its aftermath caused about $36 million in damages to the Goiania region where the accident occurred, according to an official from Brazil’s Nuclear Energy Commission. In addition to the deaths and economic impact, the accident created environmental and medical problems. For example, 85 houses were significantly contaminated, and 41 of these had to be evacuated. The decontamination process required the demolition of homes and other buildings and generated 3,500 cubic meters of radioactive waste. More than 8,000 people requested monitoring for contamination in order to obtain certificates stating they were not contaminated. DOE has improved the security of hundreds of sites that contain radiological sources in more than 40 countries since the program’s inception in 2002. However, despite these achievements, such as removing dangerous sources from a waste storage facility in Chechnya, many of the high-risk and most dangerous sources remain unsecured, particularly in Russia. DOE officials told us that the program has barely “scratched the surface” in terms of securing the most dangerous sources in the former Soviet Union.
Specifically, 16 of 20 waste storage facilities across Russia and Ukraine remain unsecured, and more than 700 RTGs remain operational or abandoned in Russia, where they are vulnerable to theft or potential misuse. In 2003, when DOE decided to broaden the program’s scope beyond the former Soviet Union, it also expanded the types of sites that required security upgrades. As a result, as of September 2006, almost 70 percent of all sites secured were medical facilities, which generally contain one radiological source. In addition, DOE’s program does not address the transportation of radiological sources from one location to another, a security measure that DOE and international officials have identified as the most vulnerable link in the radiological supply chain. DOE has experienced numerous problems and challenges implementing its program to secure radiological sources worldwide, including a lack of cooperation from host country officials. Finally, DOE has not developed an adequate plan to ensure that countries receiving security upgrades will be able to sustain them once installed. Since DOE began its program in 2002, it has taken steps to secure radiological sources in more than 40 countries and has achieved some noteworthy accomplishments. For example, DOE told us that it has (1) facilitated the removal of 5,500 curies of cobalt-60 and cesium-137 sources from a poorly protected nuclear waste repository in Chechnya, the location of continuing political unrest in southeastern Russia; (2) constructed storage facilities in Uzbekistan, Moldova, Tajikistan, and Georgia so that sources can be consolidated at one site to strengthen their long-term protection; and (3) increased security at 21 sites in Greece prior to the 2004 Olympics, including providing 110 hand-held radiation detection devices for first responders.
DOE secured, among other things, facilities with blood irradiators containing cesium chloride and a large industrial sterilization facility. According to DOE, it has neither the resources nor the staff to comprehensively address and secure the tens of thousands of vulnerable radiological sources worldwide on its own. As a result, it has enlisted the support of regional partners and IAEA to implement programs to help other countries find, characterize, and secure their most dangerous sources. DOE works with partner countries to identify sites where high-risk sources may be located and provides the equipment and training to conduct searches. Once the sources have been located, DOE enlists the support of IAEA or partner countries to transfer them to a secure facility. For example, DOE established a regional partnership with Lithuania to facilitate orphan source recovery efforts both in Lithuania and in neighboring countries. DOE purchased radiation detection equipment and trained Lithuanian specialists to initiate orphan source recovery efforts. Lithuania was able to identify 41 former Soviet military and industrial sites that potentially held high-risk radiological sources. Subsequently, Lithuania assisted DOE in initiating search and secure efforts in Estonia and Latvia, which resulted in the discovery and disposition of orphan sources. Despite these achievements, DOE’s program has not adequately addressed many high-priority sources. In 2003, the Secretary of Energy directed NNSA to expand its program to secure radiological sources worldwide, which increased both the number of countries targeted to receive DOE assistance and the types of sites to be secured. Expanding the program into many countries outside of the former Soviet Union—the initial focus of DOE’s program—resulted in the addition of many medical facilities containing lower-priority sources that were now targeted for physical security upgrades.
As of September 30, 2006, DOE’s program had completed the installation of physical security upgrades at 368 sites in over 40 countries. However, a majority of the sites secured do not represent the highest-risk or most vulnerable sources. Of the total sites completed, 256—or about 70 percent—were hospitals and oncology clinics operating teletherapy machines used to provide radiation treatment to cancer patients. These machines generally contain a single cobalt-60 radiological source ranging from about 1,000 to 10,000 curies. In 38 of the 41 countries—or 93 percent—DOE had upgraded at least one hospital or oncology clinic. According to DOE, many of the countries that are included in its global program have medical facilities with radiological sources. As a result, these facilities were targeted for upgrades. In addition to the medical facilities, DOE has completed security upgrades at 47 research institutes, 35 commercial and industrial sites, and 30 waste storage facilities. Figure 1 depicts the countries receiving security upgrades, and table 1 provides a breakdown of the total number and types of facilities upgraded by DOE, as of September 30, 2006. Six national laboratory officials and security specialists responsible for implementing the program told us that although progress had been made in securing radiological sources, DOE had focused too much attention on securing medical facilities at the expense of other higher-priority sites, such as waste storage facilities and RTGs. In their view, DOE installed security upgrades at so many of these facilities primarily because the upgrades are relatively modest in scope and cost. For example, a typical suite of security upgrades at a medical facility costs between $10,000 and $20,000, depending on the size of the site, whereas the average cost to remove and replace an RTG in the Far East region of Russia is about $72,000 in 2006 dollars.
Officials from three of the four recipient countries we visited also raised concerns about DOE’s focus on securing radiological sources at so many medical facilities. For example, staff responsible for operating the teletherapy machines in hospitals in Lithuania and Poland told us that the cobalt-60 sources contained in the teletherapy machines did not pose a significant security risk. In their view, the sources could not easily be removed from these machines, and it would take more than one highly skilled and determined intruder to remove a source and transport it out of the facility without being detected or dangerously exposed to radiation. In fact, while emphasizing the importance of securing medical facilities, DOE officials stated that getting medical and security staff to buy into the need for improved security has been a consistent challenge for the program. Further, Russian officials told us that radiological sources in hospitals did not pose a risk comparable to that of RTGs or lost or abandoned sources. DOE has not offered to fund any security upgrades of Russian medical facilities since its funds are focused on securing RTGs, Radons, and orphan sources. According to five national laboratory officials and security specialists, completing upgrades at medical facilities also served to demonstrate rapid program progress because the upgrades are completed relatively quickly. DOE has relied upon an indicator that focuses on the number of sites that have been upgraded, or “sites secured.” While sites completed is the primary metric used by DOE, the program does compile and track several additional measures, including the amount of curies secured, countries that receive regulatory assistance, and orphan sources recovered. In measuring program performance, the Director of IRTR said that the number of sites completed demonstrates conclusively that work has been completed and represents the best available measurement.
Other high-level DOE officials we spoke with about the program consistently identified the number of sites upgraded as evidence that the program had been achieving results and reducing the threat posed by radiological sources overseas. However, PNNL and Sandia National Laboratory officials told us that the measurement used by DOE does not demonstrate how the program is reducing threats posed to U.S. national security interests. In their view, this measurement is one-dimensional and does not adequately distinguish lower-priority sites from higher-priority sites. DOE has made limited progress removing hundreds of RTGs containing high-priority sources, which, according to DOE, likely represent the largest unsecured quantity of radioactivity in the world. These devices were designed to provide electric power and are suited for remote locations to power navigational facilities such as lighthouses and meteorological stations. Each has activity levels ranging from 25,000 to 250,000 curies of strontium-90—similar to the amount of strontium-90 released from the Chernobyl nuclear reactor accident in 1986. As of September 30, 2006, DOE had funded the removal of about 13 percent of all RTGs in Russia’s inventory. As of early 2000, approximately 1,049 RTGs were in Russia. Of those, approximately 317 RTGs have been removed over the past several years, according to DOE and Russian officials. DOE funded about 40 percent of those removed (132 RTGs), and Norway, France, and Russia funded the removal of the remaining 185. However, an estimated 732 RTGs, representing several million curies of radioactivity, remain unsecured. A majority of RTGs are located along coastlines in three major regions—the Baltic, Arctic, and Far East. To date, DOE has focused the majority of its efforts on removing RTGs along the Arctic coast.
However, more than 90 RTGs remain operational along the Baltic coast under the control of the Russian Ministry of Defense; DOE does not plan to remove these RTGs. DOE officials said that the program will now focus its efforts almost exclusively on the Far East because DOE expects other countries to remove RTGs from the Baltic region. Figure 2 shows the location of the remaining RTGs in Russia, and table 2 summarizes DOE’s efforts, along with those of other countries, to remove RTGs in Russia. DOE officials told us that the Far East region is now a priority for RTG removal because Russian Ministry of Defense officials have specifically requested DOE’s assistance for the Far East and provided DOE with a prioritized list of RTGs to be removed. In addition, other countries have expressed a willingness to support future RTG removal in the Baltic region. For example, according to DOE, in February 2005 Denmark announced that it had reached an agreement with Russia to replace and remove all RTGs in the Baltic region. Other European nations, including Germany, have also offered assistance. However, Russian officials told us that assistance from Germany has not materialized and that Denmark had rescinded its offer to provide assistance. Moreover, these officials expressed concern regarding DOE’s decision to fund the removal of RTGs exclusively from the Far East region. In their view, the RTGs in the Baltic are more vulnerable and should be removed as soon as possible because of their accessibility and proximity to large population centers. According to DOE officials, if international funding for removal of these vulnerable RTGs does not materialize, IRTR will likely have to fund the Baltic effort. According to DOE and Russian officials, RTG removal is complex, and future efforts will face a number of challenges. No comprehensive inventory of RTGs exists and, as a result, the actual number of these devices is unknown.
RTGs were originally manufactured in Estonia, but the company dissolved with the collapse of the Soviet Union, and all of its records were lost. The Russian organization that originally designed the devices is currently developing a database of known RTGs in Russia—with U.S. funding and support—to reconstruct the records and develop a reliable accounting of the total number of devices produced. However, this effort has been ongoing for years and remains incomplete. Officials from the Russian organization told us that they lack confidence that the precise number and location of RTGs, both in Russia and in other countries of the former Soviet Union, will ever be known. RTGs contain sources with high levels of radioactivity, and their removal requires specialized containers for transport and adequate storage capacity to securely house them once removed. Russian officials reported that RTG removal had been slowed by a lack of both. To address the need for containers and storage space, DOE has enlisted Canada's support to provide funds to Russia for constructing an additional 17 containers for transporting RTGs, bringing the total to 36. However, this effort is not scheduled to be completed until early to mid-2007. DOE is also supporting the construction of storage facilities at two locations in the Russian Far East, Vladivostok and Kamchatka. When completed, the Vladivostok facility is expected to house 150 to 200 RTGs. In addition, a smaller storage building is under construction at Kamchatka to store RTGs until they can be shipped to Vladivostok for permanent storage. According to DOE, the Vladivostok facility currently houses 25 RTGs that were recovered from the Russian Far East. By the end of 2006, Vladivostok is scheduled to house 33 additional recovered RTGs. Finally, Russian officials told us that future RTG removal efforts will depend on finding a viable alternative energy source to replace the power supplied by the radiological sources contained in RTGs.
DOE has initiated a project to provide alternative power sources, including wind- and solar-powered energy panels, to accelerate RTG removal. However, these replacements are not always viable. For example, navigational lighthouses located in northern Russia experience severe weather and limited daylight 4 to 5 months per year and cannot rely on solar power during the winter months. Russian Ministry of Defense officials have stated that the navigational devices are critical and that they will not approve removal of any additional RTGs without a viable energy source to replace them. Figure 3 shows a navigational beacon with a solar-powered replacement energy source funded by DOE that we observed during our fieldwork. DOE also noted that RTG removal and replacement has been slowed by challenges in project negotiations with Russian officials. For example, the costs of RTG removal and transport have risen consistently as a result of increased Russian price demands and the failure of the Russian government to contribute funds to the effort. DOE has also experienced long delays while waiting for the Russian Ministry of Defense to approve the release of information regarding certain RTGs. Inadequate funding to support RTG removal has extended the deadline for completion from 2014 to 2021. As an interim measure to help reduce the risk posed by RTGs that have not yet been removed, DOE has equipped a select number of RTGs with alarm systems that are remotely monitored via satellite as part of a pilot project. Specifically, the alarm consists of sensors that monitor, among other things, vibrations of the device and movement of the source. Because the source is sealed inside the RTG, the alarms on both the device and its source emit regular electronic signals to a regional base station; if the signals are interrupted, the alarm is triggered. As of September 2006, DOE had funded the installation of these security systems for 24 RTGs in the Baltic region and 20 RTGs in the Far East region.
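DOE reports that each pilot alarm system costs about $5,000 and each regional base station about $8,000. Using those per-unit figures, a rough cost sketch for the pilot can be laid out as follows—assuming, as our own simplification (the report does not state the number of base stations), one base station per region covered:

```python
# Rough cost sketch for the RTG alarm pilot: 24 alarms in the Baltic
# and 20 in the Far East, at DOE's reported per-unit costs.
# Assumption (ours, not DOE's): one regional base station per region.
COST_PER_ALARM = 5_000         # dollars per RTG alarm system
COST_PER_BASE_STATION = 8_000  # dollars per regional base station

alarms_by_region = {"Baltic": 24, "Far East": 20}
base_stations = len(alarms_by_region)  # assumed: one per region

total_cost = (sum(alarms_by_region.values()) * COST_PER_ALARM
              + base_stations * COST_PER_BASE_STATION)
print(total_cost)  # 236000, i.e., roughly $0.24 million for 44 RTGs
```

Even on these rough assumptions, the pilot's hardware cost is small relative to removal, which helps explain why DOE views the alarms as a cost-effective interim measure.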
According to DOE, the cost of each alarm system is about $5,000, and the cost of establishing a regional base station is about $8,000. DOE officials said they will continue to install security upgrades to RTGs as an interim measure as long as the costs remain at those levels. In addition to RTGs, DOE also has made limited progress securing radiological sources stored at waste storage facilities in Russia and Ukraine. DOE has determined that the storage facilities in Russia and Ukraine are the most vulnerable in the world and pose a significant risk because of the very large quantities of radioactive sources currently housed at each site. According to DOE, waste storage facilities can store up to 3 million curies of radioactive waste. However, upgrades at a majority of these facilities throughout the former Soviet Union, particularly in Russia and Ukraine, remain incomplete. To date, upgrades have been completed at 4 of the 15 Radons in Russia since DOE began work in 2002. According to DOE, upgrades are under way at seven additional Radons. However, work has been delayed at several of these facilities. According to DOE, delays in upgrades to Radons were due in large part to delays in the Russian certification process for physical security equipment used at these types of facilities. In addition, reorganization and managerial changes at the primary Russian agency with oversight authority over construction at Radon facilities presented challenges for DOE officials trying to gain access to Radons for physical security assessments. Furthermore, DOE officials noted that progress has been slowed because several Radon managers were unwilling to participate in the program until they received assurances from DOE that their Radon would receive a level of funding comparable to that of larger Radons. DOE has not completed upgrades at any of Ukraine's five Radon sites, one of which contains all 13 RTGs recovered in Ukraine.
According to DOE officials, initiating work at the Radons has been problematic because Ukrainian officials have designated some sites as "sensitive" and thus denied DOE access to them. As a result, security upgrades have been delayed for at least 2 years. In May 2005, Ukraine agreed to provide DOE access to two of the five sites, and security upgrades at those facilities are under way. DOE plans to complete the remaining three Radons by 2010 but has found that Ukraine is impeding access to these additional sites. In addition, DOE has identified 49 vulnerable waste storage facilities worldwide for assistance and has completed work at 26 of these sites in several countries, including Armenia, Azerbaijan, Belarus, Estonia, Georgia, Kazakhstan, Kyrgyzstan, and Lithuania. DOE is also undertaking upgrades at 23 additional sites. However, DOE has not addressed sites in the following countries: Albania, Argentina, Bangladesh, Bolivia, Brazil, Ecuador, El Salvador, Ethiopia, Jordan, Libya, Peru, Serbia, and South Africa. It was unclear, based on our discussions with DOE officials, when, if ever, security upgrades would be completed in these countries. Although IAEA officials told us that transportation of high-risk radiological sources is the most vulnerable part of the nuclear and radiological supply chain, DOE determined that source transport is generally outside the scope of the program. Some DOE officials have expressed concern about the lack of security during the transport of radiological sources and questioned whether transportation should be a component of DOE's program. For example, a May 2005 DOE analysis concluded that DOE was addressing transportation security on an ad hoc basis and that the existing method of providing transportation security had serious limitations. The analysis also noted that DOE's approach was resource limited and lacked a commitment to integrate transport security into all countries participating in the program.
According to DOE’s 2003 program guidelines, DOE will fund transportation security upgrades only in Russia and Uzbekistan because the United States had international agreements with these countries to provide liability coverage when transporting radiological sources. As a result, DOE security specialists were not pursuing transportation security-related projects with the majority of countries participating in the program. However, DOE noted that its national laboratories were working with the U.S. Department of Transportation, IAEA, and key IAEA donor states to strengthen transportation security regulations and procedures to reduce the risks of theft or diversion of nuclear and other radioactive materials in transit. In every country we visited, host country officials identified the transportation of sources as a critical vulnerability and a priority for security upgrades. Moscow Radon officials told us that transportation security had emerged as one of their top priorities. DOE has, in fact, provided a fleet of transport vehicles for the Moscow Radon, including guard vehicles, escort vehicles, and cargo trucks for transporting both liquid and solid waste. However, Radon officials told us that they also needed a reliable communication system to ensure the security of sources in transit. Consequently, the Moscow Radon funded a satellite-linked cell phone to facilitate communication and to monitor vehicles that transport radiological sources. However, at another Radon site we visited in Russia, a similar communications system did not exist. Moreover, officials from this site told us that their fleet of transportation vehicles was about 30 years old and needed to be replaced. These officials stated that they requested funds from DOE for the vehicle replacement but were told that no funds were available. 
Another aspect of transportation security concerns equipment containing small, easily transportable sources—typically weighing less than 25 pounds, with an average radioactivity level of several curies. DOE estimates that about 10,000 of these smaller sources exist in several different countries. Specifically, these sources, such as americium-beryllium sources, are used in the oil and gas industry for exploration purposes. According to DOE, these sources routinely move from one base camp to another with limited security, making them vulnerable to theft and potential misuse. We saw firsthand how vulnerable these sources were during our visit to one industrial facility, where we observed a truck used to transport a cesium-137 source to a remote gas exploration site. Host country officials showed us how easy it would be to remove the sources from the truck because they were secured with only a simple lock. In addition, country officials told us that although some trucks are equipped with mobile phones, many areas along transportation routes are remote, and the phones often have no signal. Figure 4 shows an unsecured truck used to transport radiological sources. DOE has taken some steps to address this problem, but agency officials said that securing mobile sources is too costly and should be the responsibility of private industry. In this regard, DOE initiated efforts with U.S. industry partners to identify better ways to secure sources that have industrial applications and are frequently in transit. In February 2006, DOE attended a forum with NRC and the Society of Petroleum Engineers to discuss security issues and develop best practices within the industry to better control radiological sources used overseas for industrial purposes. In September 2006, as part of the broader reorganization of its Global Threat Reduction Initiative, DOE established new guidance for selecting sites to receive physical security upgrades.
Under the new guidance, DOE has combined its radiological and nuclear material security efforts to develop a single threat reduction strategy. This integrated strategy prioritizes security efforts based, most importantly, on the attractiveness of the different types of radiological and nuclear material and on (1) their proximity to U.S. strategic interests, such as military bases overseas or commercial ports; (2) the external threat environment within the country; and (3) internal site vulnerability, which measures existing physical protection on site. The new criteria also increased the level of the design basis threat required to secure each type of material. For example, sources having a curie level exceeding 1,000 could have the same priority for security upgrades as certain amounts of plutonium or highly enriched uranium. As a result, RTG security remains a high priority, while in DOE's view, some medical radiological sources could also be considered a high priority. However, when we asked DOE officials in September 2006 about the relative priority of medical sites, they said all of the sites that were upgraded under the old guidance would still be considered high priority under the new criteria. DOE's previous guidance, developed in 2003, based site selection on a minimum threshold level—measured in curies—of radiological sources present at a particular location. In addition, the guidance factored in other conditions, such as the location of the site, the security conditions of the site, and evidence of illicit trafficking in the country. According to DOE, in a presentation made to us in September 2006, this guidance gave equal treatment to all sites within countries receiving security upgrades. This guidance did not clearly discriminate between the different types of sites secured or indicate whether they were considered to be the highest priority.
For example, securing a waste storage facility, which can contain up to 3 million curies, was given the same weight as securing an oncology clinic with one source containing 1,000 curies. Security measures recommended for radiological sources were based on a threat scenario of one outsider, equipped with a handgun, penetrating the facility while working with one complicit insider. However, the new guidance significantly increases the threat by advancing a more intense scenario: six outsiders with automatic weapons and 10 kilograms of explosives working with one complicit insider. As a result, DOE officials said that future upgrades to secure radiological sources will have to be strengthened to meet the new protection levels. Additional enhancements at some sites are now being considered to address the more robust design-basis scenario. DOE experienced numerous problems and challenges during program implementation that impeded its efforts to secure radiological sources. As a result, some projects were delayed, and in some extreme cases, DOE was unable to implement its program at all. DOE said it was limited in its ability to enhance physical protection in several countries because IRTR is a voluntary program. For example, high-risk countries such as Nigeria and Turkey were unwilling to cooperate to implement security upgrades. In addition, Mexico declined DOE upgrades, although DOE had identified several vulnerable sites there. While Mexico has continued to decline physical security assistance, Mexican officials have since agreed to accept regulatory infrastructure development assistance. In targeting countries to receive assistance, DOE developed a prioritization model that ranked countries as high, medium, or low risk. To date, DOE has initiated work in 49 of the countries identified as priorities for assistance.
Our analysis showed that DOE attempted to initiate efforts to secure radiological sources in 31 high-priority countries, 17 medium-priority countries, and one low-priority country. Consequently, about 40 percent of the countries receiving assistance do not represent the highest-priority countries. According to DOE officials, medium- and low-priority countries—more than one-third of the total in DOE's program—were selected because these countries had expressed a willingness to receive assistance. We found a variety of problems and challenges that affected DOE's ability to implement its program in several of the countries targeted for assistance. These included, among other things, problems with foreign contractor performance and a lack of adequate physical infrastructure to support security upgrades. DOE officials said that various combinations of these and other impediments resulted in delays implementing security upgrades in about 75 percent of all countries participating in the program. DOE also stated that many of these problems were identified and corrected during quality assurance visits by DOE inspection teams. Contractor performance emerged as a key challenge. Six DOE officials told us that contractor performance and the selection of reputable, reliable in-country contractors were critical to successful project implementation. DOE asserted that it has to maintain flexibility in selecting foreign contractors because most of the countries do not follow normal Western business practices. In DOE's view, problems arising from contractor performance resulted from "security culture" and language barriers, which caused miscommunication. Some of the problems we found with in-country contractors included the following: In Bulgaria, a contractor installed steel security doors—which protected radiological sources—with the hinges on the outside of the door.
As a result, a potential transgressor could have unhinged the door and accessed the sources. In Kazakhstan, a contractor provided security manuals and procedures for newly installed equipment in English instead of the native language; as a result, DOE officials found that the hospital staff had not changed the security codes and were not well versed in proper security procedures. And in Georgia, hospital staff told us that the contractor did not train them on operating the alarm systems. DOE did, however, report working with competent contractors in Poland, Lithuania, and Egypt, which resulted in timely project implementation. DOE project managers for these countries told us that the contractors conducted adequate training and followed up with maintenance of the security upgrades. Several DOE officials told us that implementing security upgrades also presented challenges due to inadequate physical infrastructure. In these countries, the challenges included a lack of reliable electricity, backup power sources, and telecommunications at sites containing radiological sources. For example, in both Nicaragua and Tanzania, DOE officials said that frequent power outages diminished the detection capability of the security alarms installed and that neither country had a backup source of power to operate the security alarms and security lighting provided by DOE. DOE has not developed an adequate, comprehensive strategy to better ensure that the physical security upgrades that have been installed, and the security training that has been provided, will be effectively sustained over the long term. DOE's current guidance states that DOE will sustain upgrades by providing countries with a 3-year warranty on newly installed security equipment and preventive maintenance contracts, as well as by providing training on newly installed equipment for operational staff at the sites.
However, DOE has not formulated a long-term sustainability plan that identifies expected completion dates for each country, including an exit strategy, and approaches for sustaining upgrades, including how host countries will continue to finance maintenance of upgrades after DOE's warranties expire. In fact, a senior DOE official told us that responsibility for drafting and implementing a long-term sustainability plan should rest with the host country. Furthermore, DOE has not adequately addressed the lack of regulatory infrastructure to provide oversight of source security in a majority of the countries receiving DOE assistance. DOE officials responsible for program implementation said that they were uncertain whether the security upgrades installed would be sustained by countries once DOE assistance was no longer available. In fact, our analysis showed that these officials had confidence that the security upgrades would be sustained in only 25 percent of the countries. Specifically, officials pointed out that countries such as Bangladesh or Tajikistan would be unlikely to sustain upgrades because they do not have the resources to maintain the equipment and have not identified or allocated funding to maintain it beyond the 3-year warranty period. In addition, several host-country officials with whom we met expressed similar concerns. For example, hospital administrators in three countries told us that hospital budgets were already strained and that they could not be certain that funding would be available once the warranties expired. Moreover, hospital administrators told us it was difficult to estimate the level of resources needed to sustain the upgrades because DOE had not provided them with estimates of future maintenance costs. Several sites that received DOE upgrades have already experienced maintenance problems.
For example, in Georgia, we found that a storage facility containing RTGs and a seed irradiator—which has a cesium-137 source of thousands of curies—had several large openings in the roof. When we asked host government officials about the cause of the openings, they stated that a recent storm had shifted the metal sheets covering the storage facility's roof. The officials did not state when the roof would be fixed or how funds would be allocated for the repair. In addition, we found that surveillance monitors were not being used at a medical facility. In fact, according to the hospital staff, the monitors, which were not broken, had been turned off for several days. In Lithuania, we visited an oncology clinic and observed that the security cable used to secure a teletherapy machine's cobalt-60 source had been broken for almost a month. According to a DOE physical protection specialist, the cable was the most important security feature because it triggered an alarm directly connected to the teletherapy machine's "head," which contains the radiological source. According to DOE, this problem was subsequently corrected as part of program assurance procedures. In addition, in Poland, we visited a research facility containing a 22,000-curie irradiator. We observed that the motion detection device in the room housing the irradiator was not working because of the high level of radioactivity present. According to the in-country contractor, the device had been disabled at least three times since the equipment was installed about a year earlier. Figure 5 shows the temporary storage facility with large openings in the roof, and figure 6 shows the broken cable at the oncology clinic. In addition to maintenance problems, we also found that a lack of adequate training on newly installed equipment further raised questions about the long-term success of the program.
According to the hospital staff at a facility in Georgia, they had not received adequate training from the in-country contractor on how to operate the installed alarm systems. We found similar problems in other countries we visited. For example, at some of the hospitals, security codes allowing entry into rooms where sources were located had not been changed on a regular basis. Also, at one medical site, more than 50 staff had access to the security code for a room storing a radiological source of about 1,250 curies. A DOE physical security specialist reported that security codes had not been changed from the default settings in at least three countries of the former Soviet Union. Furthermore, this specialist noted that staff in charge of protecting the equipment had copied security access codes onto checklists that were readily accessible to unauthorized staff in about 15 countries. According to DOE, another key element of sustaining security of sources is having an organized, competent guard force. In general, the guard force serves as a critical communications link between the facility staff and the response force. We found that several of the 49 countries did not possess adequate guard or response forces, and in several cases, the guard forces in these countries were untrained and unarmed. Specifically, at one site that DOE upgraded, the guard with whom we spoke was unarmed and had no viable form of communication in case of an emergency. At the same site, the guard told us that he shared responsibility for site security with an individual who served as a guard on a part-time basis in exchange for being able to live at the site. Moreover, we found that the absence of a reliable source of electricity made it difficult to install and ensure the sustainability of alarms and motion detection devices in some of the countries receiving upgrades. For example, both Ecuador and El Salvador have limited telephone line access.
As a result, according to DOE, the local guard forces could not be contacted immediately after an alarm was triggered at a site containing radiological sources. Consequently, security alarms installed in lesser developed countries may have marginal long-term impact. At some of the facilities we visited, there appeared to be a well-trained guard force equipped with flashlights, radios, walkie-talkies, or cell phones. However, we also found that even at locations where improved security systems were in place, only a single guard was present and had no reliable method of contacting a response force. In these types of situations, according to DOE, the site is very vulnerable to theft. At one facility in Lithuania, we were told that the police were located about 30 minutes from the site. At that facility, we observed that the guards were not equipped with guns, and officials were not sure the guards were always present. However, DOE did fund remote monitoring equipment, which allows the local police force to view the site 24 hours per day from the police station. According to IAEA experts and at least five senior-level DOE and NRC officials, a strong and independent nuclear regulatory authority that is able to provide effective radiological source oversight is critical to program sustainability. A key function of a nuclear regulatory body is to establish procedures for the control of radiological sources, including the development of a basic registry of sources. The absence of reliable registries in many countries impeded DOE's ability to identify a comprehensive list of sites to upgrade. Also, the absence of such a list complicates DOE's ability to determine when it has completed its program in a particular country. More specifically, DOE physical security specialists told us that sources that had been identified and inventoried at various hospitals were subsequently moved to another location within the facility or were no longer being used.
Consequently, some of the upgrades that DOE installed had limited security impact, or DOE has had to fund additional upgrades for the same source. We previously reported that DOE was focusing its source security program too narrowly on physical security upgrades and not taking into account countries' long-term needs to develop better nuclear regulatory infrastructures. DOE recognized the critical role of regulatory infrastructure development midway through the program and subsequently added a small component designed to support the creation and strengthening of effective and sustainable national regulatory infrastructures. DOE officials told us that the department's regulatory infrastructure development efforts are meant to complement the more comprehensive efforts of IAEA. In 1994, IAEA established a "model project" program to enhance countries' regulatory capacity, and the program was available to any member state upon request. IAEA continues to provide a variety of regulatory infrastructure support services and training to both member and nonmember states to support radiological source security and safety. The director of the IRTR program said that the long-term impact of DOE's program would likely have been enhanced had there been a stronger regulatory infrastructure in place to support the recommended security upgrade efforts in many of the countries. However, many countries participating in the IRTR program—specifically, lesser developed countries—lack an independent regulator. According to IAEA, as many as 110 countries worldwide lacked the regulatory infrastructure to adequately protect or control sealed sources as of 2003. As of August 31, 2006, DOE had spent approximately $108 million to implement the IRTR program.
This money was spent to, among other things, conduct vulnerability assessments at a variety of sites containing radiological sources and to install physical security upgrades at these sites, such as hardened windows and doors, motion sensors, and surveillance cameras. Russia received almost one-third of total DOE funding—about $33 million—which focused primarily on orphan source recovery, RTG removal and disposal, and physical security upgrades at waste storage facilities. However, one-fourth of total expenditures—about $26.5 million—paid for program planning activities, such as development of program guidance documents, hiring private consultants, and conducting studies. The program has also carried over large balances of unspent, unobligated funds each fiscal year since its inception in 2002 because of, among other things, large supplemental appropriations at the onset of the program and systemic delays in project implementation. DOE officials told us that securing radiological sources in other countries is a lower priority than securing more dangerous nuclear materials, such as plutonium and highly enriched uranium. As a result, DOE has reduced funding for radiological security activities, and future funding for the program is uncertain. DOE program officials are concerned that DOE may be unable to meet outstanding contractual commitments to maintain the more than $40 million in upgrades already installed. As of August 31, 2006, DOE had spent about $108 million to implement the IRTR program. A majority of this money—$68 million—was spent to (1) physically secure sites containing radiological sources; (2) locate, recover, and dispose of lost or abandoned sources; and (3) help countries draft laws and regulations to increase security and accounting of sources. In addition, DOE provided $13.5 million to IAEA to support activities to strengthen controls over radiological sources in IAEA member states.
However, one-fourth of the total budget—about $26.5 million—was spent on program planning activities not directly attributable to a specific country, such as hiring private consultants and building a database for international law enforcement officials. Table 3 provides a breakdown of DOE program expenditures. Physical security upgrades to secure sites containing radiological sources accounted for the largest program expenditure—almost $43 million. The majority of DOE-funded upgrades were at hospitals and oncology clinics. DOE also funded upgrades at other types of facilities that use or store radiological sources and materials, including waste storage facilities, commercial and industrial facilities, and research institutes. While DOE estimates that costs range from $15,000 to secure a medical facility to $50,000 to secure a waste storage facility, actual expenditures for securing sites varied based on factors such as regional labor rates, the condition of existing infrastructure, and remoteness of location. DOE officials stated that cost estimates of upgrade projects included vulnerability assessments, equipment costs and installation, and warranty contracts covering equipment maintenance for 3 years. DOE physical security specialists conducted vulnerability assessments to identify security weaknesses at facilities, including the adequacy of the local guard force, exposed windows and doors, and access to sources. In some instances, mostly at lower-risk sites, DOE authorized the contractors responsible for equipment installation to conduct these assessments under DOE direction. The contractors provided DOE with reports and photographs that summarized findings and proposed recommended upgrades.
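The top-level budget figures reported in this section are mutually consistent. As a minimal sketch—treating the rounded amounts as exact, which is our simplification—the totals can be cross-checked:

```python
# Consistency check of the top-level IRTR budget figures
# reported in this section (all amounts in millions of dollars).
TOTAL_SPENT = 108.0       # total as of August 31, 2006
core_activities = 68.0    # upgrades, source recovery, regulatory help
iaea_contribution = 13.5  # provided to IAEA
program_planning = 26.5   # consultants, studies, guidance, database

# The three reported components account for the full $108 million.
assert core_activities + iaea_contribution + program_planning == TOTAL_SPENT

russia_share = 33.0
print(round(100 * russia_share / TOTAL_SPENT))      # 31 -> "almost one-third"
print(round(100 * program_planning / TOTAL_SPENT))  # 25 -> "one-fourth"
```

The check shows that Russia's $33 million share and the $26.5 million in planning costs do correspond to the "almost one-third" and "one-fourth" characterizations used above.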
The types of upgrades installed varied based on assessment findings and host country laws and policies, but standard equipment packages consisted mostly of hardened windows and doors; motion sensors and alarms; access control systems, such as coded keypads or swipe card entry; security cameras; and video monitoring. At some sites, DOE also provided guard forces with enhanced communication equipment, including radios and mobile panic buttons that send emergency signals to local police or security companies. Installation costs also included training for the on-site personnel who would be responsible for operating the equipment. Costs of physical security upgrades also included 3-year warranty contracts that cover maintenance costs, such as the cost of remote monitoring and spare parts. DOE officials told us that contracts are negotiated with the contractors responsible for equipment installation and require that countries receiving assistance assume the costs of sustaining the equipment no later than 3 years after the upgrades have been installed. For the duration of the warranty period, DOE estimated that it would cost, on average, $40,000 per country per year to maintain equipment. This estimate includes the costs of sending one DOE team per country per year to conduct assurance visits, any equipment contractors must replace, and remote monitoring systems. DOE also spent $23 million to provide countries with radiation detection equipment and training to locate and recover lost or abandoned radiological sources and secure them in interim or permanent storage facilities. DOE has two programs to support orphan source recovery efforts—the Russian Orphan Source Recovery program, which operates solely in Russia, and the Global Search and Secure Program (GSSP), which includes search and recovery efforts in other countries receiving DOE assistance. More than 80 percent of orphan source recovery expenditures—about $19 million—was spent in Russia. 
To support GSSP, DOE spent $4 million in 11 countries—Azerbaijan, Croatia, Estonia, Indonesia, Kazakhstan, Kyrgyzstan, Latvia, Philippines, Romania, Tajikistan, and Tanzania. These funds were spent primarily to provide countries with (1) standard packages of equipment, such as hand-held radiation detection monitors and characterization instruments to properly identify recovered sources; (2) training workshops on the appropriate use of the equipment; and (3) physical security upgrades at some facilities storing recovered or disposed sources. In addition, DOE spent about $2 million in 10 countries (Bulgaria, Colombia, Indonesia, Iraq, Kazakhstan, Mexico, Moldova, Philippines, Thailand, and Vietnam) to help develop national standards and regulations for the control and accounting of radiological sources. A majority of these funds—$1.8 million—was spent in the United States to develop a set of security-based regulations for use by countries with limited resources and inadequate radiological source inventories. Once countries drafted an initial set of regulations, DOE experts reviewed the drafts and provided feedback and proposals for improvement. DOE also provided training workshops and seminars on appropriate regulatory inspection practices for radiological source control and accounting. In particular, DOE has been working with regional partners, such as the Australian Nuclear Science and Technology Organization (ANSTO), to implement many of its regulatory development activities. For example, DOE and ANSTO have conducted regulatory development training workshops for countries in East Asia and the Pacific region. DOE also provided about $13.5 million to IAEA’s Nuclear Security Fund to support efforts to strengthen controls over sources in IAEA member states, including technical training on fundamental principles and objectives of radiological source security. 
IAEA established the fund, which consists of voluntary budget contributions from other countries, after the terrorist attacks of September 11, 2001. The fund is designed to improve nuclear security in IAEA member states by helping countries to protect their nuclear and radiological materials and facilities. Specifically, DOE funded IAEA missions that carried out safety and security assessments at sites identified by member states as containing vulnerable radiological sources. Additionally, DOE contributions to IAEA supported training conferences and other advisory services. DOE funds also enabled IAEA to transport several high-risk sources to secure storage facilities and to provide conditioning equipment to prepare recovered sources for disposal. Finally, DOE spent one-fourth of total program expenditures—about $26.5 million—on activities not directly attributable to a specific country. Specifically, these costs included, among other things, program planning activities such as the development of program guidance documents. For example, DOE hired an outside contractor to conduct a review of the radiological source security program and to help DOE develop a plan to guide future efforts. The contractor spent several months interviewing agency officials and program staff to assess the strengths and weaknesses of the program and the level of DOE coordination with State, NRC, and IAEA. The final report provided recommendations to improve coordination with other U.S. agencies and within DOE. In addition, DOE spent $1.5 million of these funds to facilitate an information exchange with Interpol, an international agency that coordinates the law enforcement activities of the national police bureaus in each of its member states, in order to obtain information about international arrests involving theft or smuggling of radiological materials. 
DOE’s intent was to provide Interpol with the capacity to contribute law enforcement data to DOE’s database, which contains country-specific information regarding, among other things, criminal activity. Funds provided to Interpol paid for computers and software and the salaries of two staff members located at Interpol headquarters in Lyon, France, to set up and operate the database for 2 years. A DOE program manager expressed concern about whether providing funds to Interpol would yield tangible results or increase the effectiveness of the radiological sources program. This program manager questioned whether the Interpol project contributed to the program’s core objectives of securing the highest risk, highest priority sources in other countries. A senior DOE official told us that these funds—identified by DOE as strategic development and program integration funds—were established at the onset of the program and were intended to carry out activities not directly related to country-specific physical security upgrade projects and initiatives. This official added that in the early stages of the program, expenditures of this type focused primarily on strategic planning, developing program technical documents and processes, conducting studies, and developing a database of regional country information to support program objectives. While DOE assistance was spread among 49 countries, Russia received the largest amount, $33 million, nearly one-third of total program expenditures. DOE’s cost manager for the IRTR program reported that expenditures in Russia supported three primary program components: (1) orphan source recovery efforts ($18.5 million); (2) RTG removal and disposal, including alternative energy source development ($7 million); and (3) physical security upgrade projects, including waste repository sites ($7.5 million). The 13 other FSU countries received a total of about $11 million, with Ukraine the largest recipient at about $3.5 million. 
In addition, about 65 percent of DOE expenditures for FSU countries was spent in-country on services, equipment, and materials used to improve physical security. By comparison, DOE spent significantly less outside the FSU, and those expenditures were disproportionately spent in the United States by DOE’s national laboratories for labor, travel, equipment, and overhead costs. For example, the 35 non-FSU countries received a total of about $17 million, or just 28 percent of total country-specific expenditures, and two-thirds of the funds spent for non-FSU countries were spent in the United States. Furthermore, five countries in Africa received no in-country expenditures. Although DOE has defined many countries in Africa as high-risk, countries in this region received a total of about $1.3 million, about two-thirds the amount spent in one European country—Poland. While expenditures in South America were more evenly divided between in-country costs and funds spent in the United States, the region received only about $3.5 million spread among 12 countries. Figure 7 provides a regional breakout of these expenditures. Additionally, see appendix II for more details about regional and individual country expenditures for fiscal years 2002 through 2006. As of August 31, 2006, DOE had carried over almost $23 million in unspent, unobligated funds for the IRTR program from previous years. Moreover, the program carried over a substantial uncosted balance in each fiscal year throughout its life. For example, for fiscal years 2003 through 2005, the program carried over uncosted funds totaling $27.4 million, $34.1 million, and $22.4 million, respectively. According to the program’s director, the carryover balances were largely due to, among other things, large supplemental appropriations at the onset of the program and delays in implementing security upgrade projects. 
As we reported in 2004, large carryover balances are not uncommon in DOE nuclear nonproliferation programs—especially in Russia—because of, among other things, difficulties in negotiating and executing contracts and the multiyear nature of the programs. Table 4 shows DOE’s total budget and uncosted balances for fiscal years 2002 through 2006. DOE has significantly decreased IRTR program funding since 2003, and DOE officials expect further reductions over the next several years. Specifically, DOE’s internal budget allotments for the IRTR program have declined from a high of $38 million in fiscal year 2003 to $24 million in fiscal year 2006. According to a senior DOE official, priorities within GTRI, which funds DOE’s nuclear and radiological threat reduction efforts, have shifted, and future funding will be redirected to, among other things, securing special nuclear material, such as plutonium and highly enriched uranium (HEU). In particular, DOE has assigned the highest budget priority to three GTRI elements that address the threat posed by an attack using an improvised nuclear device: the (1) Reduced Enrichment for Research and Test Reactors program, (2) Russia Research Reactor Fuel Return program, and (3) Foreign Research Reactor Spent Nuclear Fuel program. The goal of the Reduced Enrichment for Research and Test Reactors program is to convert research reactors around the world from HEU to low enriched uranium fuel, with conversion of all U.S. civilian research reactors to be completed by 2014. The Russia Research Reactor Fuel Return and Foreign Research Reactor Spent Nuclear Fuel programs are designed specifically to return HEU to the United States or Russia and are expected to be completed by 2013 and 2019, respectively. In contrast, other GTRI elements, including the IRTR program, do not have presidential commitment dates for completion and, as a result, are lower priorities for funding. 
DOE’s Principal Assistant Deputy Administrator for Defense Nuclear Nonproliferation told us that DOE initially placed a high priority on securing radiological material and that the Secretary of Energy made a personal commitment to this activity. More recently, because of budget reductions affecting the entire agency, DOE has had to review and evaluate program priorities. This official noted that while the likelihood of a dirty bomb attack is much greater than that of a nuclear attack, the consequences of the latter, in terms of loss of life and overall catastrophic impact, would be much greater. He also noted that, if given a choice, he would place more emphasis on securing radiological sources in the United States than in other countries. In his view, there is still a significant amount of work to be done to secure radiological sources in the United States. Anticipated reductions in funding for the IRTR program will have significant implications for the number of sources that can be secured in other countries. DOE’s initial target for program completion was to secure 1,500 high-priority sites in 100 countries by 2014. This goal assumed that the program would receive $25 million per year over the life of the program. DOE officials told us that currently projected budget reductions may jeopardize the program’s ability to fund even the existing warranty contracts for physical security upgrades already installed. Moreover, DOE has not determined the extent to which the program will fund warranties for future upgrade projects, meaning that countries will need to assume greater financial responsibility for sustaining upgrades. However, DOE officials who are responsible for project implementation told us they lacked confidence that a majority of countries would be able to maintain upgrades without further DOE assistance, mostly because many recipients do not have adequate resources. 
For example, DOE officials responsible for project implementation said that neither Ukraine nor Tajikistan, where DOE has spent a total of about $3.5 million, has identified resources for radiological source security once DOE warranties expire. In addition, DOE has not fully addressed the cost implications of the increased levels of physical security required by the new design basis threat assigned to radiological sources under GTRI’s reorganization. Although DOE’s new program guidance says that the radiological security upgrades strategy will continue to focus on inherently sustainable, low-cost upgrades, it specifically states that the revised threat scenario significantly increases the level of threat that physical security upgrades must withstand. As a result, the new guidance states that upgrades will need to be significantly enhanced to meet the new threat level. DOE officials have raised concerns regarding DOE’s ability to sustain the low-cost upgrades already installed. In light of the program’s ongoing budget reductions, the new guidance raises further concern regarding DOE’s ability to sustain the increased cost of enhanced upgrades for future projects. To offset anticipated shortfalls in funding, DOE plans to seek international contributions to secure radiological sources in other countries. DOE officials said that several countries, including Canada, Japan, and Norway, have inquired about contributing funds directly to GTRI but that, until recently, DOE had no authority to accept direct financial support from international partners for GTRI activities or to use funds received outside of the normal appropriations process. In October 2006, Congress authorized DOE to enter into agreements, with the concurrence of State, to receive contributions from foreign countries and international organizations for the IRTR and other GTRI programs, and to use those contributed funds without fiscal year limitation. 
Additionally, Russian officials told us that because of the importance of the IRTR program, they are interested in providing increased financial commitments to secure radiological sources. In particular, the Deputy Head of the Russian Radon waste storage facilities, known officially as the Federal Agency for Construction and Utilities, told us that the organization would be willing to make a sizable contribution to Radon upgrades. DOE officials stated that international source security is not the sole responsibility of the U.S. government and that increased foreign cooperation will be necessary to complete program objectives. DOE has improved coordination with State and NRC to secure radiological sources worldwide. Since we reported on this matter in 2003, DOE has involved State and NRC in its international radiological threat reduction activities more often and has increased information-sharing with the agencies. However, DOE has not always integrated its efforts efficiently, and coordination among the agencies has been inconsistent. Moreover, DOE has not adequately coordinated the activities of the multiple programs within the agency responsible for securing radiological and nuclear materials in other countries, and, at times, this has resulted in conflicting or overlapping efforts. DOE has also improved coordination with IAEA to strengthen controls over other countries’ radiological sources and has developed bilateral and multilateral partnerships with IAEA member states to improve their regulatory infrastructures. DOE funding to IAEA has supported, among other things, IAEA missions to assess the safety and security of sites containing radiological sources and IAEA-sponsored training programs and regional workshops focusing on radiological source security. 
However, significant gaps in information-sharing between DOE and IAEA, and with the European Commission, have impeded DOE’s ability to target the most vulnerable sites for security improvements and to avoid possible duplication of efforts. In recent years, DOE has improved coordination with State and NRC and has taken steps to work more collaboratively with U.S. agencies to secure radiological sources in other countries. An example of improved U.S. coordination is the interagency effort to establish a radiological source regulatory infrastructure in Iraq. Since 2003, with the support of DOE and NRC, State has led the effort to establish the Iraq Radioactive Source Regulatory Authority (IRSRA) and develop a radiological regulatory infrastructure in Iraq. State and DOE provided IRSRA with equipment, training, technical assistance, and funding to help the new agency assume increased responsibility for establishing radiological source regulations and procedures consistent with international standards. Specifically, with funding and logistical support from DOE, State coordinated several meetings in Amman, Jordan, in 2004 and 2005 at which IAEA staff trained IRSRA personnel. These meetings resulted in the development of new Iraqi laws and regulations for the regulation, transport, import, and export of radiological sources, including physical security requirements. DOE experts reviewed draft Iraqi laws and regulations for their relevance to the security of radiological sources, and NRC provided guidance for developing import and export controls for radiological sources. State also funded the procurement of mobile radiation detection equipment so that Iraqi regulatory personnel can survey various cities to search for orphaned radiological sources. This equipment, provided by DOD’s Defense Threat Reduction Agency, included radiological handling, measurement, and protective equipment, such as radiation meters, respirators, and protective clothing. 
Hand-held radiation equipment from DOE has also been transferred to Iraqi agencies for border monitoring. DOE experts also trained IRSRA officials and personnel on how to conduct vulnerability assessments. Finally, to financially support IRSRA’s efforts, State provided a portion of $1.25 million in funding from its Nonproliferation and Disarmament Fund (NDF) to IAEA for training and other assistance to IRSRA, including an IAEA review of Iraq’s draft laws and regulations. State also used a portion of this funding to purchase a specially equipped vehicle that can be driven through neighborhoods to detect unsecured radiological sources. DOE and State officials told us that although the Iraq project is a unique circumstance, it is an example of improved U.S. government coordination to strengthen controls over radiological sources and could provide a model for future efforts. Although coordination among the agencies has improved, these efforts have been inconsistent, and there is no comprehensive governmentwide approach to securing radiological sources overseas. We reported in 2003 that DOE’s efforts to secure sources in other countries had not been well coordinated with those of other U.S. agencies. Specifically, DOE had not fully coordinated with State and NRC to leverage program resources, maximize available expertise, avoid potential duplication of efforts, and help ensure the program’s long-term success. We also recommended that DOE take the lead in developing a comprehensive governmentwide plan to strengthen controls over sources in other countries. In response to our report, DOE hired a consultant to determine, among other things, whether gaps existed in agency program activities with respect to securing radiological sources worldwide and what role and responsibilities DOE should assume in coordinating U.S. government efforts. 
In December 2004, the consultant reported that although DOE had addressed many of its issues with State and NRC, more effective coordination was needed. Moreover, the consultant stated that the lack of effective coordination among these agencies posed the greatest potential for conflict because of their differing mandates and conflicting philosophical approaches to radiological source security. Specifically, effective and systematic coordination among U.S. agencies has at times been impeded because individual agency missions differ and, as a result, agency efforts have sometimes been at odds with one another. For example, the consultant reported that NRC had expressed concern that DOE’s regulatory infrastructure development activities infringed on a decades-long NRC function. Furthermore, DOE is primarily concerned with the security of sources, while NRC has traditionally focused more on safety issues related to the use of sources. The report also concluded that the debate between DOE and NRC over the importance of the safety versus the security of radiological sources had negatively affected effective coordination between the two agencies. DOE, State, and NRC have differed on, among other things, the funding and implementation of regulatory infrastructure development activities in other countries. For example, in May 2003, NRC’s Office of International Programs sought $5 million in appropriated funds to assist its regulatory counterparts in the FSU and the countries of central and eastern Europe in enhancing (1) existing laws, rules, and regulations governing the use of radiological sources; (2) mechanisms used to track radiological sources, such as databases and registries; and (3) day-to-day regulatory oversight of sources. NRC stated in its request that DOE’s physical security enhancements would not likely be sustained in the medium to long term absent clear, enforceable regulatory requirements. 
Moreover, NRC sought to assist DOE by providing assistance to regulatory authorities in the FSU, where a majority of DOE’s efforts were focused at the time. NRC officials noted that the biggest challenge they have faced has been identifying adequate, reliable, and predictable funding to support international assistance activities. Unlike other U.S. government agencies, NRC has largely relied on other agencies—the Departments of State, Energy, and Defense—to support its international programs and is required by law to recover about 90 percent of its annual budget authority through licensing and inspection fees assessed on the U.S. nuclear industry. Furthermore, the U.S. nuclear industry has raised concerns about using NRC funds to support international assistance. Despite these funding limitations, NRC has a long history of supporting regulatory strengthening efforts in the countries of central and eastern Europe and the FSU. These efforts have included training other countries’ regulators in all aspects of licensing and inspection procedures and developing a control and accounting system for nuclear materials. In July 2003, the Senate Appropriations Committee directed that $5 million out of certain amounts appropriated to NNSA be made available to NRC for bilateral and international efforts to strengthen regulatory controls over radioactive sources that are at the greatest risk of being used in a dirty bomb attack. In September 2003, according to the Director of the NRC Office of International Programs, NRC and the Director of DOE’s International Materials Protection, Control and Cooperation program reached an initial agreement in principle, whereby DOE would provide NRC with $1 million per year for 5 years to conduct regulatory activities in countries outside of Russia. 
According to DOE officials, the funds were never transferred because, during conference negotiations, the Senate withdrew its direction to allocate the funds to NRC after the House did not provide comparable language in its report. DOE officials added that the provision directing the transfer to NRC did not appear in the final conference report and was not included in the appropriations legislation. Furthermore, these officials added that guidance from House Energy and Water Development Subcommittee staff directed DOE not to transfer the funds. According to a senior NRC official in the Office of International Programs, the conference report included a joint explanatory statement directing that allocations set forth in the House and Senate reports “should be complied with unless specifically addressed to the contrary in the conference report and statement of the managers.” NRC asserts that this reinforced the intent of the original Senate report and that, without language to further clarify or state otherwise, NRC should have received the funding as originally directed by the Senate Appropriations Committee. The conference report does not specifically address this funding issue. In addition, in 2003, NRC requested $1 million from State to support radiological source-related regulatory strengthening activities in Ukraine. Specifically, NRC proposed to develop a national registry of radiological sources and strengthen Ukraine’s overall radiological source-related laws, rules, and regulations. NRC chose Ukraine because of its relatively large inventory of high-risk radioactive sources; the stability of its existing nuclear regulatory infrastructure; and NRC’s long-standing history of assisting Ukraine’s nuclear regulatory authority, the State Nuclear Regulatory Committee of Ukraine (SNRCU). NRC requested funding for the Ukraine project from State’s Nonproliferation and Disarmament Fund. 
The total cost of the project was estimated at $2.2 million. The original proposal, as approved by State, stated that the project’s aim was to establish key elements of a national system for the long-term security of high-risk radioactive sources in Ukraine by using NRC’s overall expertise and experienced contractor personnel. Furthermore, the proposal stated that because NRC and its contractors had been involved in an identical program in Armenia for the previous 2 years, the effort in Ukraine would capitalize on those experiences, using much of that background data and material. However, managers for NDF projects ultimately decided that State would not use NRC resources and would undertake and manage the project itself, even though the agency had no prior experience in directly supporting regulatory infrastructure development in Ukraine. According to a State official, the agency made this decision because, among other things, NRC planned to hire a contractor—the Ukrainian State Scientific and Technical Center—to manage the project, which would have increased the project’s overall cost by about 20 percent. State officials said that their approach departed in many respects from the one NRC originally envisioned but that the NDF has always reserved the right to implement its projects as it deems appropriate. These officials added that State chose to work directly with the Ukrainian regulator instead of the State Scientific and Technical Center because, among other things, this approach streamlined oversight and accountability for project performance and reduced overhead expenses. According to the NDF manager of the Ukraine project through October 2005, the project experienced significant delays. However, State officials told us the project is currently on track. 
Following a November 30, 2006, meeting with State officials to discuss our draft report, State provided us a letter from the Deputy Chairperson of SNRCU dated December 4, 2006. The letter states that SNRCU views the Ukraine project as one of the most successful and efficient international assistance projects between the United States and Ukraine and that the project was implemented in the shortest possible time period. Finally, State and NRC raised concerns when DOE, working with IAEA, developed a set of draft regulations on the physical security of radiological sources. Although the draft regulations had not been through a formal IAEA review process, DOE had intended to distribute them during IAEA-sponsored training workshops to assist member states in strengthening regulatory controls over their sources. Specifically, NRC officials expressed significant concerns that DOE was planning to distribute to countries unofficial guidance that conflicted with U.S. regulations. In a December 2004 memorandum to the Deputy Director General of Nuclear Safety and Security at IAEA, NRC stated that publishing interim guidance that had not been reviewed in advance, and as a result might need to be substantially modified, was neither efficient nor effective. State officials told us that their chief concern was the manner in which any such guidance would be construed abroad. These officials added that many of the specific problems associated with the original DOE draft guidance lay with internal issues regarding the process for reviewing security documents at IAEA. In addition, they said that concerns over the development of IAEA guidance on the security of radioactive sources, which preceded development of the draft regulations, are long-standing and that State has worked consistently with IAEA to develop and implement a consistent process for preparing and reviewing security guidance similar to the established process IAEA uses to develop safety guidance. 
Following informal discussions with State and NRC, DOE did work with the agencies to ensure that the draft guidance was consistent with established domestic and international guidance and protocols. IAEA has since proposed a new Nuclear Security Series and review process, and the DOE draft regulations will now support a new IAEA Security Series document entitled “Security of Radioactive Sources,” which was coordinated with State and NRC. Our 2003 report concluded that DOE has the primary responsibility for helping other countries to strengthen controls over their radiological sources. We recommended that DOE take the lead in developing a comprehensive governmentwide plan to accomplish this goal. In addition, DOE’s consultant reported that, in its view, DOE is the only U.S. government agency with the resources to focus solely on international source security. Similar to our recommendation, the consultant recommended that DOE take the lead in adopting an interagency, site-specific approach to international radiological source security, including development of a long-term strategy that leveraged the resources and leadership of other agencies. DOE officials said the department has not implemented these recommendations to initiate and lead a governmentwide plan for the security of radiological sources in other countries because it does not have the mandate to instruct other U.S. agencies on how to conduct their efforts, and other agencies’ programs are not within DOE’s control. However, DOE is currently taking steps, as part of the GTRI reorganization, to address several coordination issues within the department, including establishing regional points of contact to interface with other U.S. agencies to coordinate interagency efforts. The 2004 consultant report also concluded that DOE had not adequately coordinated the activities of the multiple programs within DOE that are responsible for securing radiological and nuclear materials in other countries. 
As a result, these programs often worked at cross-purposes. For example, we visited a site in Poland that housed several nuclear facilities including a radiation waste management plant and Poland’s nuclear research reactor. Country officials managing the site told us that DOE had conducted vulnerability assessments of each of the facilities, one of which stored several high-risk radiological sources as well as spent fuel from the research reactor. Although the material was collocated in the same storage facility, we observed that the sources had been secured in a locked cage by the IRTR program, but the spent fuel had no security and was being stored unprotected in underground canisters. Figure 8 shows secured radiological sources collocated with unsecured spent fuel contained in underground storage. Polish officials told us that installation of DOE physical security upgrades at the site had been inconsistent and not adequately coordinated by DOE. Furthermore, security officials who had installed the physical security upgrades told us that the overall security in the facility was inadequate, given the types of nuclear and radioactive material being housed there. The director of the site said that he expressed concern to DOE about the lack of security of the spent nuclear fuel and requested similar upgrade improvements. However, he said that it was his understanding that DOE’s radiological program was only authorized to fund radiological source security upgrades and not the security of spent nuclear fuel, which was the responsibility of DOE’s nuclear security upgrades program. The director of the facility and his staff said that it was unclear to them why DOE could not concurrently secure nuclear and radiological material stored at the same site or what could and could not be secured by different DOE entities. 
The director added that it sends the wrong signal to host country officials when DOE programs have such different security approaches and time frames for implementing security upgrades. Subsequent to our visit, DOE sent a letter to Polish government officials in March 2006 offering to return to Poland and provide further DOE technical and financial support to protect the nuclear material stored at the facility. Within the IRTR program, different components of the program are led primarily out of two DOE national laboratories, and we found that the laboratories, at times, applied different approaches to securing radiological sources. For example, according to a senior DOE program manager, each laboratory employs its own physical security specialists and in turn, applies its own approach to conducting vulnerability assessments and selecting physical security upgrades. During our site visits, we observed that similar types of facilities varied in terms of the types of upgrades installed and that security measures were not standardized. For example, we toured numerous oncology clinics and found that, although they housed the same equipment and radiological sources, they had received different upgrades as a result of assessments conducted by different laboratory security specialists. Specifically, teletherapy units in certain countries had fiber optic cables attached to the sources that sent alarm signals if the device was tampered with. Security specialists traveling with us at those sites told us that the cable was the key security feature for this type of device. However, during a meeting with a senior security specialist from a different laboratory, we were told that his teams do not install fiber optic cables as part of security upgrades to the same devices because the cables can break. We also found that DOE’s IRTR program components are not well coordinated. 
For example, more than one program manager told us that DOE had not consistently coordinated its orphan source recovery efforts or regulatory infrastructure development assistance with physical security upgrades. According to officials responsible for managing the majority of the program’s physical security upgrade projects, IRTR program managers did not coordinate efforts that resulted in multiple visits to the same country. In their view, this caused confusion within the recipient countries because country officials had difficulty understanding why some parts of the same DOE program were being addressed separately. Officials from Sandia National Laboratories, the lead for GSSP, told us that projects were often implemented independently from physical security upgrade projects and that Sandia did not routinely coordinate its efforts with those of PNNL prior to initiating search and secure activities. PNNL officials, who brought this matter to our attention, concurred and stated that GSSP officials did not routinely consult with their physical security specialists prior to visiting countries with which PNNL had already established relationships. Furthermore, according to PNNL officials, DOE’s regulatory infrastructure development team had visited several countries without coordinating with the physical security upgrade teams. According to a DOE program manager, host country officials were frequently uncertain whether these two components were part of the same program. According to PNNL, this fragmented approach created confusion and required them to explain to country officials that the program components were meant to complement one another. The lead official for regulatory infrastructure development activities told us that future visits would be better planned to ensure that an integrated approach to source security was undertaken. Finally, we found coordination problems between IRTR and the U.S. 
Radiological Threat Reduction program, which is primarily responsible for domestic source recovery efforts, including repatriating U.S.-origin radiological sources in other countries. U.S. Radiological Threat Reduction program officials said there have been limited opportunities to share information or to assess the potential to coordinate international source recovery activities so as to leverage DOE resources. For example, the domestic program recently discovered a large quantity of unsecured radiological sources in South America. The sources were no longer in use and were inadequately secured. Officials managing DOE’s domestic program informed IRTR managers of the finding and the location of the sources. However, IRTR officials declined to immediately secure the sources because the country where they were discovered, which is considered high risk, is not scheduled for IRTR upgrades until 2011. As a result, the sources will remain unsecured until the international program completes upgrades in this country. In our discussions, DOE officials recognized that coordination within the department needed to be improved and that a comprehensive and consistent approach to threat reduction efforts between its nuclear and radiological programs should be established. They acknowledged that it was inefficient for multiple DOE teams to visit the same sites as part of different programs to address multiple threat reduction activities. To that end, DOE’s recent reorganization of GTRI is designed to create a more streamlined structure that is organized geographically to address all threat reduction activities more effectively. 
Specifically, DOE plans to increase efficiency and improve coordination by (1) integrating multiple GTRI programs working in the same country or at the same sites; (2) redistributing workloads across the radiological and nuclear programs; and (3) improving relationships with host country officials by tailoring comprehensive strategies and incentives to more effectively meet unique country-specific conditions. DOE has improved coordination with IAEA in recent years to strengthen controls over other countries’ radiological sources and has developed several successful bilateral and multilateral partnerships with countries around the world to support and share the agency’s international efforts. IRTR’s director told us that these partnerships have helped to foster increased awareness of the security of sources through country-specific training and regional workshops. For example, with the assistance of IAEA, DOE has established a partnership with the Australian Nuclear Science and Technology Organization through which DOE has increased opportunities to conduct physical security assessments and strengthen regulatory inventories of radiological sources in Southeast Asia. Specifically, ANSTO has identified and facilitated communication with several high-risk countries, which has helped DOE gain access to countries that DOE had difficulty initiating contact with, such as Vietnam. DOE has also provided funding to support, among other things, IAEA-sponsored training programs and regional workshops focusing on radiological source security. DOE also coordinated with Russia and IAEA as part of the Tripartite Initiative to conduct physical security assessments and install upgrades at 102 sites in 13 FSU countries—Armenia, Azerbaijan, Belarus, Estonia, Georgia, Kazakhstan, Kyrgyzstan, Latvia, Lithuania, Moldova, Tajikistan, Ukraine, and Uzbekistan. The objective of the Tripartite Initiative was to improve the security of dangerous radioactive sources in the FSU. 
We noted in our 2003 report that, in its early stages, the Tripartite Initiative was not well planned, that initial efforts were ad hoc, and that a more systematic approach to program activities was needed. However, an IAEA official recently told us that coordination with DOE has improved significantly as the program evolved. Despite the success of the Tripartite Initiative, critical information gaps exist between DOE and IAEA that impede DOE’s ability to target the most vulnerable sites and countries for security improvements. First, according to DOE, IAEA has not shared with DOE the countries that IAEA considers most in need of security assistance. Second, although DOE funds IAEA appraisal missions—known as Radiation Safety and Security Infrastructure Appraisals—to assess the weaknesses in radioactive source security in IAEA member states, IAEA does not provide DOE with the findings of these missions because member state information is considered country-sensitive and confidential. The objective of these missions is to evaluate, among other things, the quality of regulatory controls countries exercise over their radiological sources. Results of the appraisals are formalized into action plans that provide the framework for subsequent IAEA assistance to improve the security of sources. Because IAEA does not provide DOE with the results of the missions, DOE is unable to effectively prioritize those sites that the missions identified to be most vulnerable. DOE officials told us that the lack of country-specific information has been an ongoing problem that limits DOE’s ability to effectively leverage its resources to maximize program impact and effectiveness. We also found that little coordination exists between DOE and the European Commission, which has resulted in the potential for overlap in assistance and duplication of efforts. 
Specifically, the EC provides financial support through IAEA, and on a bilateral basis, to secure radioactive sources in countries that are candidates for EU membership. EC officials told us that no formal communication exists with the United States on matters related to radioactive source security assistance, and as a result, each is largely unaware of the specific sites and locations the other is securing, or whether recipient countries are receiving too little or too much assistance. DOE officials told us that coordination with the EC has been conducted primarily at IAEA donor meetings. The EC has coordinated with IAEA to provide assistance to its member states to improve control over radiological sources. Specifically, the EC works jointly with IAEA on several action projects to strengthen the security of radiological materials used for nuclear and non-nuclear purposes, including upgrading regulatory infrastructures, installing physical security upgrades and, as appropriate, disposing of vulnerable radiological sources. As a result of these efforts, the EC has worked with IAEA in several regions, but has focused primarily on the Caucasus, Central Asia, Middle East, Africa, and Mediterranean countries. DOE has achieved noteworthy accomplishments in improving the security of radiological sources at hundreds of sites in more than 40 countries. We recognize that DOE faces a considerable challenge in securing other countries’ most dangerous radiological sources, given the number of these sources and their widespread dispersal. However, when DOE decided to expand its program beyond securing sites in Russia and the FSU, it diverted a significant portion of its limited program funding away from securing the highest priority and most dangerous radiological sources. 
Instead of focusing increased attention on these highest priority threats, such as RTGs, DOE allocated significant program funding resources to securing medical facilities that, in our view and that of several DOE officials associated with the program, pose considerably less threat to U.S. security interests. While many of the RTGs cannot be removed until alternate energy sources are developed to replace them, removing as many RTGs as possible, or securing them until they can be removed, should be a critical component of DOE’s radiological threat reduction efforts. We believe that DOE’s current reorganization of its nuclear and radiological threat reduction efforts is a step in the right direction toward improving the management of the program. However, there are still many significant management issues that need to be addressed and resolved. DOE has not paid adequate attention to the long-term sustainability of the equipment, which could jeopardize the significant investment made to improve the security of radiological sources in many countries. The security equipment and upgraded storage facilities funded by DOE will require a long-term commitment by the countries to help ensure their continued use and operation, and it is not clear to us that a sustained stream of funding will be made available by DOE or by recipient countries to maintain and/or replace aging or defective equipment. Moreover, there are continuing concerns that many of the countries do not have adequate nuclear regulatory infrastructures in place to promote sustainability. Without a comprehensive sustainability plan that adequately addresses a country’s ability to reliably install and maintain upgrades and provide adequate oversight for source security, DOE risks losing a significant portion of its investment to improve the security of radiological sources in many countries. 
Furthermore, DOE’s decision to increase physical security requirements for sites selected for upgrades, based on revised threat protection criteria, may have significant cost implications for a program that is already facing severe budget reductions. This raises concerns because DOE has not adequately evaluated the increased costs associated with its elevated threat protection criteria. This may also be an opportune time for DOE to streamline the program, particularly in light of budget reductions. We question, for example, how certain program activities, such as the development of the Interpol database, directly contribute to the program’s core mission of securing radiological sources in other countries. There are other management issues that require DOE’s attention. First, DOE has not developed meaningful performance measurements to demonstrate the extent to which the radiological threat has been reduced as a direct result of its efforts, including measuring the impact of training and distinguishing between the types of sources secured. Second, we recognize the pool of reliable contractors to implement security projects and provide adequate training may be limited in some countries. However, many project delays could be avoided in the future if DOE developed specific selection criteria or a set minimum standard for foreign contractor qualifications. Improving radiological source security is a shared responsibility. DOE’s investment has been significant and reflects a commitment to addressing the problem. However, DOE should not underwrite the majority of the costs on behalf of the international community. Specifically, certain EU accession candidates and FSU countries, most prominently Russia, should be willing to contribute more resources to improve the security of dangerous and vulnerable sources in their own countries. 
In addition, DOE now has the authority to accept foreign contributions for GTRI programs from other interested countries, such as Canada, Japan, and Norway. However, gaps in communication between DOE and international partners, such as IAEA and the EC, significantly impede effective global radiological threat reduction. Finally, developing foreign countries’ nuclear regulatory organizations is a well-recognized and critical component in strengthening radiological source security worldwide. NRC has a long-standing history of promoting regulatory controls in the FSU and should, in our view, play a more prominent role in this regard. DOE’s refusal to transfer $5 million from its appropriations to NRC to conduct regulatory development activities, despite the direction of the Senate Appropriations Committee, underscores NRC’s limited ability to provide international assistance while it remains reliant on funding from other agencies. Most of the coordination problems we identified between NRC and other agencies could have been avoided if NRC had its own stream of predictable and reliable funding for international regulatory development, rather than having to rely on DOE or State for funds. However, without a direct appropriation, NRC will continue to depend on other agencies for funds, thus increasing the likelihood that similar problems will occur in the future. To help ensure that DOE’s program focuses on securing the highest priority radiological sources and sites, we recommend that the Secretary of Energy and the Administrator of the National Nuclear Security Administration take the following two actions: Limit the number of hospitals and clinics containing radiological sources that receive security upgrades to only those deemed as the highest-risk, and To the extent possible, accelerate efforts to remove as many RTGs in Russia and, as an interim measure, improve the security of those remaining until they can be removed from service. 
Furthermore, we recommend that the Secretary of Energy and the Administrator of the National Nuclear Security Administration take the following seven actions to improve program management: Develop a long-term sustainability plan for security upgrades that includes, among other things, future resources required to implement such a plan; Reevaluate program activities and eliminate those that do not directly contribute to securing the highest priority radiological sources in other countries; Conduct an analysis to determine the projected costs associated with increased security upgrades in light of newly proposed threat protection criteria and limit the number of sites to receive increased security upgrades until such an analysis has been completed; Establish meaningful performance measurements that demonstrate real risk reduction and go beyond a quantitative listing of the number of countries and sites that have received physical security upgrades; Apply a more rigorous approach to foreign contractor selection to help reduce potential project delays in the future; Seek assurances from recipient countries that plans are in place to maintain security-related equipment and facilities funded by the United States; and Develop strategies to encourage cost sharing with recipient countries, including Russia and EU accession countries. Finally, in an effort to improve coordination, the Secretary of Energy and the Administrator of the National Nuclear Security Administration, in consultation with the Secretary of State and the Chairman of the Nuclear Regulatory Commission, should work with IAEA and European Commission officials to consider ways to systematically improve information sharing to maximize and leverage resources and institutional expertise. 
If the Congress believes that regulatory infrastructure development is the key to the long-term sustainability of radiological source security efforts, it should consider providing NRC with authority and a direct appropriation to conduct these activities. The appropriation would be provided to NRC in lieu of providing the funds to DOE or another agency to reimburse NRC for its activities. Should the Congress decide to do so, NRC’s efforts need to be fully coordinated with those of State, DOE, and IAEA. We provided DOE and NRC with draft copies of this report for their review and comment. DOE provided written comments, which are presented as appendix III. NRC’s written comments are presented as appendix IV. NRC also provided technical comments, which we incorporated in the report. NRC neither agreed nor disagreed with our matter for congressional consideration, which would provide NRC with the legal authority and a direct appropriation to conduct international regulatory activities for radiological source security. However, NRC stated that if Congress acts upon our matter for consideration, NRC would work closely with State, relevant executive branch agencies, and IAEA to implement the program. In its written comments, DOE agreed with our conclusion that the department faced a considerable challenge in securing other countries’ most dangerous radiological sources, given the number of these sources and how widely dispersed they are. Furthermore, DOE stated that enormous amounts of dangerous material have not been secured, although the IRTR program has achieved a great deal of threat reduction in a short period of time. DOE stated that the recommendations were very helpful and would further strengthen its program. DOE also noted that it had measures in place—as a result of its reorganization of GTRI—to address program challenges and concerns that we raised, such as site prioritization; quality assurance/sustainability; coordination; and transportation. 
We recognized in the report that the reorganization of the program was a step in the right direction toward improving program management. However, as we noted in our report, many significant management issues still need to be addressed and resolved despite the reorganization. That is why we believe it was important to offer recommendations to improve program management and source prioritization efforts. In other comments, DOE stated that the IRTR program uses a number of factors to determine priority levels for the sites it selects to upgrade in addition to the amount of radioactivity contained in radiological sources. These other factors include (1) known terrorist threat in the country/region; (2) current level of security at the site; and (3) the proximity of the site in relationship to potential strategic targets of U.S. interest. In our report, we stated that site selection was based on a number of factors, including those specifically noted by DOE in its written comments. We also pointed out in our report that DOE’s guidance on site selection has not clearly discriminated between the different sites secured and which sites were to be considered the highest priority. We are encouraged that DOE is explicitly linking its prioritization guidelines to a site’s proximity to potential strategic targets of U.S. interest. However, it remains to be seen how consistently DOE will apply these criteria to its site selection process in the future. In a related comment, DOE stated that it will continue to accelerate RTG recoveries but must also address high priority medical and other sources. In our view, this action by DOE would be consistent with the key conclusions and recommendations in our report. Our recommendations specifically state that DOE should, to the extent possible, remove as many RTGs in Russia and limit the number of hospitals and clinics containing radiological sources that receive security upgrades to only those deemed to be the highest risk. 
With regard to quality assurance and program sustainability issues, DOE stated that it employs a standard process that ensures quality assurance for the security equipment that it installs. This process includes, among other things, conducting post-installation visits by technical experts for the purpose of assuring that all equipment and systems are installed as agreed upon. DOE also noted that despite these measures, it would further investigate its process to identify and implement additional improvements. We think DOE should take these steps because, as discussed in our report, we identified several problems with malfunctioning equipment and other maintenance problems at sites containing radiological sources. DOE also noted that it has a short-term sustainability program for every site that it upgrades that includes a 3-year warranty as well as preventative maintenance contracts and training for operational staff. DOE believes that we should revise the report to indicate the existence of the 3-year warranty. Our report recognizes that DOE’s program guidance calls for preventative maintenance contracts and training. We also noted that DOE provides a 3-year warranty, and we gave DOE credit for providing this coverage. Our main point remains—which DOE explicitly agreed with—that DOE has not developed a long-term sustainability plan for the equipment it has installed. Nevertheless, we clarified our report language, as appropriate, to state that DOE does have a short-term sustainability plan but has not developed a long-term plan to maintain the security upgrades completed. Regarding coordination, DOE cited numerous examples in its written comments of close cooperation with other U.S. government agencies, other DOE elements, and international partners on matters pertaining to international radiological source security. We believe the report fairly characterized DOE’s coordination efforts in each of these areas. 
Specifically, we noted that DOE had improved coordination with State and NRC since we reported on this matter in 2003 and has increased information-sharing with the agencies. In addition, we believe our characterization of coordination problems within the department is correct. Our evaluation was based on information provided by an independent consultant’s report as well as our own analysis of conditions we found within the department pertaining to inconsistent and, at times, inadequately coordinated efforts by different DOE programs responsible for threat reduction activities in the same country. As we noted in the report, DOE officials recognized that coordination within the agency needs to be improved and that a comprehensive and consistent approach to threat reduction efforts between nuclear and radiological threat reduction activities should be established. We also noted in the report that DOE’s September 2006 reorganization of its GTRI efforts is designed to create a more streamlined structure that is organized along three geographic regions, which could improve program coordination. On a related matter, DOE stated that we should have given IAEA an opportunity to review and address some of the issues raised in our report about limited information sharing, which impeded DOE’s ability to target the most vulnerable sites and countries for security improvements. Since this information was provided to us by DOE officials, it is unclear to us what benefit would have been achieved by providing a draft of this report to IAEA for review and comment on DOE’s views. Our report notes that DOE has, despite some information-sharing problems with IAEA, improved coordination with the agency in recent years to strengthen controls over other countries’ radiological sources. Finally, with regard to transportation of sources, DOE commented that, among other things, it had been working with the U.S. 
Department of Transportation, IAEA, and key IAEA donor countries to strengthen transport security regulations. We added this information to our report based on DOE’s comments. DOE also stated that it was working with Russia to enhance the security of radioactive materials, including providing cargo trucks and escort vehicles for the Moscow waste storage facility. We had already recognized this fact in the report. More broadly, however, we believe that the report accurately and fairly depicts the limitations of DOE efforts regarding transportation security. A primary source of information for our observation came directly from a DOE analysis—cited in the report—which concluded that the department was addressing transportation security on an ad-hoc basis and that the existing method of providing transportation security had serious limitations and lacked a commitment to integrate transport security into all countries participating in the IRTR program. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. We will then send copies of this report to the Secretary of Energy; the Secretary of State; the Administrator, National Nuclear Security Administration; the Chairman, Nuclear Regulatory Commission; the Director, Office of Management and Budget; and interested congressional committees. We will also make copies available to others upon request. In addition, the report will be made available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have questions concerning this report, please contact me at (202) 512-3841 or aloisee@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs can be found on the last page of this report. Key contributors to this report include Erika D. Carter, Glen Levis, Mehrunisa Qayyum, Keith Rhodes (GAO’s Chief Technologist), and Jim Shafer. 
We focused our review primarily on the Department of Energy (DOE), since it is the lead federal agency for improving the security of radiological sources worldwide and provides significant funds for that purpose. We also performed work at the Nuclear Regulatory Commission (NRC) and Department of State (State) in Washington, D.C., which also provide assistance to help other countries secure their sealed radiological sources. In addition, we reviewed program-related activities and interviewed program officials from Argonne National Laboratory in Argonne, Illinois; the Los Alamos National Laboratory in Los Alamos, New Mexico; Pacific Northwest National Laboratory in Richland, Washington; Sandia National Laboratories in Albuquerque, New Mexico; the International Atomic Energy Agency (IAEA) in Vienna, Austria; and the European Commission (EC) in Brussels, Belgium. We also met with nongovernmental organizations, including the Council on Foreign Relations and the Carnegie Endowment for International Peace. In November 2005, we attended the Trilateral Commission meeting held in the United Kingdom, which discussed international approaches to securing radiological sources against terrorism. We visited four countries to determine how DOE has implemented its program to secure radiological sources overseas. We selected these countries based on several criteria, including where DOE has spent the most funds since 2002. Overall, these four countries represented about $37.4 million, or about 35 percent, of overall program expenditures. We selected Lithuania and Poland since, among other reasons, DOE officials told us that these were model countries in securing radiological sources and implementing effective physical security upgrades. Also, we selected Russia and Georgia because they received significant program funds, totaling about $34.2 million of the $107.7 million. In addition, thousands of radiological sources are located in these two countries. 
Russia contains the majority of RTGs worldwide and operates 44 percent of all Radons in the former Soviet Union. During our review, we observed physical security upgrades at all types of sites: medical, industrial, research, storage facilities, and RTGs. For instance, we visited numerous medical and industrial sites throughout Lithuania and Poland. Specifically in Lithuania, we visited the Radiation Protection Center, Vilnius Oncology Institute Clinic, Klaipeda City Hospital, the Kaunas Oncology Clinic, and Saiuliu Oncology Hospital, as well as the Lithuanian Institute of Physics and the Maisiagala Repository. In Poland, we visited the Regionaine Centreem Kriwodawstwa I Krwiolecznictwa (Children’s Hospital) as well as the Glowny Urzad Miar (Main Measurement Office), Polytechnic Institute of Lodz, Radioisotope Center (Polatom), Geofizyka Krakow, Radioisotope Waste Management Plant in Swierk, Technical University Institute of Applied Radiation Chemistry, and the Technical Institute of Applied Physics. At each location, we interviewed facility staff who were responsible for implementing radiological source security procedures and using the monitoring equipment funded by DOE. Facility staff included—but were not limited to—doctors, clinical technicians, and other medical support staff. At each site, we met with local guards to determine how well they were trained and equipped. We also interviewed host country contractors who were responsible for installing and maintaining physical security upgrades. We also met with host government officials in both countries. In Lithuania, we met with officials from the Ministry of Economy; RATA (Lithuanian Radioactive Waste Management Agency); the Radiation Protection Center (nuclear regulatory organization); and the Ministry of Environment. In Poland, we interviewed officials from the National Atomic Energy Agency (Poland’s nuclear regulator), the Department of Environmental Hygiene, and the Ministry of Health. 
We also visited Russia and Georgia to obtain a first-hand look at waste facilities that contain radiological sources. Specifically, we visited the Moscow Radon site at Sergiev Posad, located about 90 kilometers from Moscow, and the St. Petersburg Radon site, located about 80 kilometers from St. Petersburg. While in Russia, we also met with the key federal agencies responsible for radiological source management and oversight. Specifically, we met with several high-level officials from Rostechnadzor, Russia’s nuclear regulator (the Federal Environmental, Industrial and Nuclear Supervision Service of Russia); the Federal Agency for Construction and Utilities; and the Department for Nuclear and Radiation Safety at the Federal Atomic Energy Agency. Additionally, we interviewed directors of both the Moscow and St. Petersburg Radon facilities; officials of the IBRAE Institute (Russian National Academy of Sciences); and directors of VNIITFA (Russian National Technical Physics and Automation Research Institute), the designer of RTGs. Moreover, after meeting with officials from the Kurchatov Institute, which is primarily responsible for RTG removal, we visited three sites where RTGs had been removed and replaced with alternative energy sources. In Georgia, we visited the Mtsheta national repository located at the Institute of Physics near Tbilisi, Georgia, as well as Georgia’s temporary national storage facility, which stores many high-risk radiological sources, including six RTGs and a seed irradiator. Regarding Georgia’s medical sites, we also visited the National Cancer Center of Georgia and the Kutaisi Oncological Center and interviewed staff and guards who were responsible for source security. We met with officials from the Nuclear and Radiation Safety Service of the Ministry of Environmental Protection and Natural Resources (Georgia’s nuclear regulator), the Nuclear and Radiation Safety Department, the Institute of Radiobiology, and the Chamber of Control. 
To assess the progress of DOE’s efforts to help other countries secure their sealed radiological sources, we obtained and analyzed documentation on DOE’s International Radiological Threat Reduction Program (IRTR), including project work plans for each country and program activity; strategic planning documents; and internal briefings. For example, we reviewed DOE’s Action Plan to Secure and Control Foreign-Origin Source Materials for Radiological Dispersal Devices (April 2003) and Programmatic Guidelines for Site Prioritization and Protection Implementation (September 2006). We supplemented the documentation with interviews with senior-level DOE officials responsible for implementing the IRTR program. To specifically determine the status of efforts across the 49 countries receiving DOE’s assistance, we reviewed DOE’s Project Management Information System database to construct a summary table that included, among other things, the number of sites completed; host country agencies and international organizations involved in radiological source security; and program accomplishments and challenges. To identify challenges DOE faces in securing sources in other countries and to assess sustainability efforts, we collected and analyzed (1) IRTR program trip reports for all countries participating in the program, and (2) testimonial evidence obtained from project managers, security specialists, and contracting officers to identify all programmatic and management challenges. Furthermore, we performed a comprehensive review and analysis of trip reports from fiscal year 2004 through fiscal year 2006. To assess current and planned program costs of U.S. programs that provide assistance to secure radiological sources in other countries, we reviewed budget documents from DOE and NRC detailing program expenditures from fiscal year 2002 through fiscal year 2006. 
We obtained responses from key agency database officials to a number of questions focused on data reliability, covering issues such as data-entry access and the accuracy and completeness of the data. For DOE specifically, to determine how much DOE had budgeted and spent through August 31, 2006, to secure radiological sources in other countries, we reviewed element of cost reports detailing program expenditures by country, national laboratory, and program objective per fiscal year to determine the amount spent in-country and the overall carryover of unspent and unobligated funds. Furthermore, to determine planned program costs for DOE, we reviewed DOE’s congressional budget request for fiscal year 2007 and met with senior DOE officials to learn about DOE’s plans for addressing reduced program funding. Follow-up questions were asked whenever necessary. Caveats and limitations to the data were noted in the documentation, where necessary. Based on this work, we determined that the data were sufficiently reliable for the purposes of this report. To assess the extent to which coordination has occurred within DOE as well as on an interagency basis, we obtained and analyzed documents from DOE, NRC, and State regarding their radiological threat reduction and nonproliferation activities. We interviewed several senior officials at NRC, including the Senior Advisor for Nuclear Security, a senior foreign policy advisor for the Office of International Programs, and a Senior Engineer. At State, we interviewed several high-level officials, including the Senior Coordinator for Nuclear Safety from the Bureau of International Security and Nonproliferation. We also reviewed State, NRC, and DOE documents regarding Iraq work to highlight interagency coordination. 
To address the level of coordination with international organizations, we met with senior officials at the International Atomic Energy Agency and the European Commission, including the Director of Nuclear Safety, and a senior official from the External Relations Directorate, respectively. Finally, we met with the Director of the Nuclear and Radiation Safety Centre from the Armenian Nuclear Regulatory Authority to learn about NRC’s role in providing regulatory assistance to Armenia. We performed our review in Washington, D.C., and other locations, from November 2005 to December 2006 in accordance with generally accepted government auditing standards.
Following the terrorist attacks of September 11, 2001, U.S. and international experts raised concerns that unsecured radiological sources were vulnerable to theft and posed a significant security threat to the United States and the international community. Radioactive material is encapsulated or sealed in metal to prevent its dispersal and is commonly called a sealed radiological source. Sealed radiological sources are used worldwide for many legitimate purposes, such as medical, industrial, and agricultural applications. However, the total number of these sources in use worldwide is unknown because many countries do not systematically account for them. It is estimated that thousands of these sources have been lost, stolen, or abandoned—commonly referred to as orphan sources. If certain types of these sources were obtained by terrorists, they could be used to produce a simple and crude, but potentially dangerous, weapon—known as a radiological dispersal device, or dirty bomb. In 2001, a congressional report directed DOE to use a portion of its fiscal year 2002 supplemental appropriation to address the threat posed by dirty bombs. In response to the congressional requirement, the National Nuclear Security Administration (NNSA) established the Radiological Threat Reduction Task Force to identify, recover, and secure vulnerable, high-risk radiological sources, budgeting $20.6 million for the program in fiscal year 2002. The program initially focused on securing sources in the countries of the former Soviet Union (FSU) because DOE officials determined this region had the greatest number of vulnerable sources. In 2003, at the direction of the Secretary of Energy, DOE expanded the scope of the program to secure sealed sources worldwide, ultimately establishing the International Radiological Threat Reduction (IRTR) Program. The program's primary objective is to protect U.S. 
national security interests by (1) implementing rapid physical security upgrades at vulnerable sites containing radioactive sources; (2) locating, recovering, and consolidating lost or abandoned high-risk radioactive sources; and (3) supporting the development of the infrastructure necessary to sustain security enhancements and supporting regulatory controls, including the development of regional partnerships to leverage international resources. In addition, DOE has established a program to recover sealed sources produced and distributed in the United States, known as the U.S. Radiological Threat Reduction program. Part of this program's mission is to recover U.S.-origin sources on a case-by-case basis that were supplied by DOE to other countries under the Atoms for Peace program. The IRTR program is administered by NNSA with support from multiple national laboratories. The national laboratories' responsibilities include (1) assessing the physical security requirements of countries participating in the program, (2) recommending specific upgrades to strengthen radiological source security, and (3) ensuring that recommended upgrades are properly installed. In 2003, we issued a report at Congress' request focusing on U.S. and international efforts to secure sealed radiological sources. We recommended, among other things, that the Secretary of Energy take the lead in developing a comprehensive plan to strengthen controls over other countries' sealed sources. This report (1) assesses the progress the Department of Energy (DOE) has made in implementing its program to help other countries secure their sealed radiological sources, (2) identifies DOE's current and planned program costs, and (3) describes DOE's coordination with other U.S. agencies and international organizations to secure radiological sources in other countries. 
DOE has improved the security of hundreds of sites that contain radiological sources in more than 40 countries since the program's inception in 2002. However, many of the highest-risk and most dangerous sources still remain unsecured, particularly in Russia. In 2003, when DOE decided to broaden the program's scope beyond the former Soviet Union, it also expanded the types of sites that required security upgrades. As a result, as of September 2006, almost 70 percent of all sites secured were medical facilities, which generally contain one radiological source. Several DOE and national laboratory officials with whom we spoke questioned the benefit of upgrading such a large number of medical facilities while higher-priority sites—such as waste storage facilities and Radioisotope Thermoelectric Generators (RTGs)—remained unsecured. In addition, DOE's program does not address the transportation of radiological sources from one location to another, a security measure that DOE and international officials have identified as the most vulnerable link in the radiological supply chain. DOE has experienced numerous problems and challenges implementing its program to secure radiological sources worldwide, including a lack of cooperation from some countries and a lack of access to sites with dangerous material. Furthermore, some high-risk countries have not given DOE permission to undertake security upgrades at all. Finally, DOE has not developed a plan to ensure that countries receiving security upgrades will be able to sustain them over the long term. From its inception in 2002 through August 31, 2006, DOE spent approximately $108 million to implement its program to secure radiological sources worldwide. 
A majority of the funds spent—$68 million—was used to (1) conduct vulnerability assessments at a variety of sites containing radiological sources; (2) install physical security upgrades at these sites, such as hardened windows and doors, motion sensors, and surveillance cameras; and (3) help countries draft laws and regulations to increase security and accounting of sources. In addition, DOE provided $13.5 million to IAEA to support activities to strengthen controls over radiological sources in IAEA member states. The remainder, or $26.5 million, paid for program planning activities such as developing program guidance documents, hiring private consultants, and conducting studies. To offset anticipated shortfalls in funding, DOE plans to obtain international contributions from other countries, but efforts to date have produced limited results. DOE has improved coordination with the Department of State (State) and the Nuclear Regulatory Commission (NRC) to secure radiological sources worldwide. Since we reported on this matter in 2003, DOE has involved State and NRC in its international radiological threat reduction activities more often and has increased information-sharing with the agencies. Additionally, DOE and NRC supported a State-led interagency effort to establish the Iraq Radioactive Source Regulatory Authority and develop a radiological regulatory infrastructure in Iraq. However, DOE has not always integrated its nuclear regulatory development efforts efficiently. In addition, DOE has not adequately coordinated the activities of multiple programs within the agency responsible for securing radiological and nuclear materials in other countries. DOE has generally improved coordination with IAEA to strengthen controls over other countries' radiological sources and has developed bilateral and multilateral partnerships with IAEA member states to improve their regulatory infrastructures. 
However, significant gaps in information-sharing between DOE and IAEA, and with the EC, have impeded DOE's ability to target the most vulnerable sites for security improvements and to avoid possible duplication of efforts.
The U.S. export control system for items with military applications is divided into two regimes. State licenses munitions items, which are designed, developed, configured, adapted, or modified for military applications, and Commerce licenses most dual-use items, which are items that have both commercial and military applications. Although the Commerce licensing system is the primary vehicle to control dual-use items, some dual-use items—those of such military sensitivity that stronger control is merited—are controlled under the State system. Commercial communications satellites are intended to facilitate civil communication functions through various media, such as voice, data, and video, but they often carry military data as well. In contrast, military communications satellites are used exclusively to transfer information related to national security and have one or more of nine characteristics that allow the satellites to be used for such purposes as providing real-time battlefield data and relaying intelligence data for specific military needs. The technologies used to integrate a satellite with its launch vehicle are similar to those used in ballistic missiles. In March 1996, the executive branch announced a change in licensing jurisdiction, transferring two items—commercial jet engine hot section technologies and commercial communications satellites—from State to Commerce. In October and November 1996, Commerce and State published regulations implementing this change, with Commerce defining enhanced export controls to apply when licensing these two items. State and Commerce’s export control systems are based on fundamentally different premises. The Arms Export Control Act gives the State Department the authority to use export controls to further national security and foreign policy interests, without regard to economic or commercial interests. 
In contrast, the Commerce Department, as the overseer of the system created by the Export Administration Act, is charged with weighing U.S. economic and trade interests along with national security and foreign policy interests. Differences in the underlying purposes of the control systems are manifested in the systems’ structure. Key differences reflect who participates in licensing decisions, the scope of controls, the time frame for the decision, coverage by sanctions, and requirements for congressional notification. Participants. Commerce’s process involves five agencies—the Departments of Commerce, State, Defense, Energy, and the Arms Control and Disarmament Agency. Other agencies can be asked to review specific license applications. For most items, Commerce approves the license if there is no disagreement from reviewing agencies. When there is a disagreement, the chair of an interagency group known as the Operating Committee, a Commerce official, makes the initial decision after receiving input from the reviewing agencies. This decision can be appealed to the Advisory Committee on Export Policy, a sub-cabinet-level group composed of officials from the same five agencies, and from there to the cabinet-level Export Administration Review Board, and then to the President. In contrast, the State system commonly involves only Defense and State. While no formal multi-level review process exists, Defense officials stated that license applications for commercial communications satellites are frequently referred to other agencies, such as the Arms Control and Disarmament Agency, the National Security Agency, and the Defense Intelligence Agency. Day-to-day licensing decisions are made by the Director, Office of Defense Trade Controls, but disagreements could be discussed through organizational levels up to the Secretary of State. 
This difference in who makes licensing decisions underscores the weight the two systems assign to economic and commercial interests relative to national security concerns. Commerce, as the advocate for commercial interests, is the focal point for the process and makes the initial determination. Under State’s system, Commerce is not involved, underscoring the primacy of national security and foreign policy concerns. Scope of Controls. The two systems also differ in the scope of controls. Commerce controls items to specific destinations for specific reasons. Some items are subject to controls targeted to former communist countries, while others are controlled to prevent them from reaching countries for reasons that include antiterrorism, regional stability, and nonproliferation. In contrast, munitions items are controlled to all destinations, and State has broad authority to deny a license; it can deny a request simply with the explanation that it is against U.S. national security or foreign policy interests. Time Frames. Commerce’s system is more transparent to the license applicant than State’s system. Time frames are clearly established, the review process is more predictable, and more information is shared with the exporter on the reasons for denials or conditions on the license. Congressional Notification. Exports under State’s system that exceed certain dollar thresholds (including all satellites) require notification to the Congress. Licenses for Commerce-controlled items are not subject to congressional notification, with the exception of items controlled for antiterrorism. Sanctions. The applicability of sanctions may also differ under the two export control systems. Commercial communications satellites are subject to two important types of sanctions: (1) Missile Technology Control Regime and (2) Tiananmen Square sanctions. 
Under Missile Technology sanctions, both State and Commerce are required to deny the export of identified, missile-related goods and technologies. Communications satellites are not so identified but contain components that are identified as missile-related. When the United States imposed Missile Technology sanctions on China in 1993, exports of communications satellites controlled by State were not approved, while exports of satellites controlled by Commerce were permitted. Under Tiananmen Square sanctions, satellites licensed by State and Commerce receive identical treatment. These sanctions prohibit the export of satellites for launch from launch vehicles owned by China. However, the President can waive this prohibition if such a waiver is in the national interest. Export control of commercial communications satellites has been a matter of contention over the years among U.S. satellite manufacturers and the agencies involved in their export licensing jurisdiction—the Departments of Commerce, Defense, State, and the intelligence community. To put their views in context, I would now like to provide a brief chronology of key events in the transfer of commercial communications satellites to the Commerce Control List. As the demand for satellite launch capabilities grew, U.S. satellite manufacturers looked abroad to supplement domestic facilities. In 1988, President Reagan proposed that China be allowed to launch U.S.-origin commercial satellites. The United States and China signed an agreement in January 1989 under which China agreed to charge prices for commercial launch services similar to those charged by other competitors for launch services and to launch nine U.S.-built satellites through 1994. Following the June 1989 crackdown by the Chinese government on peaceful political demonstrations in Tiananmen Square in Beijing, President Bush imposed export sanctions on China. 
President Bush subsequently waived these sanctions for the export of three U.S.-origin satellites for launch from China. In February 1990, Congress passed the Tiananmen Square sanctions law (P.L. 101-246) to suspend certain programs and activities relating to the People’s Republic of China. This law also suspends the export of U.S. satellites for launch from Chinese-owned vehicles. In November 1990, the President ordered the removal of dual-use items from State’s munitions list unless significant U.S. national security interests would be jeopardized. This action was designed to bring U.S. controls in line with the industrial (dual-use) list maintained by the Coordinating Committee for Multilateral Export Controls, a multilateral export control arrangement. Commercial communications satellites were contained on the industrial list. Pursuant to this order, State led an interagency review, including officials from Defense, Commerce, and other agencies, to determine which dual-use items should be removed from State’s munitions list and transferred to Commerce’s jurisdiction. The review was conducted between December 1990 and April 1992. As part of this review, a working group identified and established performance parameters for the militarily sensitive characteristics of communications satellites. During the review period, industry groups supported moving commercial communications satellites, ground stations, and associated technical data to the Commerce Control List. In October 1992, State issued regulations transferring jurisdiction of some commercial communications satellites to Commerce. These regulations also defined what satellites remained under State’s control by listing nine militarily sensitive characteristics that, if included in a commercial communications satellite, warranted its control on State’s munitions list. (These characteristics are discussed in app. I.) 
The regulations noted that parts, components, accessories, attachments, and associated equipment (including ground support equipment) remained on the munitions list, but could be included on a Commerce license application if the equipment was needed for a specific launch of a commercial communications satellite controlled by Commerce. After the transfer, Commerce noted that this limited transfer only partially fulfilled the President’s 1990 directive. Export controls over commercial communications satellites were again taken up in September 1993. The Trade Promotion Coordinating Committee, an interagency body composed of representatives from most government agencies, issued a report in which it committed the administration to review dual-use items on the munitions list, such as commercial communications satellites, to expedite moving them to the Commerce Control List. Industry continued to support the move of commercial communications satellites, ground stations, and associated technical data from State to Commerce control. In April 1995, the Chairman of the President’s Export Council met with the Secretary of State to discuss issues related to the jurisdiction of commercial communications satellites and the impact of sanctions that affected the export and launch of satellites to China. Also in April 1995, State formed the Comsat Technical Working Group to examine export controls over commercial communications satellites and to recommend whether the militarily sensitive characteristics of satellites could be more narrowly defined consistent with national security and intelligence interests. This interagency group included representatives from State, Defense, the National Security Agency, Commerce, the National Aeronautics and Space Administration, and the intelligence community. The interagency group reported its findings in October 1995. 
Consistent with the findings of the Comsat Technical Working Group and with the input from industry through the Defense Trade Advisory Group, the Secretary of State denied the transfer of commercial communications satellites to Commerce in October 1995 and approved a plan to narrow, but not eliminate, State’s jurisdiction over these satellites. Unhappy with State’s decision to retain jurisdiction of commercial communications satellites, Commerce appealed it to the National Security Council and the President. In March 1996, the President, after additional interagency meetings on this issue, announced the transfer of export control authority for all commercial communications satellites from State to Commerce. A key part of these discussions was the issuance of an executive order in December 1995 that modified Commerce’s procedures for processing licenses. This executive order required Commerce to refer all licenses to State, Defense, Energy, and the Arms Control and Disarmament Agency. This change addressed a key shortcoming that we had reported on in several prior reviews. In response to the concerns of Defense and State officials about this transfer, Commerce agreed to add additional controls to exports of satellites designed to mirror the stronger controls already applied to items on State’s munitions list. Changes included the establishment of a new control, the significant item control, for the export of sensitive satellites to all destinations. The policy objective of this control—consistency with U.S. national security and foreign policy interests—is broadly stated. The functioning of the Operating Committee, the interagency group that makes the initial licensing determination, was also modified. This change required that the licensing decision for these satellites be made by majority vote of the five agencies, rather than by the chair of the Committee. 
Satellites were also exempted from other provisions governing the licensing of most items on the Commerce Control List. In October and November 1996, Commerce and State published changes to their respective regulations, formally transferring licensing jurisdiction for commercial communications satellites with militarily sensitive characteristics from State to Commerce. Additional procedural changes were implemented through an executive order and a presidential decision directive issued in October 1996. According to Commerce officials, the President’s March 1996 decision reflected Commerce’s long-held position that all commercial communications satellites should be under its jurisdiction. Commerce argued that these satellites are intended for commercial end use and are therefore not munitions. Commerce maintained that transferring jurisdiction to the dual-use list would also make U.S. controls consistent with treatment of these items under multilateral export control regimes. Manufacturers of satellites supported the transfer of commercial communications satellites to the Commerce Control List. They believed that such satellites are intended for commercial end use and are therefore not munitions subject to State’s licensing process. They also believed that the Commerce process was more responsive to business due to its clearly established time frames and predictability of the licensing process. Under State’s jurisdiction, the satellites were subject to Missile Technology sanctions requiring denial of exports and to congressional notifications. Satellite manufacturers also expressed the view that some of the militarily sensitive characteristics of communications satellites are no longer unique to military satellites. State and Defense point out that the basis for including items on the munitions list is the sensitivity of the item and whether it has been specifically designed for military applications, not how the item will be used. 
These officials have expressed concern about the potential for improvements in missile capabilities through disclosure of technical data to integrate the satellite with the launch vehicle and the operational capability that specific satellite characteristics could give a potential adversary. The process of planning a satellite launch takes several months, and there is concern that technical discussions between U.S. and foreign representatives may lead to the transfer of information on militarily sensitive components. Defense and State officials said they were particularly concerned about the technologies to integrate the satellite to the launch vehicle because this technology can also be applied to launch ballistic missiles to improve their performance and reliability. Accelerometers, kick motors, separation mechanisms, and attitude control systems are examples of equipment used in both satellites and ballistic missiles. State officials said that such equipment and technology merit control for national security reasons. They also expressed concern about the operational capability that specific characteristics, in particular antijam capability, crosslinks, and baseband processing, could give a potential adversary. No export license application for a satellite launch has been denied under either the State or Commerce systems. Therefore, the conditions attached to the license are particularly significant. Exports of U.S. satellites for launch in China are governed by a government-to-government agreement addressing technology safeguards. This agreement establishes the basic authorities for the U.S. government to institute controls intended to ensure that sensitive technology is not inadvertently transferred to China. This agreement is one of three government-to-government agreements with China on satellites. The others address pricing and liability issues. 
During our 1997 review and in recent discussions, officials pointed to two principal mechanisms for safeguarding technology: technology transfer control plans and the presence of Department of Defense monitors during the launch of the satellites. State or Commerce may include these safeguards as conditions on licenses.

Technology transfer control plans are prepared by the exporter and approved by Defense. The plans outline the internal control procedures the company will follow to prevent the disclosure of technology except as authorized for the integration and launch of the satellite. These plans typically include requirements for the presence of Defense monitors at technical meetings with Chinese officials, as well as procedures to ensure that Defense reviews and clears the release of any technical data provided by the company. Defense monitors at the launch help ensure that physical security over the satellite is maintained and monitor any on-site technical meetings between the company and Chinese officials. Authority for these monitors to perform this work in China is granted under the terms of the government-to-government safeguards agreement.

Additional government control may be exercised over technology transfers through State's licensing of technical assistance and technical data. State technical assistance agreements detail the types of information that can be provided and give Defense an opportunity to scrutinize the type of information being considered for export. Technical assistance agreements, however, are not always required for satellite exports to China. While such licenses were required for satellites licensed for export by State, Commerce-licensed satellites are not subject to a separate technical assistance licensing requirement.
The addition of new controls over satellites transferred to Commerce's jurisdiction in 1996 addressed some of the key areas in which Commerce's procedures are less stringent than State's. Differences remain, however, in how the export of satellites is controlled under the new procedures.

Congressional notification requirements no longer apply, although Congress is currently notified because of the Tiananmen waiver process. Sanctions do not always apply to items under Commerce's jurisdiction. For example, under the 1993 Missile Technology sanctions, sanctions were not imposed on satellites that included missile-related components.

Defense's power to influence the decision-making process has also diminished since the transfer. State and Defense officials stated that, when satellites were under State jurisdiction, State would routinely defer to Defense's recommendations if national security concerns were raised. Under Commerce jurisdiction, Defense must now either persuade a majority of the other agencies to agree with its position to stop an export or escalate its objection to the cabinet-level Export Administration Review Board, an event that has not occurred in recent years.

Technical information may not be as clearly controlled under the Commerce system. Unlike State, Commerce does not require a company to obtain an export license to market a satellite. Commerce regulations also do not have a separate export commodity control category for technical data, leaving it unclear how this information is licensed. Commerce has informed one large satellite maker that some of this technical data does not require an individual license. Without clear licensing requirements for technical information, Defense does not have an opportunity to review the need for monitors and safeguards or to attend technical meetings to ensure that sensitive information is not inadvertently disclosed.
The additional controls applied to the militarily sensitive commercial communications satellites transferred to Commerce's control in 1996 were not applied to the satellites transferred in 1993. These satellites are therefore reviewed under the normal interagency process and are subject to more limited controls.

This concludes our statement. We appreciate the opportunity to provide this information for the record of this hearing.

Militarily sensitive characteristics of commercial communications satellites and their significance include the following:

Antijam capability. Antennas and/or antenna systems with the ability to respond to incoming interference by adaptively reducing antenna gain in the direction of the interference. Ensures that communications remain open during crises.

Spot beams. Allow a satellite to receive incoming signals. An antenna aimed at a spot roughly 200 nautical miles in diameter or less can become a sensitive radio listening device and is very effective against ground-based interception efforts.

Crosslinks. Provide the capability to transmit data from one satellite to another without going through a ground station. Permit the expansion of regional satellite communication coverage to global coverage and provide source-to-destination connectivity that can span the globe. Crosslinked communications are very difficult to intercept and can be very secure.

Baseband processing. Allows a satellite to switch from one frequency to another with an on-board processor. On-board switching can provide resistance to jamming of signals.

Encryption devices. Scramble signals and data transmitted to and from a satellite, allowing telemetry and control of the satellite, which provides positive control and denies unauthorized access. Certain encryption capabilities have significant intelligence features important to the National Security Agency.

Radiation hardening. Provides protection from the natural and man-made radiation environment in space, which can be harmful to electronic circuits. Permits a satellite to operate in nuclear war environments and may enable its electronic components to survive a nuclear explosion.

Maneuverability. Allows rapid changes when the satellite is on orbit. Military maneuvers require that a satellite have the capability to accelerate faster than a certain speed to cover new areas of interest.

Antenna pointing accuracy. Provides a low probability that a signal will be intercepted. High-performance pointing capabilities provide superior intelligence-gathering capabilities.

Kick motors. Used to deliver satellites to their proper orbital slots. If the motors can be restarted, the satellite can execute military maneuvers because it can move to cover new areas.
GAO discussed the evolution of export controls on commercial communications satellites, focusing on: (1) key elements in the export control systems of the Department of Commerce and the Department of State; (2) how export controls for commercial satellites have evolved over the years; (3) the concerns and issues debated over the transfer of commercial communications satellites to the export licensing jurisdiction of Commerce; and (4) the safeguards that may be applied to commercial satellite exports. GAO noted that: (1) the U.S. export control system--comprising both the Commerce and State systems--is about managing risk; (2) exports to some countries involve less risk than exports to other countries, and exports of some items involve less risk than others; (3) the planning of a satellite launch, with technical discussions and exchanges of information taking place over several months, involves risk no matter which agency is the licensing authority; (4) recent events have focused attention on the appropriateness of Commerce jurisdiction over communications satellites; (5) by design, Commerce's system gives greater weight to economic and commercial concerns, implicitly accepting greater security risks; and (6) State's system gives primacy to national security and foreign policy concerns, lessening--but not eliminating--the risk of damage to U.S. national security interests.
The congressional budget justification for IRS presents funding and FTE information at two levels: (1) appropriation account and (2) budget activity. In addition, it provides descriptions of program activities within each budget activity. IRS has four appropriation accounts:

Enforcement. This account provides funding for the examination of tax returns, both domestic and international; administrative and judicial settlement of taxpayer appeals of examination findings; technical rulings; monitoring of employee pension plans; determination of qualifications of organizations seeking tax-exempt status; examination of tax returns of exempt organizations; enforcement of statutes relating to detection and investigation of criminal violations of the internal revenue laws; identification of underreporting of tax obligations; securing of unfiled tax returns; and collecting of unpaid accounts.

Operations Support. This account provides funding for overall planning, direction, and support for the IRS, including shared service support related to facilities services, rent payments, printing, postage, and security. This appropriation funds headquarters policy and management activities such as corporate support for strategic planning; communications and liaison; finance; human resources; equity, diversity, and inclusion; research and statistics of income; and necessary expenses for information systems and telecommunications support, including development, security, and maintenance of IRS information systems.

Taxpayer Services. This account provides funding for taxpayer service activities and programs, including printing forms and publications, processing tax returns and related documents, offering filing and account services, ensuring the availability of taxpayer assistance, and providing taxpayer advocacy services.

Business Systems Modernization. This account provides resources for the planning and capital asset acquisition of IT to modernize IRS business systems.
Budget activities divide appropriation accounts into additional functions. For example, the Enforcement appropriation is broken into three budget activities: Investigations, Exam and Collections, and Regulatory. Each budget activity, in turn, has multiple program activities. For example, Exam and Collections has 18 program activities.

The congressional justification presents how budget resources could be allocated to appropriation accounts. IRS is restricted from reprogramming funds within appropriation accounts without committee approval if the reprogramming would augment existing programs, projects, or activities (PPA) in excess of $5 million or 10 percent, whichever is less. IRS refers to its PPAs as "budget activities." IRS generally notifies Congress if it expects to reprogram funds from one budget activity to another. However, IRS is not restricted from shifting resources between program activities, which are activities within a budget activity and not subject to reprogramming restrictions. Further, while IRS cannot transfer resources from one appropriation account to another without specific statutory authority to do so, it could shift resources among any of the 18 program activities within the Exam and Collections budget activity without congressional approval or notification.

Within the congressional justification, IRS also details its strategic goals and objectives. According to the fiscal year 2015 congressional justification for IRS, the IRS strategic plan guides program and budget decisions and supports the Department of the Treasury Fiscal Year 2014 to 2017 Strategic Plan and the Agency Priority Goal of expanding the availability and improving the quality of customer service options. The congressional justification also details information on requested program increases (new program initiatives), including why the funds are needed, how they will be used, and the number of staff by position needed to do the work.
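The reprogramming restriction described above turns on the phrase "in excess of $5 million or 10 percent, whichever is less." A minimal sketch can make the interaction of the two limits concrete; the function name and dollar figures here are hypothetical illustrations, not drawn from IRS systems or the congressional justification.

```python
def needs_committee_approval(budget_activity_total, augmentation):
    """Illustrative check of the reprogramming rule described above:
    committee approval is needed when a shift augments a budget activity
    by more than $5 million or 10 percent of the activity's funding,
    whichever is LESS. All names and figures are hypothetical."""
    threshold = min(5_000_000, 0.10 * budget_activity_total)
    return augmentation > threshold

# For a $30 million budget activity, 10 percent ($3 million) is the
# binding limit, so a $4 million augmentation would require approval.
print(needs_committee_approval(30_000_000, 4_000_000))   # True
# For a $200 million activity, the $5 million cap binds instead,
# so the same $4 million shift would not.
print(needs_committee_approval(200_000_000, 4_000_000))  # False
```

The "whichever is less" wording means the percentage test binds for small budget activities and the dollar cap binds for large ones.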
Some of the new program initiatives also include projected return on investment (ROI) calculations: the projected additional revenue collected, divided by the projected cost. For IRS, projected ROI is an estimate of how the agency expects a program initiative to perform. IRS defines three types of ROI:

Revenue producing. The initiative yields direct, measurable results through enforcement activities.

Revenue protecting. The initiative prevents the issuance of fraudulent refunds to persons posing as a taxpayer and resolves issues prior to issuing a refund.

Revenue enhancing. The initiative leads to improved revenue collection through improvements in case selection, issue identification, and enforcement case treatment.

In recent years, some of the new program initiatives have been predicated on a program integrity cap adjustment. Congress passes these adjustments to allow additional funding above discretionary spending limits for certain activities that are expected to generate benefits exceeding their costs. Of the 22 initiatives in the IRS fiscal year 2015 budget request, 17 are predicated on a program integrity cap adjustment.

IRS continues to implement four major new laws passed in recent years:

PPACA. This law reforms the private insurance market and expands health coverage to the uninsured. IRS is responsible for implementing new provisions, including new taxes, tax credits, and information reporting requirements.

Merchant card reporting. This law requires payment settlement entities (e.g., banks) to report fiscal year information and the gross amount of reportable payment transactions (i.e., payment card and third-party network transactions) to IRS.

Cost basis reporting. This law requires investment brokers to report the adjusted cost basis for certain publicly traded securities and whether a gain or loss is short- or long-term.

Foreign Account Tax Compliance Act (FATCA).
This law adds reporting and other requirements relating to income from assets held abroad by: (1) requiring foreign financial and nonfinancial institutions to withhold 30 percent of payments made to such institutions by U.S. individuals, unless such institutions agree to disclose the identity of such individuals and report on their bank transactions, and (2) denying a tax deduction for interest on non-registered bonds issued outside the United States.

IRS's appropriations have ranged from a low of $10.2 billion in fiscal year 2005 to a high of $12.1 billion in fiscal year 2010. The fiscal year 2010 high was in part a result of a program integrity cap adjustment of $890 million. As we have previously reported, IRS's budget declined approximately $900 million between fiscal years 2010 and 2014. These cuts reduced IRS's budget to below fiscal year 2009 funding levels. The largest budget reduction ($618 million) occurred in fiscal year 2013 because of sequestration ($594 million) and a 0.2 percent rescission ($24 million). In fiscal year 2014, IRS's budget remained essentially flat, with an increase of $92 million for improving the identification and prevention of refund fraud and identity theft and for international and offshore compliance issues. For fiscal year 2015, the President requested $12.5 billion for IRS, an increase of 10.5 percent ($1.2 billion) over fiscal year 2014 appropriations. See figure 1.

Staffing declined by more than 10,000 FTEs between fiscal years 2010 and 2014, which reduced IRS's total FTEs to below fiscal year 2009 levels. For fiscal year 2015, the President requested an increase of about 7,000 FTEs over fiscal year 2014, which would bring IRS above fiscal year 2012 levels. See figure 2.

Amid these budget reductions, IRS's performance has declined in Enforcement and Taxpayer Services, and IRS officials anticipate continued declines in fiscal year 2015. As shown in figure 3, IRS lowered its return examination and collection coverage targets.
For example, the original audit coverage target for individual examinations was 1 percent for fiscal year 2014 but was lowered to 0.8 percent in the fiscal year 2015 congressional justification. IRS's performance in assisting taxpayers has also suffered, including telephone and correspondence services. Between fiscal years 2009 and 2013, telephone level of service—the percentage of callers seeking live assistance and receiving it—fluctuated between 61 percent and 74 percent. As of April 19, 2014, after the filing season ended, fiscal year 2014 telephone level of service was 67 percent. In addition, the average amount of time a taxpayer had to wait to talk to a telephone assistor has increased since fiscal year 2009—from 8.8 minutes to 17.0 minutes, as of April 19, 2014. Moreover, from fiscal years 2009 through 2013, overage correspondence—paper correspondence to which IRS has not responded within 45 days of receipt—increased from 25 percent to 47 percent. However, according to IRS, the 2014 filing season was relatively smooth, which IRS attributes to reduced call volume and fewer tax law changes.

IRS attributes the performance declines to sequestration and furloughs. For example, the number of individual, high-income, and business return audits fell from fiscal year 2012 to 2013 because fewer staff were available to conduct audits. Similarly, reports by the IRS Oversight Board and the National Taxpayer Advocate (NTA) attribute the performance declines in part to reduced funding.

As staffing fell, IRS's workload increased in some areas as a result of statutory mandates and priority programs. For fiscal year 2014, IRS allocated 9 percent of its FTEs to these mandates and programs, including those implementing four new laws—PPACA, merchant card reporting, cost basis reporting, and FATCA. In addition, because instances of identity theft have become more frequent, IRS has significantly increased the resources devoted to refund fraud, which includes identity theft.
Since IRS began tracking refund fraud in fiscal year 2011, it has more than quadrupled the number of FTEs allocated to refund fraud, from 1,018 in fiscal year 2011 to 4,146 in fiscal year 2014 (about 5 percent of its total workforce). Table 1 shows the shift in FTEs to legislative mandates and priority programs.

To reduce spending to address sequestration and other budget cuts, IRS has taken steps that include the following key efforts:

Staff attrition and furloughs. IRS absorbed the majority of its budget cuts through staff attrition and furloughs. Of the $567 million in savings realized in fiscal year 2013, $311 million resulted from attrition, hiring freezes, and furloughs. IRS implemented an exception-only hiring freeze in December 2010. In fiscal year 2013 alone, FTEs decreased by more than 2.7 percent (2,978) from fiscal year 2012 as a result of attrition, according to IRS. Of these savings, IRS generated $88.5 million from 3 furlough days.

Reduced travel and training. IRS substantially reduced employee training and related travel. Since fiscal year 2010, IRS reduced training costs by 83 percent and training-related travel costs by 87 percent by limiting employee travel and training to mission-critical projects. From fiscal years 2009 through 2013, IRS reduced the amount it spent per employee on training from $1,600 to $200, according to data provided by IRS. Reductions of this magnitude are not sustainable, according to IRS officials. For fiscal year 2013, IRS reported a savings of $56.2 million from reducing agency-wide, non-technical training and non-case-related travel. IRS attributes the savings to greater use of technology and deferral of some standard training. IRS noted that in fiscal year 2013 the training budget was cut so that funds would be available to minimize furlough days and thereby maintain service to taxpayers.

E-file savings. It costs IRS much less to process electronic returns than paper returns.
IRS reported receiving 3.5 million fewer paper returns in fiscal year 2013 and 4.2 million more electronically filed returns, compared to fiscal year 2012. According to the fiscal year 2015 congressional justification, IRS realized savings of $11 million and 209 FTEs in fiscal year 2013 as a result of these changes, which exceeded projected savings by $2.4 million and 32 FTEs.

Space reduction. IRS reported it completed 89 projects in fiscal year 2013 to reduce the amount of physical space it uses. As a result, IRS will rent 557,779 fewer square feet, saving $15.7 million in rent annually. Further, IRS approved 30 more such projects in fiscal year 2014, which IRS estimates will reduce its space needs by another 350,000 square feet, producing additional savings of about $11.3 million annually.

In addition, to address budget cuts, IRS has taken steps that it anticipates will produce savings in fiscal years 2014 and 2015:

Telephone and walk-in services. In fiscal year 2014, IRS reduced or eliminated certain telephone and walk-in services. These actions are consistent with our finding in December 2012 that IRS needed to dramatically revise its strategy for providing telephone and correspondence services and that incremental efficiency gains would not be enough to reverse service declines.
Specifically, IRS:

limited telephone assistance to only basic tax law questions during the filing season and reassigned assistors to work account-related inquiries;

launched the "Get Transcript" tool, which allows taxpayers to obtain a viewable and printable transcript on www.irs.gov, and redirected taxpayers to automated tools for additional guidance;

redirected refund-related inquiries to automated services and did not answer refund inquiries until 21 days after a tax return was filed electronically or 6 weeks after a return was filed on paper (unless the automated service directed the taxpayer to contact IRS);

limited access to the Practitioner Priority Services line to only those practitioners working tax account issues;

limited live assistance and redirected requests for domestic employer identification numbers to IRS's online tool; and

eliminated free return preparation and reduced other services at IRS's walk-in sites.

IT reductions. In part because of a lack of funding, IRS put a hold on aspects of two major IT projects:

Information Reporting and Document Matching (IRDM). This program is intended to improve business taxpayer compliance by matching business information returns (e.g., Form 1099-K) with individual tax returns to detect potential income underreporting. According to IRS IT officials, during the hold IRS will determine the best case management tool to meet IRDM's program requirements. IRS plans to leverage an off-the-shelf solution, which it believes will be more cost-effective than building one.

Return Review Program (RRP). When fully deployed, RRP is expected to make use of leading-edge technology to detect, resolve, and prevent fraud. IRS expects to complete a plan to move beyond the hold on RRP in the summer of 2014. Officials said the plan will help inform IRS's funding needs for RRP.
IRS does not outline a framework for how it should operate in an uncertain budget environment in either its fiscal year 2009 through 2013 strategic plan or the fiscal year 2014 goals and objectives included in the fiscal year 2015 congressional justification. While IRS has taken steps in the short term to address budget cuts (including sequestration), such as exception-only hiring and reducing or eliminating some telephone and walk-in services, it does not have a strategy for operating in an uncertain budget environment over the long term.

Further, we found that IRS absorbed the majority of cuts through actions that were not part of a long-term strategy. According to officials from IRS's Office of Corporate Budget, IRS has absorbed the majority of cuts through attrition and, as a result, the programs that experienced the most attrition were the programs that absorbed the most cuts. In fiscal years 2012 through 2013, IRS absorbed roughly $516 million through attrition, nearly $383 million of it from enforcement activities, according to data provided by IRS. Officials also noted that IRS has taken large budget cuts over the last several years. As IRS continues to operate in an uncertain budget environment, it continues to examine and prioritize what it can cut and what it can postpone.

According to the NTA, the budget cuts have caused IRS to operate in crisis mode—reacting to external changes and putting out fires. The NTA also stated that IRS has no overall, cohesive strategy for understanding what kind of services it should be providing and when to eliminate a service. In response to our questions about IRS's lack of a long-term strategy, officials noted that one key problem has been the extensive turnover in senior leadership across the agency. Nineteen of 46 members—over 40 percent—of IRS's Senior Executive Team came on board on or after October 1, 2013, including the new IRS Commissioner, who took office in December 2013.
However, IRS began an extensive review of its base budget in April 2014 to collect detailed spending and performance information for each of its distinct programs. Once IRS completes this review, management plans to determine the proper funding levels for each program and how to realign existing resources. While this is an important start, it may not be sufficient over the long term.

There are a number of indications that funding is unlikely to return soon to the higher budget levels IRS experienced in fiscal years 2010 and 2011. First, our work on the nation's long-term fiscal outlook shows that growth in spending for federal health care and retirement programs will place increasing pressure on discretionary spending. Second, in May 2014, OMB generally required a 2 percent reduction in agencies' fiscal year 2016 budget submissions. Third, under the Budget Control Act of 2011 (BCA), sequestration of discretionary appropriations could occur in any fiscal year through 2021. In addition, as shown in figure 4, for the past several fiscal years IRS has been appropriated less than it requested, with a difference of 13 percent in fiscal year 2014. Looking back over a 10-year horizon provides further evidence that the funding levels provided in fiscal years 2010 and 2011 were an exception and that the recent decline might be viewed more as a return to normal.

All of these factors indicate that it is increasingly important for agencies to have long-term plans for addressing uncertain budget environments. In addition, according to Executive Order No. 13576 and OMB guidance, agencies are to develop strategies for operating effectively and efficiently in an uncertain budget environment. We have previously reported on steps that agencies can take to do this. These include reexamining programs, related processes, and organizational structures to determine whether they are effectively or efficiently achieving their mission.
They also include streamlining or consolidating management or operational processes and functions to make them more cost-effective. We have also made recommendations to IRS on how it can more strategically manage operations in specific areas, such as developing a long-term strategy to improve its online services. A long-term strategy that includes a fundamental reexamination of IRS's operations, programs, and organizational structure could help it operate more effectively and efficiently in an environment of budget uncertainty. Although IRS has taken steps in the short term, they may not be sufficient to stem performance declines. While IRS was able to mitigate some of the effects of sequestration, it is unlikely that the steps taken are sustainable strategies for the long term. Moreover, the extent of turnover in senior leadership makes it even more important to have a long-term guiding strategy.

IRS calculates a projected return on investment (ROI)—the projected revenue divided by the projected cost—for most proposed new enforcement initiatives cited in the congressional justification. In the calculation, revenue represents the expected direct revenue effects of the initiatives but does not reflect any indirect revenue effects that may result if the added enforcement activities increase voluntary compliance among taxpayers. Cost includes all enforcement-related resources used by IRS but does not incorporate compliance costs imposed on taxpayers. For fiscal year 2015, IRS proposed 13 enforcement initiatives (see table 2). IRS estimated a projected ROI for 8 of the initiatives. In addition, 11 initiatives are predicated on funding from a program integrity cap adjustment. IRS calculates ROI over a 3-year period (based on projections of associated costs and expected revenue) because it expects that staff working on an initiative will reach their full performance after 3 years.
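The projected-ROI arithmetic described above (projected revenue divided by projected cost, summed over the 3-year period in which staff ramp up to full performance) can be sketched as follows. All yearly figures here are invented for illustration and do not come from any IRS budget document.

```python
def projected_roi(yearly_costs, yearly_revenues):
    """Projected ROI as described above: total projected direct revenue
    divided by total projected cost over the 3-year ramp-up period.
    Indirect effects on voluntary compliance and taxpayer compliance
    costs are excluded, as the report notes."""
    return sum(yearly_revenues) / sum(yearly_costs)

# Hypothetical enforcement initiative: revenue ramps up over 3 years
# as newly hired staff reach full performance (figures invented).
costs = [10.0, 12.0, 12.0]     # $ millions per year
revenues = [15.0, 40.0, 65.0]  # $ millions per year
print(round(projected_roi(costs, revenues), 1))  # 3.5
```

Summing over the full 3 years, rather than computing a first-year ratio, is what lets IRS credit an initiative with the higher yields expected once staff are fully productive.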
As of May 2014, IRS has not determined the extent to which the resources actually devoted to enforcement initiatives differ from the 3-year projections of resources described in prior budget justifications. Nor has IRS computed actual rates of return on the resources that were actually used for specific initiatives. As a result, neither IRS nor others know whether the program initiatives it proposed, once implemented, were as productive as expected. We have reported that calculating actual ROI would be a significant step forward in determining how initiatives are performing and whether calculations for projected ROI need to be adjusted.

IRS, following a recommendation from the Treasury Inspector General for Tax Administration, is conducting a feasibility study to identify the necessary steps and challenges involved in measuring actual revenue for enforcement initiatives. IRS expects to complete the study in December 2014. IRS officials noted that one reason they do not calculate actual ROI for enforcement initiatives is that it is difficult to determine which staff have actually worked on a particular initiative over a multi-year period. According to IRS officials, IRS creates exam plans each year based on division goals and priority areas, which can change annually. As such, staff may be initially hired to work on a specific initiative but may work on another initiative the following year. Different individuals may work on an initiative during the course of a year, rather than specific individuals being dedicated to the initiative 100 percent of the time. When staff initially hired to work on an initiative are transferred to another initiative, there can be significant differences between the scope of the proposed and implemented versions of the initiative.
In our prior work, we noted that if IRS falls behind on its hiring plans, staff may not reach their full potential as quickly as anticipated, which ultimately could delay projected revenue gains. In addition, officials from the IRS Corporate Budget Office cited difficulties in tracking ROI-related information on funded initiatives because of problems matching information between the IRS systems used to formulate and execute its budget. IRS has established funded program codes (previously known as internal order codes) as a mechanism to track specific initiatives—such as IRDM and merchant card/cost basis reporting—which could provide actual ROI, but IRS mainly uses these codes to track information that is of interest to congressional staff.

Comparing projected ROI to actual ROI is consistent with project management concepts, internal control standards, OMB guidance, and our prior work on performance management. In addition, budget decision makers in Congress have stated that a comparison of projected ROI to actual ROI for initiatives would be useful information to have. Until IRS calculates actual ROI for its implemented enforcement initiatives, it is not accountable for the ROI it projected when requesting funding. In June 2009, we recommended that IRS calculate actual ROI because it provides information about how programs and initiatives are performing and about the possible need to adjust ROI methodologies to more effectively project the results of future proposed initiatives. We have recognized that calculating actual ROI is challenging and that existing data may not be complete.

Based on our prior recommendation, IRS began reporting actual ROI in the fiscal year 2014 congressional justification for the following three major enforcement programs:

Examination. This program conducts examinations of tax returns of individual taxpayers, businesses, and other types of organizations to verify that the tax reported is correct.

Collection.
This program collects delinquent taxes and secures delinquent tax returns through the appropriate use of enforcement tools such as liens, levies, seizures of assets, installment agreements, offers in compromise, substitute for returns, and 26 U.S.C. § 6020(b), which allows the IRS to prepare returns if a taxpayer fails to file a return. Automated Underreporter. This program matches taxpayer information returns against data reported on individual tax returns and verifies the information to identify any discrepancies. According to IRS officials, IRS calculates the actual ROI for these programs because total enforcement revenue collected is allocated to these three categories. IRS reports the actual ROI in the congressional justification, based on our previous recommendation and congressional budget decision makers’ interest in this information. However, IRS does not use this actual ROI information for resource allocation decisions because the data do not reflect marginal ROI, nor do they include the indirect effects of IRS enforcement activities on voluntary compliance. Moreover, because of the time required to work enforcement cases to completion, the costs associated with assessing and collecting additional revenues may be spread over more than one fiscal year and are not necessarily aligned with revenue collections that may occur in subsequent years. These limitations are critical concerns, which is why we recommended in 2012 that IRS develop estimates of marginal direct revenue and marginal direct costs within each enforcement program. We further recommended that IRS explore the potential of estimating the marginal influence of enforcement activity on voluntary compliance. 
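The difference between average and marginal ROI that the 2012 recommendation targets can be illustrated with assumed figures (the declining-yield pattern and every number below are hypothetical, not IRS results):

```python
# Hypothetical expanded-coverage initiative worked in three tranches of
# cases. Later tranches yield less revenue because cases are typically
# selected in rough order of expected productivity (assumed pattern).
tranche_revenue = [50e6, 35e6, 20e6]  # revenue per tranche (assumed)
tranche_cost = [10e6, 10e6, 10e6]     # cost per tranche (assumed)

average_roi = sum(tranche_revenue) / sum(tranche_cost)  # across all cases
marginal_roi = tranche_revenue[-1] / tranche_cost[-1]   # last tranche only

print(f"average ROI:  {average_roi:.1f}:1")
print(f"marginal ROI: {marginal_roi:.1f}:1")
```

A projection built on the 3.5:1 average overstates the payoff of adding a further tranche, which under these assumptions would return closer to the 2.0:1 margin.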
However, given that these data gaps could take considerable time to fill, we also demonstrated how IRS planners could in the meantime review actual (average) ROI across different enforcement programs and across different groups of cases within these programs to better inform resource allocation decisions. As we noted, the planners could consider this information in combination with their professional judgment relating to other relevant but currently unquantifiable factors. In fact, while limited in scope, we demonstrated how a hypothetical shift in resources could potentially increase direct revenue by $1 billion annually, without significant negative effects on voluntary compliance. The development of data relating to marginal ROI would also enable IRS to improve its projections of ROI for enforcement initiatives, particularly those that expand coverage for specific taxpayer categories. IRS currently knows little about the extent to which marginal yields differ from average yields within specific return categories; consequently, it does not know the extent to which its ROI projections, which are based on average yields, may be overstated. By collecting the data needed to compute actual ROI for expanded coverage enforcement initiatives, IRS would be able to determine whether marginal returns are significantly lower than average returns and would thereby have better information for deciding whether the expanded coverage should be maintained, cut back, or expanded further in future years. Actual productivity information and better ROI projections can also help budget decision makers in determining funding levels for IRS.

When enacted, PPACA gave IRS responsibility for implementing a number of provisions. 
From fiscal year 2010 through 2012 and again in 2014, IRS received funds for PPACA implementation activities from the Department of Health and Human Services Health Insurance Reform Implementation Fund (HIRIF), which is used to fund the federal administrative expenses to implement PPACA. To implement PPACA provisions in fiscal years 2013 and 2014, IRS requested $360.5 million and $439.6 million, respectively, but it did not receive this funding. As we previously reported, of the $12.5 billion IRS requested for fiscal year 2015, $451.7 million is to implement PPACA. Table 4 shows the amounts IRS obligated for PPACA implementation from fiscal years 2010 to 2014. For fiscal year 2014, IRS identified seven initiatives related to PPACA implementation. Table 5 shows how much IRS plans to spend on these initiatives in fiscal year 2014 and the amount spent through the middle of fiscal year 2014. Of the total planned spending for fiscal year 2014—$400.6 million—IRS has obligated $87.1 million through March 31, 2014. Table 6 shows how much IRS requested to implement PPACA initiatives in fiscal year 2015. As of December 2012, IRS estimated the full life-cycle costs for PPACA to be $1.89 billion from fiscal years 2010 through 2026. According to IRS officials, an update to the December 2012 PPACA cost estimate was completed in September 2013. The updated cost estimate is still under review and will not be released until January 2015. As a result, we cannot assess the progress made on implementing our prior recommendations. In September 2013, we reported that IRS made progress in updating its initial October 2010 PPACA cost estimate, in accordance with best practices as identified in the GAO Cost Estimating and Assessment Guide. However, we found that IRS could take further steps to improve the estimate’s accuracy and credibility. At that time, IRS agreed with the majority of the actions we recommended. 
IRS officials reported that the September 2013 PPACA cost estimate does address some of the recommended actions. For example, IRS officials said they implemented our recommendation to document how cost drivers are selected for future sensitivity analyses. However, until the September 2013 cost estimate is available for our review, we cannot verify these changes. Since September 2013, IRS at least partially implemented three recommendations made in our prior reviews of its budget justification documents. Two of the implemented recommendations resulted in more transparent reporting and greater accessibility to IT investments data. Table 7 summarizes the recommendations implemented by IRS. Four budget-related recommendations to IRS remain open. As shown in table 8, IRS is planning to implement two of the open recommendations, but only partially agrees with the two remaining open recommendations. Appendix II identifies all of our products with open matters for Congress and recommendations to IRS regarding tax administration that could result in potential savings or increased revenues.

IRS is operating in an uncertain budget environment, and there are indications that significant funding increases are unlikely in the foreseeable future. While IRS has taken steps in recent years to reduce spending, those steps were generally reactionary. Now is the time for IRS to fundamentally reexamine its operations, programs, and organizational structure to determine how to most efficiently and effectively accomplish its mission. Creating such a roadmap will help ensure that IRS effectively provides taxpayers with services to make voluntary compliance easier and enforces the tax laws, so that taxpayers fulfill their tax responsibilities. A roadmap will also help provide continuity in light of any future turnover among senior executives. 
As IRS examines its operations, it will need multiple sources of data on which to base its assessment and to ultimately make resource allocation decisions. While not necessarily the only factor in making resource allocation decisions, actual ROI could provide insight on the productivity of a program and inform estimating techniques for new initiatives. In turn, better estimates of ROI and actual productivity information can help budget decision makers in determining funding levels for IRS. Comparing projected and actual ROIs can help hold managers and the IRS accountable for the funding received. We recommend the Commissioner of Internal Revenue take the following three actions: As a result of turnover in IRS’s Senior Executive Team and in order to enhance budget planning and improve decision making and accountability, we recommend IRS develop a long-term strategy to address operations amidst an uncertain budget environment. As part of the strategy, IRS should take steps to improve its efficiency, including (1) reexamining programs, related processes, and organizational structures to determine whether they are effectively and efficiently achieving the IRS mission, and (2) streamlining or consolidating management or operational processes and functions to make them more cost-effective. Because ROI provides insights on the productivity of a program and is one important factor in making resource allocation decisions, we recommend IRS calculate actual ROI for implemented initiatives, compare the actual ROI to projected ROI, and provide the comparison to budget decision makers for initiatives where IRS allocated resources; and use actual ROI calculations as part of resource allocation decisions. We provided a draft of this report to the Commissioner of Internal Revenue for comment. In written comments, reproduced in appendix III, IRS agreed with our recommendations. 
Regarding our recommendation to develop a long-term strategy to address operations amidst an uncertain budget environment, IRS noted that it is conducting a review of its budget base to ensure resources are aligned with IRS Strategic Goals, Objectives, and Priorities, and will adjust its fiscal year 2015 budget as a result of this review. Regarding our ROI recommendations, IRS agreed that ROI is one of several factors relevant to making resource allocation decisions. However, IRS noted that determining the impact of an initiative will always rely on estimates, as the results of an initiative are the difference between actual results and what would have occurred in the absence of the initiative, which cannot be measured. We agree that projected benefits are based on estimates, but it is possible to develop an estimate for what would have occurred in the absence of the initiative. Moreover, this report emphasizes comparisons between projected and actual ROI, not comparing actual results with what would have occurred in the absence of an initiative. Comparing projected and actual ROI is important to assist budget decision makers in determining funding levels for IRS and to hold managers and the IRS accountable for the funding received. IRS also provided technical comments on our draft report, which we incorporated as appropriate. We plan to send copies of this report to the Chairman and Ranking Members of other Senate and House committees and subcommittees that have appropriation, authorization, and oversight responsibilities for IRS. We are also sending copies to the Commissioner of Internal Revenue, the Secretary of the Treasury, and the Chairman of the IRS Oversight Board. The report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff has any questions about this report, please contact us at (202) 512-9110 or mctiguej@gao.gov. 
Contact points for our offices of Congressional Relations and Public Affairs are on the last page of this report. GAO staff members who made major contributions to this report are listed in appendix IV.

We were asked to review the President’s fiscal year 2015 budget request for the Internal Revenue Service (IRS). The objectives of this report were to (1) assess IRS’s strategy to address the budget cuts, including sequestration; (2) assess any new use of return on investment (ROI) analysis; (3) summarize requested funding and actual and planned spending for the Patient Protection and Affordable Care Act (PPACA) and assess the updated PPACA information technology cost estimate; and (4) describe IRS’s progress in implementing open GAO budget-related recommendations and list any GAO open matters for Congress and recommendations for executive action that could result in potential savings or increased revenue for IRS. To assess IRS’s strategy to address the budget cuts, including sequestration, we summarized IRS’s budget, staffing, performance, and workload trends for fiscal years 2009 through 2015. We selected fiscal year 2009 as a starting point because it was the year prior to IRS’s highest appropriation in recent years. To summarize these trends, we analyzed congressional justifications for IRS for fiscal years 2009 through 2015 and performance data for key IRS operations. When we describe full-time equivalent (FTE) trends, the actual number of FTEs represents the total number of hours worked (or to be worked) divided by the number of compensable hours applicable to each fiscal year, according to the definition provided by IRS. Enacted FTEs represent the number of FTEs provided for in the enacted budget. Requested FTEs represent the number of FTEs requested in the President’s Budget for IRS. We also analyzed FTE and workload trends for IRS-identified priority programs, which include new statutory mandates. 
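The FTE definition above reduces to a single division; a minimal sketch with assumed inputs (2,080 compensable hours is a common federal figure, not necessarily the one IRS applied in any given year):

```python
# FTEs as defined in the report: total hours worked (or to be worked)
# divided by the compensable hours applicable to the fiscal year.
def fte(total_hours_worked: float, compensable_hours: float) -> float:
    return total_hours_worked / compensable_hours

# Assumed example: 166.4 million hours worked agency-wide in a year.
print(fte(166_400_000, 2_080))  # 80000.0
```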
We reviewed reports by the IRS Oversight Board, Treasury Inspector General for Tax Administration (TIGTA), and the National Taxpayer Advocate (NTA), which provided some possible reasons for performance declines. In addition, we analyzed proposed program initiatives in the fiscal year 2015 congressional justification and determined which ones were predicated on a program integrity cap adjustment, which Congress passes to allow additional funding above discretionary spending limits. We also analyzed steps IRS has taken to address budget cuts, including sequestration. To analyze these steps, we reviewed the fiscal year 2015 congressional justification, including information on savings initiatives realized or proposed from fiscal years 2009 through 2015, and prior GAO work on sequestration. We also interviewed IRS officials in the IRS offices of Corporate Budget and Information Technology and the NTA. We compared IRS efforts to address uncertain budgets to Executive Order 13576, “Delivering an Efficient, Effective, and Accountable Government,” and Office of Management and Budget (OMB) guidance. In addition, we compared IRS efforts to leading practices in government performance and efficiency. To assess any new use of ROI analysis, we reviewed the fiscal year 2015 congressional justification to identify IRS’s projected and actual ROI; obtained data from IRS to determine the new initiatives by projected revenue, cost, and type; and summarized projected ROI by type for new enforcement initiatives and actual ROI for IRS’s three enforcement programs—Examination, Collection, and Automated Underreporter. As part of our analysis of proposed program initiatives, we also identified any new enforcement initiative predicated on a program integrity cap adjustment. 
We reviewed OMB and IRS guidance on new program initiatives and interviewed IRS Corporate Budget officials to determine how IRS calculates ROI for proposed initiatives and the extent to which it compares estimated ROI to actual ROI for funded new initiatives. We reviewed prior GAO and TIGTA reports to identify criteria for measuring actual revenue of new program initiatives and for use in resource allocation decisions. We also interviewed IRS Research, Analysis, and Statistics officials to discuss the IRS feasibility study on ROI calculations and challenges in calculating actual ROI for new enforcement initiatives, as well as current enforcement programs. To provide funding and spending information on PPACA, we summarized data by appropriation account since program inception (fiscal year 2010) and analyzed new information on actual and planned spending for fiscal year 2014, based on congressional budget justifications for IRS and on spending plans. We could not assess the most recent update of the PPACA cost estimate—September 2013—because it is still under review. However, to determine IRS’s progress in updating its cost estimate (consistent with our recommendation in 2013 to follow best practices identified in the GAO Cost Estimating and Assessment Guide) we interviewed and obtained some documentation from IRS officials. To describe IRS’s progress in implementing our prior budget-related recommendations, we obtained information from various IRS officials and reviewed relevant documentation, including the fiscal year 2015 congressional budget justification and the IRS Joint Audit Management Enterprise System reports, which track IRS actions taken to implement GAO recommendations. We then determined which recommendations were implemented. We also searched the GAO Engagement Reporting System to identify prior GAO open matters for Congress and recommendations for executive action. 
We then identified whether the recommendation or matter could result in potential savings, increased revenue, or indirect financial benefits for IRS. We interviewed IRS officials and determined that the data presented in this report were sufficiently reliable for our purposes. We conducted our work in Washington, D.C., where key IRS officials involved with the budget are located. We conducted this performance audit from November 2013 to June 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: GAO Products with Open Matters for Congress and Recommendations to the Internal Revenue Service (IRS)

In the 37 GAO products listed below, as of March 13, 2014, there are 10 open matters for Congress and 72 open recommendations to IRS. Of these matters and recommendations, 20 increase revenue, 15 increase savings, 10 increase both savings and revenue, and 37 have indirect financial benefits.

In addition to the contact named above, Libby Mixon, Assistant Director, Jehan Chase, Pawnee A. Davis, Mary Evans, Charles Fox, Suzanne Heimbach, Carol Henn, Felicia Lopez, Paul Middleton, Ed Nannenhorn, Sabine Paul, Mark Ryan, Erinn L. Sauer, Cynthia Saunders, Tamara Stenzel, and Jim Wozny made major contributions to this report.
The financing of the federal government depends largely upon the IRS's ability to collect taxes, including providing taxpayer services that make voluntary compliance easier and enforcing tax laws to ensure compliance with tax responsibilities. For fiscal year 2015, the President requested a $12.5 billion budget for IRS, a 10.5 percent increase over the fiscal year 2014 budget. Because of the size of IRS's budget and the importance of its service and compliance programs for all taxpayers, GAO was asked to review the fiscal year 2015 budget request for IRS. (In April 2014, GAO reported interim information on IRS's budget.) Among other things, this report assesses IRS's (1) strategy to address budget cuts and (2) use of ROI analysis. To conduct this work, GAO reviewed the fiscal year 2015 budget justification, IRS and OMB budget guidance, and IRS workload and performance data from fiscal years 2009 to 2015. GAO also interviewed IRS officials and the National Taxpayer Advocate.

Since fiscal year 2010, the Internal Revenue Service (IRS) budget has declined by about $900 million. As a result, funding is below fiscal year 2009 levels. Staffing has also declined by about 10,000 full-time equivalents since fiscal year 2010, and performance has been uneven. For example, between fiscal years 2009 and 2013, the percentage of callers seeking live assistance and receiving it fluctuated between 61 percent and 74 percent. IRS took some steps to address budget cuts, such as reducing travel and training. IRS's strategic plan does not address managing budget uncertainty, although there are several indicators that funding will be constrained for the foreseeable future. For example, in May 2014, the Office of Management and Budget (OMB) generally required a 2 percent reduction in agencies' fiscal year 2016 budget submissions. OMB guidance also requires agencies to develop strategies for operating in an uncertain budget environment. 
According to IRS, extensive senior leadership turnover has contributed to the lack of a long-term strategy. Without a strategy, IRS may not be able to operate effectively and efficiently in an uncertain budget environment. For fiscal year 2015, IRS calculated projected return on investment (ROI) for most of its enforcement initiatives. However, due to limitations—such as estimating the indirect effect coverage has on voluntary compliance—IRS does not calculate actual ROI or use it for resource decisions. These limitations are important, which is why GAO recommended in 2012 that IRS explore developing such estimates. Given that these limitations could take time to address, GAO demonstrated how IRS could use existing ROI data to review disparities across different enforcement programs to inform resource allocation decisions. Comparing projected and actual ROI is consistent with OMB guidance. While not the only factor in making resource decisions, actual ROI could provide useful insights on the productivity of a program. GAO recommends that IRS (1) develop a long-term strategy to manage uncertain budgets, and (2) calculate actual ROI for implemented initiatives, compare actual ROI to projected ROI, and use the data to inform resource decisions. IRS agreed with GAO's recommendations, noting that it initiated a review of its base budget to ensure resources are aligned with its strategic plan and ROI is one of several factors relevant to making resource allocation decisions.
The U.S. Department of Agriculture (USDA) is the federal government’s principal provider of loans used to assist the nation’s rural areas in developing their utility infrastructure. Through RUS, USDA finances the construction, improvement, and repair of electrical, telecommunications, and water and waste disposal systems. RUS provides credit assistance through direct loans and through repayment guarantees on loans made by other lenders. Established by the Federal Crop Insurance Reform and the Department of Agriculture Reorganization Act of 1994, RUS administers the electricity and telecommunications programs that were operated by the former Rural Electrification Administration and the water and waste disposal programs that were operated by the former Rural Development Administration. As of September 30, 1996, which was the most recent information available to us at the time of our review, RUS’ entire portfolio of loans—including direct and guaranteed electricity, telecommunications, and water and waste disposal loans—totaled $42.5 billion. Electricity loans made up over $32 billion, or 75 percent of this total. Most of the RUS electric loans and loan guarantees were made during the late 1970s and early 1980s. For example, from fiscal years 1979 through 1983, RUS approved loans and loan guarantees of about $29 billion, whereas during fiscal years 1992 through 1996, it approved a total of about $4 billion in electric loans and loan guarantees. RUS electricity loans were made primarily to rural electric cooperatives; more than 99 percent of the borrowers with electricity loans are nonprofit cooperatives. These cooperatives are either Generation and Transmission (G&T) cooperatives or distribution cooperatives. A G&T cooperative is a nonprofit rural electric system whose chief function is to produce and sell electric power on a wholesale basis to its owners, who consist of distribution cooperatives and other G&T cooperatives. 
A distribution cooperative sells the electricity it buys from a G&T cooperative to its owners, the retail customers. As of September 30, 1996, the bulk of the electric loan portfolio was made up of loans to the G&Ts. The principal outstanding on these G&T loans was approximately $22.5 billion, about 70 percent of the portfolio. Distribution borrowers made up the remaining 30 percent of the electric portfolio. At the time of our review, there were 55 G&T borrowers and 782 distribution borrowers. Our review focused on the G&T loans since they make up the majority, in terms of dollars, of the portfolio and generally pose the greatest risk of loss to the federal government. The federal government incurs financial losses when borrowers are unable to repay the balances owed on their loans and the government does not have sufficient legal recourse against the borrowers to recover the full loan amounts. In all instances, G&T loans are collateralized; however, RUS has never foreclosed on a loan. RUS generally has been unable to successfully pursue foreclosure once the borrower files for bankruptcy because the borrower’s assets are protected until the proceedings are settled. In addition, in recent cases where debt was written off, the government forgave the debt and therefore did not attempt to pursue further collection. Under Department of Justice (DOJ) authority, during fiscal year 1996 and through July 31, 1997, RUS wrote off about $1.5 billion of loans to rural electric cooperatives. The most significant write-offs relate to two G&T loans. In fiscal year 1996, one G&T made a lump sum payment of $237 million to RUS in exchange for RUS writing off and forgiving the remaining $982 million of its RUS loan balance. The G&T’s financial problems began with its involvement as a minority-share owner in a nuclear project that experienced lengthy delays in construction as well as severe cost escalation. 
When construction of the plant began in 1976, its total cost was projected to be $430 million. However, according to the Congressional Research Service, the actual cost at completion in 1987 was $3.9 billion as measured in nominal terms (1987 dollars). These cost increases were due in part to changes in Nuclear Regulatory Commission health and safety regulations after the Three Mile Island accident; the remainder was generally due to inflation over time and the capitalization of interest during the delays. The borrower defaulted in 1986, had its debt restructured in 1993, and finally had its debt partially forgiven in September 1996. This borrower is no longer in the RUS program. In the early part of fiscal year 1997, another G&T borrower made a lump sum payment of approximately $238.5 million in exchange for forgiveness of its remaining $502 million loan balance. The G&T and its six distribution cooperatives borrowed the $238.5 million from a private lender, the National Rural Utilities Cooperative Finance Corporation. The G&T had originally borrowed from RUS to build a two-unit coal-fired generating plant and to finance a coal mine that would supply fuel for the generating plant. The plant was built in anticipation of industrial development from the emerging shale oil industry. However, the growth in demand did not materialize, and there was no market for the power. Although the borrower had its debt restructured in 1989, it still experienced financial difficulties due to a depressed power market. RUS and DOJ decided that the best way to resolve the matter was to accept a partial lump sum payment on the debt rather than force the borrower into bankruptcy. The borrower and its member distribution cooperatives are no longer in the RUS program. 
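One way to compare the two settlements above is the share of the outstanding balance the government actually recovered. A small sketch using the dollar figures from the testimony (the "recovery rate" framing is ours, not RUS's):

```python
# Lump-sum payment as a share of the total balance owed at settlement
# (payment plus forgiven debt). Dollar figures are from the testimony.
def recovery_rate(lump_sum: float, forgiven: float) -> float:
    return lump_sum / (lump_sum + forgiven)

fy1996 = recovery_rate(lump_sum=237e6, forgiven=982e6)
fy1997 = recovery_rate(lump_sum=238.5e6, forgiven=502e6)
print(f"FY1996 settlement: {fy1996:.1%} recovered")  # 19.4%
print(f"FY1997 settlement: {fy1997:.1%} recovered")  # 32.2%
```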
It is probable that RUS will have additional loan write-offs and therefore that the federal government will incur further losses in the short term from loans to borrowers that have been identified as financially stressed by RUS management. At the time of our review, RUS reports indicated that about $10.5 billion of the $22.5 billion in G&T debt was owed by 13 financially stressed G&T borrowers. Of these, four borrowers with about $7 billion in outstanding debt were in bankruptcy. The remaining nine borrowers had investments in uneconomical generating plants and/or had formally requested financial assistance in the form of debt forgiveness from RUS. According to RUS officials, these plant investments became uneconomical because of cost overruns, continuing changes in regulations, and soaring interest rates. These investments resulted in high levels of debt and debt-servicing requirements, making power produced from these plants expensive. (See attachment I for a list and brief discussion of these borrowers.) Since cooperatives are nonprofit organizations, there is little or no profit built into their rate structure, which helps keep electric rates as low as possible. However, the lack of retained profit generally means the cooperatives have little or no cash reserves to draw upon. Thus, when cash flow is insufficient to service debt, cooperatives must raise electricity rates and/or cut other costs enough to service debt obligations. If they are unable to do so, they may default on their government loans. This was the scenario for the previously discussed write-offs in fiscal year 1996 and through July 31, 1997. Additional write-offs are expected to occur. For example, according to RUS officials, at the time of our review, the agency was considering writing off as much as $3 billion of the total $4.2 billion debt owed by Cajun Electric, a RUS borrower that has been in bankruptcy since December 1994. 
Cajun Electric filed for bankruptcy protection after the Louisiana Public Service Commission disapproved a requested rate increase and instead lowered rates to a level that reduced the amount of revenues available to Cajun to make annual debt service payments. Several factors contributed to Cajun’s heavy debt, including its investment in a nuclear facility that experienced construction cost overruns and its excess electricity generation capacity resulting from overestimation of the demand for electricity in Louisiana during the 1980s. In addition to the financially stressed loans, RUS had loans outstanding to G&T borrowers that were considered viable by RUS but may become stressed in the future due to high costs and competitive or regulatory pressures. We believe it is probable that the federal government will eventually incur losses on some of these G&T loans. We believe the future viability of these G&T borrowers will be determined based on their ability to be competitive in a deregulated market. In order to assess the ability of RUS cooperatives to withstand competitive pressures, we focused on production costs for 33 of the 55 G&T borrowers with loans outstanding of about $11.7 billion as of September 30, 1996. We excluded 9 G&Ts that only transmit electricity and the 13 financially stressed borrowers discussed above. Our analysis showed that for 27 of the 33 G&T borrowers, production costs were higher than those of investor-owned utilities in their respective regional markets, and that for 17 of the 33, production costs were higher than those of publicly owned generating utilities. The relatively high average production costs indicate that the majority of G&Ts may have difficulty competing in a deregulated market. RUS officials told us that several borrowers have already asked RUS to renegotiate or write off their debt because they do not expect to be competitive due to high costs. RUS officials stated that they will not write off debt solely to make borrowers more competitive. 
As with the financially stressed borrowers, some of the G&T borrowers considered viable by RUS at the time of our work had high debt costs because of investments in uneconomical plants. In addition, according to RUS officials, there are two unique factors that cause cost disparity between the G&Ts and their competition. One factor is the sparser customer density per mile for cooperatives and the corresponding high cost of providing service to the rural areas. A second factor has been the inability to refinance higher cost Federal Financing Bank (FFB) debt when lower interest rates have prevailed. However, RUS officials said that recent legislative changes that enable cooperatives to refinance FFB debt with a penalty may help align G&T interest rates with those of the investor-owned utilities. In the short term, G&Ts will likely be shielded from competition because of the all-requirements wholesale power contracts between the G&T and their member distribution cooperatives. With rare exceptions, these long-term contracts obligate the distribution cooperatives to purchase all of their respective power needs from the G&T. In fact, RUS requires the terms of the contracts to be at least as long as the G&T loan repayment period. However, wholesale power contracts have been challenged recently in the courts by several distribution cooperatives because of the obligation to purchase expensive G&T power. According to RUS officials, one bankrupt G&T’s member cooperatives challenged their wholesale power contracts in court in order to obtain less expensive power. RUS officials believe that the long-term contracts will come under increased scrutiny and potential renegotiation or court challenges as other sources of less expensive power become available. Wholesale rates under these contracts are set by a G&T’s board of directors with approval from RUS. 
In states whose commissions regulate cooperatives, the cooperatives must file requests with the commissions for rate increases or decreases. Several of the currently bankrupt borrowers were denied requests for rate increases from state commissions. However, RUS officials indicated they do not expect G&Ts to pursue rate increases as a means to recover their costs because of the recognition of declining rates in a competitive environment. RUS officials also acknowledge that borrowers with high costs are likely to request debt forgiveness as a means to reduce costs in order to be competitive in the future. As discussed above, denials of requested rate increases by state commissions culminated in several G&Ts filing for bankruptcy. Eighteen of the RUS G&T borrowers operate in states where regulatory commissions must approve rate increases. These commissions may deny a request for a rate increase if they believe such an increase will have a negative impact on the region. According to RUS officials, some commissions have denied rate increases to cover the costs of projects that the commissions had previously approved for construction. Therefore, G&Ts with high costs may be likely candidates to default on their RUS loans, even without direct competitive pressures. In summary, during fiscal year 1996 and through July 1997, RUS experienced loan write-offs of $1.5 billion. Additional write-offs related to the $10.5 billion in loans identified by RUS as financially stressed as of the time of our review are likely in the near term. And finally, RUS has loans outstanding to G&T borrowers that are currently considered viable by RUS that may become stressed in the future due to high production costs and competitive or regulatory pressures. We believe it is probable that the federal government will eventually incur losses on some of these G&T loans. 
The future viability of these G&T loans will be determined based in part on the RUS cooperatives’ ability to be competitive in a deregulated market. Mr. Chairman, that concludes my statement. I would be happy to answer any questions you or other Members of the Subcommittee may have. The following is a list and brief discussion of each of the 13 financially stressed G&T borrowers. This information is as of September 30, 1996; therefore, changes may have occurred subsequent to our review. Borrower A: Invested in construction of a nuclear plant that experienced cost overruns and was never completed. The state commission denied rate increases to cover the cost of the cooperative’s investment in the plant. The borrower defaulted on its loan in 1984 and declared bankruptcy in 1985. The bankruptcy proceedings have been in court for 12 years and are still not completely resolved. Borrower B: Made an investment in a nuclear plant that proved to be uneconomical. While this borrower does not appear to be currently experiencing financial difficulties, RUS considers it financially stressed because it has formally requested financial assistance due to impending competitive pressures. Borrower C: Made an investment in a nuclear plant that proved to be uneconomical. While this borrower does not appear to be currently experiencing financial difficulties, RUS considers it financially stressed because it has formally requested financial assistance due to impending competitive pressures. Borrower D: Uses primarily coal-fired generation. The borrower overbuilt due to anticipated growth in electricity demand that did not occur. During construction of a new plant, economic conditions in the area changed and demand for electricity dropped, which resulted in less revenue than predicted from the plant. The state commission repeatedly denied the cooperative’s requests for rate increases to cover the cost of its plants. 
Borrower E: Has a small percentage share in a nuclear plant that proved to be uneconomical. The borrower has substantially higher electricity rates than the investor-owned utilities in its region. The state commission has denied the cooperative’s requests for rate increases to cover its losses. Although the borrower has had some of its debt refinanced, it is still experiencing financial difficulties. Borrower F: A G&T with primarily coal-fired generating plants that overbuilt due to anticipated industrial growth related to two large aluminum smelting companies. When aluminum prices dropped in the early 1980s, the companies threatened to move their operations if the cooperative did not lower electricity rates. The state commission denied rate increases for fear of losing these industries. RUS restructured the borrower’s debt in 1987 and 1990. The cooperative filed for bankruptcy in September 1996 because its other creditors were unwilling to negotiate. Borrower G: Built a coal-fired plant and invested in a nuclear plant in the mid-1970s that was completed late and experienced construction cost overruns. Several factors contributed to the cooperative’s heavy debt, including excess electricity generation capacity resulting from overestimation of the demand for electricity during the 1980s. The new capacity was intended to serve a growth in demand that did not materialize. The state commission disapproved a rate increase and instead lowered rates to a level that precluded full debt service coverage. The commission also refused to support a restructuring agreement that included a significant RUS loan write-off. The rate increase had been requested by the cooperative because of its high costs. The borrower filed for bankruptcy in December 1994. Borrower H: Invested in construction of a nuclear plant that proved to be uneconomical. The project was completed 10 years late and over budget. 
In addition, there was a dramatic drop in the demand for electricity in the cooperative’s service area, and the state commission would not allow rate increases to recover capital investment. The borrower had its debt restructured in 1987; however, it is requesting additional financial assistance due to anticipated competitive pressure. A final settlement between RUS and the borrower was reached in June 1997. The borrower was expected to receive a write-off of $165 million. The final payment and related debt write-off were scheduled to occur December 30, 1997. Borrower I: Invested in a clean-burning coal plant that experienced severe cost overruns. The borrower has substantially higher electricity rates than the investor-owned utilities in its region. The state commission has denied the cooperative’s request for rate increases. The borrower had some of its debt refinanced, but it is still experiencing financial difficulty. Borrower J: Invested in a nuclear plant that proved to be uneconomical. The plant was completed late, which resulted in cost overruns. As a result, the cooperative’s wholesale power rates are very high. The borrower has requested debt restructuring due to its high cost of production and anticipated competitive pressure. Borrower K: Invested in a nuclear plant that proved to be uneconomical. The plant was completed late, which resulted in severe cost overruns. The cooperative’s wholesale power rates are very high, which has resulted in extreme unrest in the member distribution cooperatives. The borrower is surrounded by investor-owned utilities with lower wholesale rates. In addition, the borrower’s system is very difficult and expensive to maintain and experiences frequent power outages. The borrower has requested financial assistance because of anticipated competitive pressure. Borrower L: Invested in a nuclear plant that proved to be uneconomical. The plant was completed late, which resulted in severe cost overruns. 
The cooperative has only five member distribution cooperatives, which makes it difficult to cover its high production costs. This borrower chose not to declare bankruptcy and is seeking financial assistance. This borrower has refinanced its debt to lower its interest rate, but is still experiencing financial difficulty and has requested additional financial assistance. Borrower M: Invested in a nuclear plant that proved to be uneconomical. In addition, the cooperative had a stagnant customer base in the 1980s. RUS tried to negotiate a restructuring agreement, but the state commission denied two separate plans. In April 1996, the borrower filed for bankruptcy.
Pursuant to a congressional request, GAO discussed the Rural Utilities Service's (RUS) electric loan portfolio and the potential for future losses to the federal government from these loans, focusing on: (1) substantial write-offs of loans to rural electric cooperatives; (2) likely additional losses to the federal government from loans to financially stressed borrowers; and (3) the potential for future losses from viable loans that may become stressed in the future due to high production costs and competitive or regulatory pressures. GAO noted that: (1) under Department of Justice authority, during fiscal year (FY) 1996 and through July 31, 1997, RUS wrote off about $1.5 billion of loans to rural electric cooperatives; (2) the most significant write-offs relate to two generation and transmission (G&T) loans; (3) it is probable that RUS will have additional loan write-offs and therefore that the federal government will incur losses in the short term from loans to borrowers that have been identified as financially stressed by RUS management; (4) at the time of GAO's review, RUS reports indicated that about $10.5 billion of the $22.5 billion in G&T debt was owed by 13 financially stressed G&T borrowers; (5) in addition to the financially stressed loans, RUS had loans outstanding to G&T borrowers that were considered viable by RUS but may become stressed in the future due to high costs and competitive or regulatory pressures; (6) GAO believes it is probable that the federal government will eventually incur losses on some of these G&T loans; (7) GAO also believes the future viability of these G&T borrowers will be determined based on their ability to be competitive in a deregulated market; (8) relatively high average production costs indicate that the majority of G&Ts may have difficulty competing in a deregulated market; (9) as with the financially stressed borrowers, some of the G&T borrowers considered viable by RUS at the time of GAO's work had high debt costs 
because of investments in uneconomical plants; (10) in the short term, G&Ts will likely be shielded from competition because of the all-requirements wholesale power contracts between G&Ts and their member distribution cooperatives; (11) wholesale rates under these contracts are set by a G&T's board of directors with approval from RUS; (12) in states whose commissions regulate cooperatives, the cooperative must file a request with the commission for a rate increase or decrease; (13) these commissions may deny a request for a rate increase if they believe such an increase will have a negative impact on the region; (14) denials of requested rate increases by state commissions culminated in several G&Ts filing for bankruptcy; (15) according to RUS officials, some commissions have denied a rate increase to cover the cost of projects that the commission had previously approved for construction; and (16) therefore, G&Ts with high costs may be likely candidates to default on their RUS loans, even without direct competitive pressures.
In implementing decimal pricing, regulators hoped to improve the quality of U.S. stock and option markets. The quality of a market can be assessed using various characteristics, but the trading costs that investors incur when they execute orders are a key aspect of market quality. Trading costs are generally measured differently for retail and institutional investors. In addition to the commissions paid to broker-dealers that execute trades, the other primary trading cost for retail investors, who typically trade no more than a few hundred shares at a time, is measured by the spread, which is the difference between the best quoted “bid” and “ask” prices that prevail at the time the order is executed. The bid price is the best price at which market participants are willing to buy shares, and the ask price is the best price at which market participants are willing to sell shares. The spread represents the cost of trading for small orders because if an investor buys shares at the ask price and then immediately sells them at the bid price, the resulting loss or cost is represented by the size of the spread. Because institutional orders are generally much larger than retail orders and completing one order can require multiple trades executed at varying prices, spreads are not generally used to measure institutional investors’ trading costs. Instead, the components of trading costs for large institutional investors, who often seek to buy or sell large blocks of shares such as 50,000 or 1 million shares, include the order’s market impact, broker commissions paid, and exchange fees incurred, among other things. An order’s market impact is the extent to which the security changes in price after the investor begins trading. 
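The round-trip spread cost described above can be illustrated with a small calculation; the bid, ask, and order size below are hypothetical figures chosen for illustration only:

```python
# Hypothetical best quotes for a stock (illustrative figures only).
bid = 20.00   # best price at which market participants will buy
ask = 20.05   # best price at which market participants will sell

quoted_spread = ask - bid   # 5 cents per share

# Buying at the ask and immediately selling at the bid loses the
# spread on every share traded -- the round-trip cost of a small order.
shares = 100
round_trip_cost = shares * quoted_spread

print(f"quoted spread: {quoted_spread * 100:.1f} cents per share")
print(f"round-trip cost on {shares} shares: ${round_trip_cost:.2f}")
```

This is why, for retail-size orders that execute in a single trade, the spread alone captures the implicit cost of trading.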
For example, if the price of a stock begins to rise in reaction to the increased demand after an investor begins executing trades to complete a large order, the average price at which the investor’s total order is executed will be higher than the stock’s price would have been without the order. In addition to trading costs, decimal pricing may have affected several other aspects of market quality, including liquidity, transparency, and price volatility. Liquidity. Liquid markets have many buyers and sellers willing to trade and have sufficient shares to execute trades quickly without markedly affecting share prices. Generally, the more liquid the overall market or markets for particular stocks are, the lower the market impact of any individual orders. Small orders for very liquid stocks will have minimal market impact and lower trading costs. However, larger orders, particularly for less liquid stocks, can affect prices more and thus have greater market impact and higher trading costs. Transparency. When markets are transparent, the number and prices of available shares are readily disclosed to all market participants, and prices and volumes of executed trades are promptly disseminated. A key factor that can affect market participants’ perceptions of market transparency is the volume of shares publicly displayed as available at the best quoted bid and ask prices, as well as at points around these prices—known as market depth. Markets with small numbers of shares displayed in comparison to the size of investors’ typical orders seem less transparent to investors because they have less information that can help them specify the price and size of their own orders so as to execute trades with minimal trading costs. Price volatility. Price volatility is a measure of the frequency of price changes as well as a measure of the amount by which prices change over a period of time. 
Highly volatile markets typically disadvantage investors, who must execute trades with less certainty about the prices they will receive. Conversely, market intermediaries, such as broker-dealers, can benefit from highly volatile markets because they may be able to earn more revenue from trading more frequently as prices rise and fall. The trading that occurs on U.S. securities markets is facilitated by broker-dealers that act as market intermediaries. These intermediaries perform different functions depending on the type of trading that occurs in each market. On markets that use centrally located trading floors to conduct trading, such as the New York Stock Exchange (NYSE), trading occurs primarily through certain broker-dealer firms that have been designated as specialists for particular stocks. These specialists are obligated to maintain fair and orderly markets by buying shares from or selling shares to the other broker-dealers who present orders from customers on the trading floor or through the electronic order routing systems used by the exchange. Interacting with the specialists on the trading floor are employees from large broker-dealer firms that receive orders routed from these firms’ offices around the country. In addition, specialists receive orders from staff from small, independent broker-dealer firms who work only on the floor. In contrast, trading of the stocks listed on the NASDAQ Stock Market (NASDAQ), which does not have a central physical trading location, is conducted through electronic systems operated by broker-dealers acting as market makers or by alternative trading venues. For particular stocks, market makers enter quotes into NASDAQ’s electronic system indicating the prices at which they are simultaneously willing to buy shares from or sell shares to other broker-dealers. The NASDAQ system displays these quotes to all other broker-dealers that are registered to trade on that market. 
Much of the trading in NASDAQ stocks now also takes place in alternative trading venues, including electronic communication networks (ECN), which are registered as broker-dealers and electronically match the orders they receive from their customers, much like an exchange. At the same time that decimal pricing was being implemented, other changes were also occurring in the marketplace. For example, in 1997, SEC enacted new rules regarding how market makers and specialists must handle the orders they received from their customers, including requiring firms to display these orders to the market when their prices are better than those currently offered by that broker. These rules facilitated the growth of additional trading venues such as the ECNs, which compete with the established markets, such as NYSE and NASDAQ, for trading volumes. The increased use of computerized trading has also provided alternative mechanisms for trading and reduced the role of specialists, market makers, and other intermediaries in the trading process. In addition, after rising significantly during the late 1990s, U.S. stock prices experienced several years of declines, affecting trading costs and market intermediary profits. Facing lower investment returns, institutional investors and professional traders have focused more on reducing trading costs to improve those returns. Regulators also began placing greater emphasis on institutional investors’ duty to obtain the best execution for their trades, further increasing the pressure on these firms to better manage their trading costs. Trading costs for both retail and institutional investors fell after the implementation of decimal pricing and the corresponding reduction in tick size. While decimalization appears to have helped to lower these costs, other factors—such as the multiyear downturn in stock prices—also likely contributed to these cost reductions. 
Although trading costs and other market quality measures improved after decimal pricing’s implementation, another measure—the transparency of U.S. stock markets—declined following the reduction in tick size in 2001 because fewer shares were displayed as available for trading. However, most market participants we interviewed reported they have been able to continue to execute large orders by using electronic trading tools to submit a larger volume of smaller orders and making greater use of alternative trading venues. In ordering U.S. markets to convert to decimal pricing, SEC had several goals. These included making securities pricing easier for investors to understand and aligning U.S. markets’ pricing conventions with those of foreign securities markets. Decimalization appears to have succeeded in meeting these goals. In addition, SEC hoped that decimal pricing would result in lower investor trading costs, as lower tick sizes would spur competition that would lead to reduced spreads. Narrower spreads benefit retail investors because retail size orders generally execute in one trade at one price. Prior to being ordered to implement decimal pricing, U.S. stock markets had voluntarily reduced their minimum ticks from 1/8 to 1/16 of a dollar, and studies of these actions found that spreads declined as a result. Following decimalization and the implementation of the 1-cent tick in 2001, retail investor trading costs declined further as spreads narrowed even more substantially. To analyze the effects of decimal pricing, we selected a sample of 300 pairs of NYSE-listed and NASDAQ stocks with similar characteristics (such as share price and trading activity). We examined several weeks before and after the implementation of decimal pricing and found that spreads declined after decimal prices were implemented and remained low through 2004. 
Our study considered 12 weeklong sample periods from February 2000 to January 2001 (our predecimalization period) and 12 weeklong sample periods from April 2001 through November 2004 (our postdecimalization period). As shown in figure 1, quoted spreads continued a steady decline on both NYSE and NASDAQ following the implementation of decimal pricing, falling to levels well below those that existed before the conversion to decimal pricing. Our analysis of the TAQ data also found that quoted spreads declined for stocks with varying levels of trading volume. As shown in table 1, quoted spreads declined significantly after decimal pricing began for the most actively traded stocks, those with medium levels of trading volume, and also for those with the lowest amount of daily trading activity, with the average quoted spread falling 73 percent for NYSE stocks and 68 percent for NASDAQ stocks. While the quoted spread measure is useful for illustrative purposes, a better measure of the cost associated with the bid-ask spread is the effective spread, which is twice the difference between the price at which an investor’s trade is executed and the midpoint between the quoted bid and ask prices that prevailed at the time the order was executed. Thus, the effective spread measures the actual costs of trades occurring rather than just the difference between the best quoted prices at the time of the trade. As shown in table 2, effective spreads declined by 62 percent for our NYSE sample stocks and 59 percent for our NASDAQ sample stocks between the pre- and postdecimalization periods. In addition, several academic and industry studies found similar results. For example, one academic study examined differences in trade execution cost and market quality measures in 300 NYSE stocks and 300 NASDAQ stocks (matched on market capitalization) for several weeks before decimal pricing was fully implemented on NYSE stocks and after both markets converted to decimal pricing. 
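The effective spread measure defined above can be sketched in a few lines of code; the quotes and execution price below are hypothetical and chosen only to show why the effective spread can be narrower than the quoted spread when a trade executes inside the quotes:

```python
def effective_spread(execution_price, bid, ask):
    """Twice the distance between a trade's execution price and the
    midpoint of the quoted bid and ask prevailing at execution."""
    midpoint = (bid + ask) / 2
    return 2 * abs(execution_price - midpoint)

# Hypothetical quotes: the quoted spread is 5 cents, but the buy
# executes inside the quote, 1 cent better than the ask.
bid, ask = 20.00, 20.05
buy_price = 20.04

# Effective spread of 3 cents, narrower than the 5-cent quoted spread.
print(round(effective_spread(buy_price, bid, ask), 4))
```

A trade executed exactly at the bid or the ask would have an effective spread equal to the quoted spread, so the measure reflects actual execution quality rather than displayed quotes alone.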
As shown in table 3, the study found that average effective spreads declined by 41 percent for the NYSE stocks and by 54 percent for the NASDAQ stocks from the predecimalization sample period (January 8–26, 2001) to the postdecimalization sample period (April 9–August 31, 2001). As the table also shows, the study found that spreads declined the most for NYSE stocks with the largest market capitalizations and for NASDAQ stocks with the smallest market capitalizations. Similar declines in spreads were also reported in studies that SEC required the various markets to conduct as part of its order directing them to implement decimal pricing. For example, in its impact study, NYSE reported that share-weighted average effective spreads declined 43 percent for all 2,466 NYSE-listed securities trading in the pre- and postdecimalization sample periods the exchange selected. NASDAQ’s study found that effective spreads declined between its sample periods by an average of 46 percent for the 4,766 NASDAQ securities that converted to penny increments on April 9, 2001. In addition, an official at a major U.S. stock market told us that all the research studies that he reviewed on the impact of decimal pricing concluded that spreads narrowed overall in response to the reduction in tick size. Many market participants we interviewed also indicated that retail investors benefited from the narrower spreads that followed decimalization and the adoption of 1-cent ticks. For example, a representative of a firm that analyzes trading activities of large investors told us that investors trading 100 shares are better off following decimalization because small trades can be executed at the now lower best quoted prices. Representatives from two broker-dealers stated that the narrower spreads that prevailed following decimalization meant that more money stayed with the buyers and sellers of stock rather than going to market intermediaries such as broker-dealers and market makers. 
Furthermore, the chief financial officer of a small broker-dealer told us that retail investors had benefited from the adoption of the 1-cent tick because their orders can generally be executed with one transaction at a single price unlike those of institutional investors, which are typically larger than the number of shares displayed as available at the best prices. Analysis of the multiple sources of data that we collected generally indicated that institutional investors’ trading costs had declined since decimal prices were implemented. We obtained data from three leading firms that collect and analyze information about institutional investors’ trading costs. These trade analytics firms (Abel/Noser, Elkins/McSherry, and Plexus Group) obtain trade data directly from institutional investors and brokerage firms and then calculate trading costs, including market impact costs, typically for the purpose of helping investors and traders limit costs of trading. These firms also aggregate client data in order to approximate total average trading costs for all their institutional investor clients. Generally, the client base represented in these firms’ aggregate trade cost data is broad enough to be representative of institutional investors as a whole. For example, officials at one firm told us that its data captures 80 to 90 percent of all institutional investors and covers trading for every stock listed on the major U.S. stock markets. An official of a major U.S. stock market told us that these firms are well regarded and that their information is particularly informative because these firms measure costs from the point the customer makes the decision to trade by using the price at which stocks are trading at that time, which is data that exchanges and markets generally do not have. Although these firms use different methodologies, their data uniformly showed that costs had declined since decimal pricing was implemented. 
Our analysis of data from the Plexus Group showed that costs declined on both NYSE and NASDAQ in the 2 years after these markets converted to decimal pricing. Plexus Group analyzes various components of institutional investor trading costs, including the market impact of investors’ trading. Total trading costs declined by about 53 percent for NYSE stocks, falling from about 33 cents per share in early 2001 to about 15.5 cents (fig. 2). For NASDAQ stocks, the decline was about 44 percent, from about 25.7 cents to about 14.4 cents. The decline in trading costs, shown in figure 2, began before both markets implemented decimal pricing, indicating that causes other than decimal pricing were also affecting institutional investors’ trading during this period. An official from a trade analytics firm told us that the spike in costs that preceded the decimalization of NASDAQ stocks correlated to the pricing bubble that technology sector stocks experienced in the late 1990s and early 2000s. An official from another trade analytics firm explained that trading costs increased during this time because when some stocks’ prices would begin to rise, other investors—called momentum investors—would also begin making purchases and drive prices for these stocks up even faster. As a result, other investors faced greater than usual market impact costs when also trading these stocks. In general, trading during periods when stock prices are either rapidly rising or falling can make trading very costly. According to our analysis of the Plexus Group data, market impact and delays in submitting orders accounted for the majority of the decline in trading costs for NYSE stocks and NASDAQ stocks. Together, the reduction in these two cost components accounted for nearly 17 cents per share (or about 96 percent) out of a total decline of about 17.6 cents per share on NYSE. 
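The decomposition above can be checked arithmetically. The calculation below uses the per-share component declines reported from the Plexus Group data (a delay-cost decline of about 11.2 cents per share and a market impact decline of about 5.8 cents) against the total decline of about 17.6 cents:

```python
# Per-share declines in NYSE institutional trading costs, in dollars,
# as reported from the Plexus Group data.
delay_decline = 0.112    # reduction in delay costs
impact_decline = 0.058   # reduction in market impact costs
total_decline = 0.176    # total reduction in trading costs

combined = delay_decline + impact_decline   # nearly 17 cents per share

print(f"combined decline: {combined * 100:.1f} cents per share")
print(f"share of total:   {combined / total_decline:.1%}")
```

The combined components work out to roughly 96 to 97 percent of the total decline, consistent with the figure cited above.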
Delay costs declined about 11.2 cents per share in the 2 years following the implementation of decimal pricing and 1-cent ticks on NYSE, and market impact costs declined by about 5.8 cents (fig. 3). An SEC economist noted that declines in delay costs may reflect increased efficiency on the part of institutional investors in trading rather than changes in the markets themselves. Figure 3 also shows that market impact and delay costs accounted for all declines to total NASDAQ trading costs. For example, market impact and delay costs declined about 14.1 cents per share between the second quarter of 2001 and the second quarter of 2003. However, at the same time that these cost components were improving, commission charges for NASDAQ stocks were rising. As shown in figure 3, commissions that market intermediaries charged for trading NASDAQ stocks increased about 2.8 cents per share from the second quarter of 2001 to the second quarter of 2003. Industry representatives told us these increases were the result of the broker-dealers that made markets in NASDAQ stocks transitioning from trading as a principal, in which a portion of the trade’s final price included some compensation for the market maker, to trading as an agent for the customer and charging an explicit commission. Analysis of data from the other two trade analytics firms from whom we obtained data, Elkins/McSherry and Abel/Noser, also indicated that institutional investor trading costs declined following the decimalization of U.S. stock markets in 2001. Because these two firms’ methodologies do not include measures of delay, which the Plexus Group data shows can be significant, analysis of data from these two firms results in trading cost declines of a lower magnitude than those indicated by the Plexus Group data analysis. 
Nevertheless, the data we analyzed from Elkins/McSherry showed total costs for NYSE stocks declined about 40 percent between the first quarter of 2001 and year-end 2004 from about 11.5 cents per share to about 6.9 cents per share. Analysis of Abel/Noser data indicated that total trading costs for NYSE stocks declined about 30 percent, from 6.9 cents per share to 4.8 cents per share between year-end 2000 and 2004 (fig. 4). Our analysis of these firms’ data also indicated that total trading costs for NASDAQ stocks declined, and apparently even more significantly than those for NYSE stocks. For example, our analysis of the Elkins/McSherry data showed that total trading costs for NASDAQ stocks dropped by nearly 50 percent, from about 14.6 cents per share to about 7.4 cents per share, between the second quarter of 2001 when that market decimalized and the end of 2004. Analysis of the Abel/Noser data indicated that total trading costs declined about 46 percent for NASDAQ stocks between the end of 2000 and 2004, falling from 8.7 cents per share to 4.7 cents per share (fig. 5). As with our analysis of the Plexus Group data, the Elkins/McSherry and Abel/Noser data indicated that reductions in market impact costs accounted for the vast majority of the overall reductions for NYSE stocks (fig. 6). Analysis of the Elkins/McSherry data indicated that these costs declined by 3.7 cents per share, accounting for about 80 percent of the total fall in trading costs during this period. The 1.1 cent per share reduction in market impact costs identified in the Abel/Noser data represented over half of the total trading cost reductions of 2.1 cents per share for NYSE stocks. Reductions in market impact costs explained the entire decline in total trading costs captured by the Elkins/McSherry and Abel/Noser data for NASDAQ stocks, and the total declines would have been even larger had commissions for these stocks not increased after 2001. 
Market impact costs declined about 10.6 cents per share (about 78 percent) according to our analysis of the Elkins/McSherry data, and 6.7 cents per share (about 87 percent) according to our analysis of the Abel/Noser data (fig. 7). However, during this period, commissions charged on the NASDAQ stock trades included in these firms' data increased by more than 3 cents per share, representing a more than threefold increase in commissions as measured by Elkins/McSherry and a more than sixfold rise according to Abel/Noser. Data from a fourth firm, ITG, which recently began measuring institutional trading costs, also indicate that such costs have declined. This firm began collecting data from its institutional clients in January 2003. Like the other trade analytics firms' data, its data are broad based, representing about 100 large institutional investors and about $2 trillion worth of U.S. stock trades. ITG's measure of institutional investor trading cost consists solely of market impact costs and does not include explicit costs, such as commissions and fees, in its calculations. Although changes in ITG's client base for its trade cost analysis service prevented direct period-to-period comparisons, an ITG official told us that its institutional investor clients' trading costs have been trending lower since 2003. In attempting to identify all relevant research on the impact of decimal pricing on institutional investors, we found 15 academic studies that discussed the impact of decimalization but only 3 that specifically examined institutional investors' trading costs. As of May 2005, none of these three studies had been published in an academic journal. Two of the studies used direct measures of trading costs, and the third used an indirect measure. The two that relied on direct measures found that these costs had declined since the implementation of decimal pricing and 1-cent ticks.
The first of these studies analyzed more than 80,000 orders in over 1,600 NYSE-listed stocks that were traded by 32 institutional investors. To measure the change in trading costs after decimal pricing was implemented, this study used data from one of the leading trade analytics firms and computed trading costs over the period from November 28, 2000, to January 26, 2001 (before the change to decimal pricing), and the period from January 30 to March 31, 2001 (after decimal pricing). The study found that institutional trading costs declined by about 5 cents per share (or about 11 percent), falling from 44 cents per share to 39 cents per share after NYSE switched to 1-cent ticks. The other study that used direct measures of institutional trading costs examined the trading of over 1,400 NASDAQ stocks. The author of this study obtained data on over 120,000 orders for NASDAQ stocks submitted by institutional investors, which allowed her to calculate the costs of trading orders of more than 9,999 shares before and after NASDAQ's adoption of 1-cent ticks. Given the potentially large volume of order data, the author studied three sample periods, each consisting of 5 trading days: February 1 through 8, 2001 (before decimalization), and June 18 through 22 and November 26 through 30, 2001 (after decimalization). Trading costs in this study are measured as the difference between an order's volume-weighted average execution price and a pre-execution benchmark price, the opening midquote (the midpoint between the quoted bid and ask prices). Using the opening midquote benchmark, the author found that average trading costs for orders of 10,000 shares and above fell about 19 cents per share (or about 49 percent), from about 39 cents per share to about 20 cents per share during the roughly 9 months after NASDAQ's adoption of 1-cent ticks. Unlike the other two studies we identified, the third study reported that costs for institutional investors had increased.
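The cost measure described above, the gap between an order's volume-weighted average execution price and the opening midquote, can be sketched in a few lines. The function names and fill data below are hypothetical illustrations, not the study's actual code or data:

```python
# Minimal sketch of a VWAP-versus-opening-midquote trading cost measure,
# as described in the NASDAQ study above. All names and values are
# hypothetical; this is an illustration, not the study's methodology code.

def vwap(fills):
    """Volume-weighted average price over a list of (shares, price) fills."""
    total_shares = sum(qty for qty, _ in fills)
    return sum(qty * price for qty, price in fills) / total_shares

def cost_per_share(fills, opening_bid, opening_ask, side="buy"):
    """Per-share cost versus the opening midquote benchmark; positive
    values mean the order executed at prices worse than the benchmark."""
    midquote = (opening_bid + opening_ask) / 2
    sign = 1 if side == "buy" else -1
    return sign * (vwap(fills) - midquote)

# A hypothetical 10,000-share buy order filled in three pieces at rising
# prices, against an opening quote of $19.98 bid / $20.02 ask.
fills = [(4000, 20.05), (3000, 20.10), (3000, 20.15)]
print(f"cost: {cost_per_share(fills, 19.98, 20.02):.3f} $/share")
```

The sign flip for sell orders reflects that, for a seller, executing below the benchmark is the unfavorable outcome.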
However, this study relied on an indirect measure of these costs for its analysis. To assess the change in trading costs, the authors of this study examined a sample of 265 mutual funds chosen from a database of mutual funds compiled by Morningstar, an independent investment research firm. These funds were selected using two criteria: investing predominantly in U.S. stocks and having at least 90 percent of assets invested in stocks. However, the study did not obtain these mutual funds' actual trading data but instead attempted to identify costs by comparing the funds' daily returns (gain or loss from the prior day's closing price) to the daily returns of a synthetic benchmark for the periods before and after decimalization, from April 17 through August 25, 2000, and from April 16 through August 24, 2001. After finding that the returns of actively managed mutual funds were generally lower than the returns of the benchmark in the period after decimals were introduced, the authors attributed the lower returns to increases in the trading costs for these funds. Although this is a plausible explanation for these funds' lower returns, some of the market participants we spoke with indicated that other factors could also account for the results. For example, officials from a large mutual fund company that had reviewed the study told us that the lower returns may have resulted from the 3-year decline in stock prices in the market. As the value of a fund's assets declines, the fund can report higher expenses because its fixed operating costs represent a correspondingly larger portion of its total costs, which would reduce reported returns. In addition, an academic regarded as an expert in applying technology to the financial markets noted that the lower returns could be the result of many of the funds in the study's sample having similar holdings that all performed more poorly than those in the benchmark portfolio in the months following decimalization.
In addition to analyzing data from trade analytics firms and academic studies, we interviewed 23 institutional investors that together represented nearly one-third of the assets managed by firms in a ranking of the 300 largest money managers. Representatives of 20 of these firms said that their trading costs had fallen or stayed about the same since decimals were implemented (table 4). As shown in table 4, 15 of these firms said that their trading costs had declined since decimals were introduced. These firms included large mutual fund companies, pension fund administrators, a hedge fund, and smaller asset management firms, indicating that cost declines in our sample were not limited to larger firms with greater trading resources. For example, a representative of a small money management firm not ranked among the 300 largest noted that trading costs had decreased since decimalization. In addition, the president of a hedge fund ranked in the lower half of the rankings told us that his firm's trading costs had declined significantly since 2001. As the table shows, 5 of the 23 firms we interviewed said that their costs had remained about the same since decimal pricing was implemented. For example, representatives of one large mutual fund firm that measures its trading costs internally as well as through a trade analytics firm told us that their firm's transaction costs had not increased since decimal pricing was introduced but instead had been flat to trending down. Three institutional investors reported higher trading costs. One of these firms, a large mutual fund manager, attributed the increases to heightened levels of volatility following the reduction in tick size; in his view, for example, stock prices tended to trade in a wider daily range since decimals were implemented than they had before.
The other two firms were a mutual fund firm and a mid-size asset management firm; officials from the mutual fund noted that trading had become more involved and that completing trades of similarly sized orders took longer since the conversion to decimal pricing. In discussing institutional investors' views on their trading costs since decimal pricing began, we found that the precision with which these firms measured their trading costs varied. Many firms told us that they used outside trade analytics firms, such as Abel/Noser, Elkins/McSherry, ITG, and Plexus Group, to measure their transaction costs. Representatives of some firms and a state pension plan administrator noted that their organizations used trade cost analysis tools from more than one trade analytics firm. The head of trading for one firm said that his firm had been using a trade analytics firm to measure its trading costs for 10 years. Some firms said that they had developed in-house capabilities to measure their own transaction costs. These systems appeared to vary in their levels of sophistication. For example, representatives of a large money management firm told us that they had developed a sophisticated cost measurement system that shows them what a trade should cost before it is executed; the system takes into account factors such as the executing broker and the market venue where the trade executes. A managing partner of another firm noted that it measures the costs of completed trades in-house, including the bid-ask spreads and the execution prices, and compares them to the volume-weighted average price for the trades it executes. Some money managers told us that their firms did not measure their trading costs. For example, officials from one firm said that while they did not formally measure costs on their own, they sometimes were provided with data on the costs of their trades by their own clients, who use trade analytics firms to evaluate the costs of using various money managers.
Also, another state pension plan administrator told us that while his organization does not currently measure its trading costs, it plans to do so within the next 2 years. In addition to lower spreads and reduced market impact costs, some market participants noted that another measure of market quality—price volatility—had also improved since decimal pricing was implemented. According to some market participants, the smaller 1-cent ticks generally slowed price movement in the markets and narrowed the range of prices at which stocks trade over a given period, such as a day. For example, a noted expert on market microstructure told us that price volatility has declined since the reduction in tick size because price changes occur in smaller increments. Our own study of NYSE and NASDAQ stocks using TAQ data showed that price volatility has declined since decimal pricing was implemented. To assess the change in volatility for the stocks in our sample, we calculated the percentage change in price for each one-hour increment (between 10 a.m. and 4 p.m.) each trading day. We also calculated the percentage change in price for each stock between 10 a.m. and 4 p.m. For each stock, we then calculated the standard deviation of these percentage changes, which measures how widely the individual price changes are dispersed around the average change, and reported the median (that is, the middle) standard deviation. As shown in table 5, the volatility of the price changes in the stocks in our sample decreased after decimal prices were implemented, both for the hourly percentage changes between 10 a.m. and 4 p.m. each trading day and for the percentage change from 10 a.m. to 4 p.m. each trading day. These findings were consistent with a recently published academic study. However, not all participants attributed the reduced price volatility to decimal pricing.
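The volatility measure described above can be illustrated for a single stock as the standard deviation of its hourly percentage price changes. The hourly prices below are hypothetical, and this sketch covers one stock; our analysis took the median of these standard deviations across the stocks in the sample:

```python
# Simplified sketch of the volatility measure described above: the
# standard deviation of hourly percentage price changes over a trading
# day. The hourly prices (10 a.m. through 4 p.m.) are hypothetical.

import statistics

def hourly_pct_changes(prices):
    """Percentage change for each one-hour increment."""
    return [(b - a) / a * 100 for a, b in zip(prices, prices[1:])]

def volatility(prices):
    """Standard deviation of the hourly percentage changes, which
    measures how widely the changes are dispersed around their mean."""
    return statistics.stdev(hourly_pct_changes(prices))

prices = [20.00, 20.10, 19.95, 20.05, 20.00, 20.20, 20.15]  # 10am-4pm
print(f"hourly volatility: {volatility(prices):.3f} percentage points")
```

A stock whose price never moved during the day would score zero on this measure; wider hourly swings produce a larger standard deviation.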
For example, a representative of a trade analytics firm noted that during the Internet boom investors rapidly increased their positions in technology-sector stocks, and when the prices of these stocks fell—which was coincident with the change to decimal pricing—investors quickly reversed their positions. By selling quickly, these investors incurred greater market impact costs. As this type of trading activity subsided in ensuing years, markets became calmer, which made trading less costly. Although some major elements of market quality—trading costs and volatility—have improved since decimal pricing began, another market quality element—transparency—appears to have been negatively affected. The transparency of a market can depend on whether large numbers of shares are publicly quoted as available to buy or sell. The various sources of data we collected and analyzed indicated that after decimal pricing and the 1-cent tick were implemented in 2001, the volume of shares shown as available for trading—or displayed depth—on U.S. stock markets declined significantly. For example, SEC-required studies of the impact of decimal pricing on trading, among other things, in U.S. markets showed that the average number of shares displayed for trading on NYSE and NASDAQ at the best quoted prices declined by about two-thirds between a sample period before the markets converted to decimal pricing and a period soon after the conversion took place (table 6). In addition, our own study of 300 matched pairs of NYSE and NASDAQ stocks found that liquidity at the best quoted prices declined significantly. According to our analysis, the average number of shares displayed at the best quoted prices fell by 60 percent on NYSE and 34 percent on NASDAQ over the nearly 5-year period between February 2000 and November 2004 (fig. 8). The greatest declines occurred around the time that the markets converted to decimal pricing and 1-cent ticks.
In its impact study, NASDAQ attributed the declines in the volume of shares displayed at the best prices to the conversion to decimal pricing. The number of shares displayed as available for trading also declined at prices away from the best quoted prices. For example, the SEC-mandated NYSE impact study showed that the number of shares displayed for trading within about a dollar of the midpoint between the best quoted prices generally declined to well under half of what it had been when the tick size was 1/16 of a dollar. NASDAQ's own impact study reported that the cumulative number of shares displayed for trading declined by about 37 percent within a fixed distance, equal to twice the size of the average quoted spread, from the midpoint between the best quoted prices. This decline in the volume of shares displayed across all prices—called market depth—is particularly significant for institutional investors because they often execute large orders over multiple price points, some of which are inferior to the best quoted prices. Various reasons can explain the reduced number of shares displayed at the best prices. First, the number of shares displayed for trading at the best price likely declined because the decrease in the minimum tick size created more prices at which orders could be displayed. The reduction in tick size increased the number of price points per dollar at which shares could be quoted from 16, under the previous minimum tick size of 1/16 of a dollar, to 100. With more price points available for entering orders, some traders that may previously have priced their orders in multiples of 1/16 to match the best quoted price may now instead be sending orders priced 1, 2, or 3 cents away from the best price, depending on their own trading strategies. As a result, the volume of shares displayed as available at the best price is lower because more shares are now distributed over nearby prices.
In addition to there being fewer shares displayed at the best price, displayed market depth may also have declined because the reduction in tick size reduced the incentives for large-order investors to display their trading interest. Since the implementation of penny ticks, market participants said, displaying large orders has been less advantageous than before because other traders can now submit orders priced one penny better and have those orders executed ahead of the larger orders. This trading strategy, called "penny jumping" or "stepping ahead," harms institutional investors that display large orders and can increase their trading costs. For example, suppose an investor wants to purchase a large quantity of shares of a stock (e.g., 15,000 shares) and submits an order to buy at a price of $10.00 (a limit order). Another trader, seeing this large trading interest, submits a smaller limit order (e.g., 100 shares) to buy the same stock at $10.01. The smaller order will be executed against the first arriving market order (a market order is executed at the best price prevailing at the time it is presented for execution). As a result, the investor's larger order will go unexecuted until that investor cancels the existing order at $10.00 and resubmits it at a higher price. In this case, the investor's trading costs increase because of the price movements that occur in the process of completing a large order (i.e., market impact). The potential for stepping ahead has increased because in a 1-cent tick environment the financial risk to traders stepping ahead of larger displayed orders is greatly reduced. For example, assume a trader who steps ahead of a larger order offering to buy shares at $10.00 by entering a limit order to buy 100 shares at $10.01 is executed against an incoming market order.
However, if the price of the stock then appears ready to decline, such as when additional sell orders are entered at prices lower than $10.00, the trader who previously stepped ahead can quickly enter an order to sell the 100 shares back to the large investor whose order is displayed at $10.00. In such situations, the trader's loss is only one penny per share, whereas in the past, traders stepping ahead would have risked at least 1/16 of a dollar per share. Many market participants we spoke with acknowledged that institutional investors are reluctant to display large orders in the markets following the switch to 1-cent ticks for fear that competing traders would improve on the best quoted prices by one penny and drive up the prices paid to execute large orders. The potential that the reduced tick size would increase the prevalence of stepping ahead was acknowledged before decimal pricing was implemented. For example, in 1997 a prominent academic researcher predicted that problems with stepping ahead would increase following decimalization because smaller price increments would make it easier (i.e., cheaper) for professional traders to step in front of displayed orders, resulting in fewer shares being quoted and less transparency in the markets. However, some market participants we interviewed acknowledged that stepping ahead had been a problem even before decimal pricing was implemented. For example, representatives of a hedge fund told us they had worried about being stepped ahead of if they revealed their interest in trading large amounts of a stock by entering limit orders for large numbers of shares even when ticks were 1/8 and 1/16 of a dollar. An SEC staff person told us that instances of orders being stepped ahead of have increased since the penny tick was implemented, but he did not think this negated the overall benefits of decimal pricing.
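The arithmetic behind the reduced risk in the example above can be made explicit: because the displayed large order acts as a backstop one tick below, the stepping-ahead trader's worst case is roughly one tick per share. A minimal sketch, using the 100-share example from the text:

```python
# Worked arithmetic for the stepping-ahead example above. The trader who
# steps ahead can resell to the displayed large order one tick below the
# purchase price, so the maximum loss is about one tick per share.

def step_ahead_risk(shares, tick):
    """Maximum dollar loss from reselling to the displayed order one
    tick below the stepped-ahead purchase price."""
    return shares * tick

penny_risk = step_ahead_risk(100, 0.01)       # 1-cent tick environment
fraction_risk = step_ahead_risk(100, 1 / 16)  # old 1/16-of-a-dollar tick

print(f"risk with 1-cent tick: ${penny_risk:.2f}")
print(f"risk with 1/16 tick:   ${fraction_risk:.2f}")
```

The comparison shows the downside shrinking from $6.25 to $1.00 on a 100-share order, a reduction of more than 80 percent, which is why the strategy became more attractive after decimalization.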
Although markets became less transparent following decimalization, institutional investors and traders appear able to execute large orders at lower cost by adapting their trading strategies and technologies. For example, the academic study that examined around 120,000 large orders submitted for NASDAQ stocks found that the average proportion of total order size that was executed (filled) increased slightly, from 78 percent before the change to decimal pricing to about 81 percent about 6 months after the change. Similarly, the study found that the length of time required to fill orders—measured from the time an order arrived at a NASDAQ dealer to the time of the last completed trade—decreased from about 81 minutes before decimal pricing to about 78 minutes 6 months after. Eight of the institutional investment firms we contacted for this report also provided information about their experiences in completing trades. Officials from seven of the eight told us that their fill rates had either stayed about the same or increased. An official at one firm noted that the proportion of orders that were completely executed had risen by as much as 10 percent in the period following decimal pricing's introduction. One of the ways that institutional investors have adapted their trading strategies to continue trading large orders is to break these orders up into a number of smaller lots. These smaller orders can more easily be executed against the smaller number of shares displayed at the best prices. In addition, not displaying their larger orders all at once prevents other traders from stepping ahead. Evidence of this change in investors' trading strategy is the decline in the average executed trade size on NYSE and NASDAQ. As table 7 shows, the average size of trades executed on these markets has declined since 1999 by about 67 percent on NYSE and about 41 percent on NASDAQ.
With average trade size down, some market participants noted that at least 4 to 5 times as many trades are required to fill some large orders since decimalization. For example, a representative of a large mutual fund company said that his traders have always broken their funds' large orders into smaller lots so that they could trade without revealing their activity to others in the marketplace; before decimalization, completing an order may have required 10 trades, but following the change to decimal pricing a similar order might require as many as 200 smaller trades. Referring to the increased difficulty of locating large blocks of shares available for trading, one representative of a money management firm stated that "decimalization changed the trading game from hunting elephants to catching mice." In fact, the number of trades that NYSE reported executing on its market increased more than fourfold between 1999 and 2004, rising from about 169 million trades to about 933 million trades. To facilitate the trading of large orders while minimizing market impact costs, many market participants said that they had increased their use of electronic trading techniques. Many of these techniques involve algorithmic execution strategies, which are computer-driven models that segment larger orders into smaller ones and transmit these over specified periods of time and trading venues. The simplest algorithms may just break a large order into smaller pieces and route these to whichever exchange or alternative trading system offers the best price. Institutional investors often obtain these algorithms as part of systems offered by broker-dealers and third-party vendors. They may also develop them using their own staff and integrate them into the desktop order management systems they use to conduct their trading.
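The order-segmentation idea behind the simplest algorithms described above can be sketched briefly. Real execution algorithms weight slices by expected volume, timing, and venue; the hypothetical function below simply splits a parent order into equal child orders:

```python
# Minimal sketch of the order-slicing idea behind simple execution
# algorithms: break one large "parent" order into smaller child orders
# to be released over time. This even-split function is a hypothetical
# illustration; production algorithms weight slices by volume and price.

def slice_order(total_shares, n_slices):
    """Split a parent order into n_slices child orders whose sizes sum
    to total_shares, spreading any remainder across the first slices."""
    base, remainder = divmod(total_shares, n_slices)
    return [base + (1 if i < remainder else 0) for i in range(n_slices)]

# A 15,000-share parent order released as 7 child orders.
children = slice_order(15000, 7)
print(children)
print(f"total: {sum(children)} shares in {len(children)} trades")
```

Because no single child order reveals the full size of the parent order, this kind of segmentation limits the information leakage and stepping-ahead risks discussed earlier.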
One of the primary purposes of using these algorithmic trading systems is to conduct trading in a way that prevents other traders from learning that a large buyer or seller is active in the market. Institutional investors want tools that allow them to trade more anonymously to reduce the extent to which others can profit at their expense, such as when other traders, realizing that a large buyer is active, also buy shares, quickly causing prices to rise, in hopes of selling these now more expensive shares to the large buyer. Several market participants told us that the anonymity that algorithms provide reduces the potential for other traders to learn that a large buyer or seller is active in the market (known as information leakage), thus reducing the likely market impact of executing the entire order. The use of these tools is growing. A 2004 survey by The Tabb Group, a financial markets consulting firm, of more than 50 head and senior traders at institutional investor firms reported that over 60 percent of these firms were using algorithmic trading vehicles. The report noted that this widespread adoption rate was higher than anticipated. Many of the market participants we contacted also told us they were actively using algorithms in their trading activities, and those that were not currently using algorithms generally indicated that they planned to begin using them in their trading strategies in the near future. In its report, The Tabb Group predicted that algorithmic trading will grow by almost 150 percent over the next 2 years. To locate additional shares available for trading that are otherwise not displayed, institutional investors are also increasingly using alternative trading venues outside the primary markets of NYSE and NASDAQ to execute their large orders at lower cost. For example, institutional investors are conducting increasing portions of their trading on ECNs.
Originally, ECNs were broker-dealers that operated as real-time electronic trading markets, allowing their customers to enter orders for stocks and obtain executions automatically when the prices of the orders they entered matched those of orders entered by other customers. Recently, ECNs have entered into formal associations with existing stock exchanges. Use of ECNs has been a growing trend. According to The Tabb Group, 88 percent of the institutional investor firms it surveyed responded that they traded using ECNs. Furthermore, a 2004 survey by Institutional Investor magazine asked the trading staff of institutional investor firms to identify their preferred venues for executing stock trades; the survey reported that three of the top five venues for institutional stock trade execution were ECNs. According to data we obtained from a financial markets consulting firm, ECNs' share of trading in NASDAQ and NYSE stocks increased between 1996 and 2003. For example, ECN trading volume increased from about 9 percent of all NASDAQ trading in 1996 to about 40 percent of total NASDAQ trading volume in 2003 (fig. 9). The percentage of trading volume in NYSE stocks conducted through ECNs has also increased, though to a much lesser degree than these organizations' trading in NASDAQ stocks. According to some market participants, ECNs have been less successful in gaining market share in NYSE stocks because of rules that result in most orders being sent to that exchange. For example, one regulation—the trade-through rule—requires that broker-dealers send orders to the venue offering the best price, and in most cases NYSE has the best quoted price for its listed stocks. However, in a report issued by a financial market consulting firm, ECN officials called the trade-through rule anticompetitive because it fails to acknowledge that some investors value the certainty and speed of execution more than price.
They noted that under current rules, NYSE specialists have as long as 30 seconds to decide whether to execute an order sent to them or take other actions. During this time, market participants told us, the price of the stock can change, and their order may not be executed or may be executed at an undesirable price. On April 6, 2005, SEC approved Regulation NMS (National Market System), which, among other things, limits the applicability of trade-through requirements to quotes that are immediately accessible. Institutional investors we spoke with highlighted anonymity, speed, and the quality of the prices they receive as reasons for their increased use of ECNs. The respondents to The Tabb Group survey indicated that their firms used ECNs to reduce market impact costs and to take advantage of lower fee structures. Many market participants we interviewed and studies we reviewed also indicated that trading using ECNs lowered institutional trading costs. According to market participants we interviewed, decimalization accelerated technology innovation, which they believe has been significant in reducing trading costs, primarily by providing a means for investors to directly access the markets and reducing the need for intermediation. However, many acknowledged that increasing use of ECNs has been a growing trend since 1997, when SEC implemented rule changes that allowed ECNs to better compete against NASDAQ market makers. Other alternative trading venues that institutional investors are increasingly using to execute their large orders are block trading platforms operated by broker-dealers, called crossing networks. These networks are operated by brokers such as ITG, Liquidnet, and Pipeline Trading Systems. Crossing networks generally provide an anonymous venue for institutional investors to trade large blocks of stock (including orders involving tens or hundreds of thousands of shares) directly with other institutional investors.
For example, one crossing network integrates its software with the investor’s desktop order management system so that all of the investor’s orders are automatically submitted to this crossing network in an effort to identify a match with another institutional investor. Once a match is identified, the potential buyer and seller are notified, at which time they negotiate the number of shares and price at which a trade would occur. The heads of stock trading for two large money management firms told us an advantage of using crossing networks is that they minimize market impact costs by allowing investors to trade in large blocks without disclosing their trading interests to others in the markets. Also, the chief executive officer of a crossing network noted that the absence of market intermediaries in the negotiation of trades on crossing networks provides the customers’ traders with the ability to control the price and quantity of their executions. However, we were told that crossing networks may not be the preferred strategy for all kinds of institutional orders because orders remain unexecuted if a natural match cannot be found. Crossing networks are gaining in prominence among institutional investors as a destination of choice for trading large quantities of stock. According to The Tabb Group’s survey of head and senior traders, 70 percent of all firms reported using crossing networks. In Institutional Investor’s 2004 survey, Liquidnet, a crossing network established in 2002, ranked second on the list of institutional investors’ favorite venues for trade executions. Despite advances in electronic trading technologies that give institutional investors increased access to markets, some institutional investors continue to use full-service brokers to locate natural sources of liquidity as they did before decimal pricing began. 
According to institutional investor officials we interviewed, with fewer shares displayed as available for trading and reductions in average trade size, they are more patient about the time required to completely execute (fill) large orders using brokers in this way. In addition, some noted that they increasingly use NYSE floor brokers to facilitate the trading of large orders in less-liquid stocks, explaining that floor brokers have information advantages in the current market structure that help minimize adverse price changes. In addition to the increased use of electronic trading, overall market conditions also likely helped lower trading costs for institutional investors. For example, prices on U.S. stock markets began a multiyear downturn around 2000. As stock prices declined, asset managers faced increased pressure to manage costs and boost investment returns. Representatives of all four leading firms we interviewed that analyze institutional investors' trading activity noted that the declining market that persisted after the implementation of decimal pricing had also led to reduced costs. Representatives of two of these trade analytics firms noted specifically that institutional buyers and sellers appeared more cost sensitive as a result of the 3-year declining stock market, which caused investment returns to decline substantially. This increased the incentive for institutional investors to take actions to lower their trading costs as a way to offset some of the reduced market returns. Although overall securities industry profits have returned to levels similar to those of the past, some market intermediaries, particularly broker-dealers acting as exchange specialists and NASDAQ market makers, have been significantly affected by the implementation of decimal pricing.
Between 2000 and 2004, exchange specialists and NASDAQ market makers generally saw their revenues and profits from stock trading fall, forcing some smaller market intermediaries out of the market. Decimal pricing was not the only force behind these declines, however. Sharp declines in the overall level of prices in the stock market, the growing use of trading strategies that bypass active intermediary involvement, and heightened competition from ECNs and other electronic trading venues have affected revenues and profits. We found that intermediaries were adapting to the new conditions by changing their business practices—for example, by investing in electronic trading devices and data management systems, reducing the size of their trading staffs, or changing how they priced their services. In response to the negative conditions that some believe exist in U.S. stock markets, a proposal has been made to conduct a pilot test of the use of a higher minimum tick for trading. Many of the market intermediaries but fewer than half of the institutional investors we contacted favored this move. The business environment for the securities industry as a whole, which saw reduced revenues after 2000, appears to be improving. The Securities Industry Association (SIA), which represents the broker-dealers holding the majority of assets in the securities industry, has compiled data on all of its member broker-dealers that have conducted business with public customers in the United States over the last 25 years. As shown in figure 10, the data SIA compiles are derived from filings broker-dealers are required to make with the SEC and detail, among other things, revenues and expenses for market activities such as trading in stocks, debt securities, and options and managing assets. SIA’s 2004 data show that industry revenues of $237 billion, while down from the height of the bull market in 2000, are now similar to revenues earned before the unprecedented gains of 2000. 
In addition, the industry’s total pretax net income figures of $24.0 billion in 2003 and $20.7 billion in 2004 represent some of the highest levels of pretax industry profits of the past 25 years. Further, our review indicated these improved industry conditions are not only the result of improved performance among the largest firms. When we examined the trend in these data after excluding the results for the 25 largest broker-dealers, the revenue and net-income trends for the remaining firms revealed the same pattern of improvement. Despite these improvements, some market intermediaries, such as stock exchange specialists, have been negatively affected by the shift to decimal pricing. Stock exchange specialists buy or sell shares from their own accounts when insufficient demand exists to match orders from public customers directly. The lower spreads that have prevailed since decimal pricing have reduced the income that exchange specialists can earn from this activity. In addition, the number of shares displayed as being available for purchase or sale has declined, leaving specialist firms with less information about market trends and thus less ability to trade profitably. According to NYSE data, between 2000 and 2004 aggregate NYSE specialist revenues declined by more than 50 percent, falling from $2.1 billion to $902 million (table 8). Further, since decimal pricing began, the extent to which specialist firms participate in trades on their own exchanges has been low, falling below predecimalization levels. The participation rate shows the percentage of the total shares traded represented by trades conducted by specialists as part of their obligation to purchase shares when insufficient demand exists or sell shares when insufficient numbers of shares are being offered. After climbing during the first year decimal pricing was implemented, the percentage of trades on NYSE in which NYSE specialists participated declined from 15.1 percent in 2001 to 10.2 percent in 2004 (fig. 11). 
The trend toward smaller order sizes and more trade executions, which has accelerated since the introduction of decimal pricing (as discussed earlier in this report), has also affected the operating expenses of exchange specialists. The average trade execution size on the NYSE dropped from 1,205 shares per execution in 1999 to 393 shares per execution in 2004, so that specialists now generally process more trades to execute orders than they did before decimal pricing began. This trend toward greater numbers of executions, which many market participants indicated was exacerbated by decimal pricing, has required exchange specialists to absorb additional processing costs and make related investments in more robust data management and financial reporting tools. For example, each trade that is submitted for clearance and settlement carries a fee, paid to the National Securities Clearing Corporation, of between $0.0075 and $0.15 per trade. Several smaller regional exchange specialist firms we spoke with highlighted these kinds of increased operating costs as significant to their ability to continue profitable operations. Additionally, a floor brokerage firm we spoke with said that other charges had contributed to its declining operating performance. These charges included those from clearing firms, which typically charge in the range of $0.20 per 100 shares to process trades, and execution fees from exchange specialists related to the processing of more trades and typically paid by floor brokers. As shown in table 9 below, average trade size has declined over the past 6 years as the number of executions on NYSE has risen. As the table shows, volumes have remained relatively consistent since 2002, even though exchange specialists and floor brokers have seen their revenue and profits decline during this period. Decimal pricing has also generally negatively affected the profitability of firms that make markets in NASDAQ stocks. 
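The cost pressure on specialists described above can be illustrated with simple arithmetic. The sketch below is a hypothetical example, not an actual fee calculation: the 1 million-share volume and the flat $0.05 per-trade fee are assumptions (the fee is chosen from within the $0.0075 to $0.15 range cited above), while the average execution sizes are the NYSE figures for 1999 and 2004.

```python
import math

def clearing_cost(share_volume, avg_trade_size, fee_per_trade):
    """Estimate clearing fees when each execution incurs a flat per-trade fee."""
    n_trades = math.ceil(share_volume / avg_trade_size)
    return n_trades, n_trades * fee_per_trade

# Hypothetical order flow of 1 million shares; the $0.05 per-trade fee is an
# assumed illustrative value, not an actual NSCC rate schedule.
trades_1999, cost_1999 = clearing_cost(1_000_000, 1_205, 0.05)  # 1999 avg size
trades_2004, cost_2004 = clearing_cost(1_000_000, 393, 0.05)    # 2004 avg size

print(trades_1999, round(cost_1999, 2))  # 830 trades
print(trades_2004, round(cost_2004, 2))  # 2545 trades
```

Holding volume constant, cutting the average execution size by roughly two-thirds about triples the number of trades that must be cleared, and per-trade fees scale accordingly.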
Traditionally, these firms earned revenue by profitably managing their inventories of shares and earning the spread between the prices at which they bought and sold shares. With the reduced bid-ask spreads and declines in displayed liquidity that have accompanied decimal pricing, the ability of broker-dealers to profitably make markets in NASDAQ stocks has been significantly adversely affected. For example, an official from one firm said that penny spreads had severely curtailed the amount of revenues that market makers could earn from their traditional principal trading. Table 10 presents SIA data on all NYSE members, which SIA indicates is often used as a proxy for the entire industry. As the table shows, these firms’ revenues from NASDAQ market making activities, after rising between 1999 and 2000, declined about 73 percent between 2000 and 2004, falling from nearly $9 billion to about $2.5 billion. Firms acting as NASDAQ market makers have also seen their operating expenses rise since decimal pricing began. Officials at one broker-dealer said that because the average trade size is smaller, market makers now generally process more trades to execute the same volume. This increase in the number of executions has required NASDAQ market makers to absorb additional processing and clearing costs. Additionally, the increased number of executions associated with decimal pricing has required some NASDAQ market makers to increase their investments in information technology systems. Table 11 shows the reduced average order size on the NASDAQ market over the past 6 years. Declining revenues and increased operating expenses since the implementation of decimal pricing have encouraged some firms to merge with other entities and forced other smaller market intermediaries out of the market, accelerating a trend toward consolidation among stock exchange specialists and NASDAQ market makers. 
Generally, to date, two developments have contributed to the decline in the number of specialists: acquisitions of smaller firms by larger entities and, on the regional exchanges, smaller specialist firms and proprietorships leaving the business. As shown in table 12, the number of specialist firms operating on various floor-based stock exchanges has declined significantly in recent years. The number of firms that make markets on NASDAQ has similarly declined. Between 2000, when 491 firms were acting as NASDAQ market makers, and 2004, the number of firms making markets in NASDAQ stocks declined to 258—a drop of more than 47 percent. According to an industry association official, NASDAQ market-making activity is increasingly not a stand-alone profitable business activity with firms but instead is conducted to support other lines of business. For example, an official of a broker- dealer that makes markets in NASDAQ stocks told us that his firm has made no profits on its market-making operations in the last 3 years but continues the activity in order to present itself as a full-service firm to customers. Although fewer firms are now acting as market makers, the overall NASDAQ market has not necessarily been affected. Since 2000, the number of stocks traded on NASDAQ has declined from 4,831 to 3,295, potentially reducing the need for market makers. In addition, some firms that continue to make markets have expanded the number of stocks in which they are active. For example, one large broker-dealer expanded its market-making activities from 500 stocks to more than 1,500. A NASDAQ official told us that with reduced numbers of stocks being traded, the average number of market makers per stock has increased since decimal pricing began. As shown in table 13, our analysis of data from NASDAQ indicated that although the number of NASDAQ market makers has declined, the number of firms making markets in the top 100 most active NASDAQ stocks actually grew between 1999 and 2004. 
Improved technology has likely helped market makers increase their ability to make markets in more stocks. An official at one market maker we spoke with explained that his firm had invested in systems that automatically update the firm’s price quotes across multiple stocks when overall market prices change, allowing the firm to manage the trading of more stocks with the same or fewer staff. The use of such technology helps explain why the number of market makers per stock has not fallen as the overall number of market-making firms has declined. Although decimal pricing affected market intermediaries’ operations, the changes in these firms’ revenues, profits, and viability are not exclusively related to the reduction in the minimum tick size. One major impact on firms’ revenues since 2000 has been the sharp multiyear decline in overall stock market prices. Securities industry revenues have historically been correlated with the performance of U.S. stock markets (fig. 12). After 5 consecutive years of returns exceeding 10 percent, prices on U.S. stock markets began declining in March 2000, and these losses continued until January 2003. The performance record for U.S. stocks during this period represents some of the poorest investment returns for U.S. stocks over the last 75 years. Because intermediary revenues tend to be correlated with broader stock market returns, as measured by the Standard & Poor’s 500 (S&P 500) Stock Index, many market observers we spoke with told us that the 3-year down market, which coincided with the transition to decimal pricing, contributed to reduced intermediary revenues and profits. The widespread emergence of technology-driven trading techniques, such as algorithmic trading models, has also reportedly affected market intermediaries negatively. These new techniques allow institutional investors, which account for the bulk of stock trading volume, to execute trades with less active intermediary involvement. 
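The automated quote-updating systems described above can be sketched in miniature. The function below is purely illustrative and is not any firm's actual system: the symbols, betas, and half-spread are hypothetical assumptions, and real market-making systems incorporate many more inputs (inventory, order flow, volatility).

```python
def reprice_quotes(mid_prices, betas, index_return, half_spread=0.01):
    """Shift each quoted midpoint by the stock's assumed sensitivity (beta)
    to a broad-market move, then re-post a bid/ask around the new midpoint."""
    updated = {}
    for symbol, mid in mid_prices.items():
        new_mid = mid * (1 + betas[symbol] * index_return)
        updated[symbol] = (round(new_mid - half_spread, 2),  # new bid
                           round(new_mid + half_spread, 2))  # new ask
    return updated

# Hypothetical book: two symbols with assumed betas, and a 1 percent index rise.
quotes = reprice_quotes({"AAA": 20.00, "BBB": 50.00},
                        {"AAA": 1.0, "BBB": 0.5},
                        index_return=0.01)
print(quotes)
```

One pass over the book refreshes every quote, which is why a single desk can manage many more stocks with the same or fewer staff.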
Although only broker-dealers can legally submit trades for execution on U.S. stock markets, broker-dealers reportedly charge only around 1 cent per share to transmit orders sent electronically as part of algorithmic trading models, an amount that represents much less revenue than the standard commission of around 5 cents per share for orders broker-dealers execute using their own trading systems and staff. Market intermediaries’ revenues are also reduced by institutional investors’ increasing use of alternative execution venues such as crossing networks to execute trades. The commissions these venues charge are less than those of traditional broker-dealers, specialists, and market makers. Several market observers said that because crossing networks and algorithmic trading solutions divert order flow from and create price competition for traditional broker-dealers, their increased use is a probable factor in the reduced profitability of exchange specialists, floor brokers, and NASDAQ market makers. The increasing use of ECNs has also likely reduced the revenues earned by market intermediaries. Several market participants we spoke with told us that the increased number of executions on ECNs, such as Bloomberg Tradebook, Brut, and INET, has reduced the profits of exchange specialists, floor brokers, and NASDAQ market makers. ECN executions are done on an agency/commission basis, typically in the range of 1 to 3 cents per share, compared with traditional broker-dealer execution fees of approximately 5 cents per share. As a result, the activities that lower investors’ trading costs can result in lower revenues for market intermediaries. However, market participants noted that institutional investors’ use of electronic trading technologies and ECNs had been increasing even before decimal pricing was implemented. 
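The revenue differences among these execution channels follow directly from the per-share rates cited above. The sketch below is illustrative: the 100,000-share order is a hypothetical size, and the rates are the approximate figures reported to us rather than any broker's actual fee schedule.

```python
def commission_revenue(shares, rate_per_share):
    """Broker revenue from executing an order at a flat per-share rate."""
    return shares * rate_per_share

order = 100_000  # hypothetical institutional order size
full_service = commission_revenue(order, 0.05)  # traditional execution, ~5 cents/share
algorithmic = commission_revenue(order, 0.01)   # algorithmic routing, ~1 cent/share
ecn = commission_revenue(order, 0.03)           # ECN agency execution, upper end of 1-3 cents

print(full_service, algorithmic, ecn)  # 5000.0 1000.0 3000.0
```

At these assumed rates, routing the same order electronically yields one-fifth of the traditional commission, which is consistent with the revenue pressure intermediaries described.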
We found that in response to the changes brought about by decimal pricing and particularly to changes in institutional investors’ trading behavior, many stock market intermediaries had adapted their business operations by making investments in technology to improve trading tools and data management systems, reducing the size of their trading staffs, and changing the pricing and mix of services they offer. Most exchange specialists, floor brokers, NASDAQ market makers, and the broker-dealer staff that trade stocks listed on the exchanges we spoke with had made investments in new technology since the implementation of decimal pricing. For example, some NASDAQ market makers and listed traders were increasingly using aggregation software to locate pools of liquidity instead of relying on telephone contacts with other broker-dealers as they had in the past. Several intermediaries were also using algorithmic trading solutions more frequently to execute routine customer orders, allowing more time for their staff to work on more complex transactions or the trading of less liquid stocks. Other intermediary firms have responded to the more challenging business environment since 2000 by reducing the size of their trading staffs. Most stock broker-dealer firms we spoke with employed fewer human traders in 2004 than they had before 2001. Senior traders at the firms we spoke with cited reduced profits and the increased number of electronic and automated executions as the primary reasons for the reductions in the number of traders they employed. Consequently, although trades executed by broker-dealers using computer-generated algorithms typically generated lower revenues from commissions than traditional executions, the reduced salary and overhead costs associated with employing fewer traders, we were told, had made it easier for some broker-dealers to maintain viable stock trading operations. 
We also found that market intermediaries were adapting to the new business environment by modifying the pricing and mix of the services they offered. For example, instead of trading as principals, using their own capital to purchase or sell shares for customers, many NASDAQ market makers have begun acting as agents that match such orders to other orders in the market. Like ECNs, these market makers charge commissions to match buy and sell orders. The agency/commission model provides the benefit of reduced risk for NASDAQ market makers because they use less of their own capital to conduct trading activity. However, market participants told us that this activity may not generally be as profitable for market makers as traditional principal/dealer trading operations. Other firms had attempted to diversify or broaden their service offerings. For example, a NYSE floor brokerage firm we spoke with was attempting to make up for lost revenues by developing a NASDAQ market-making function. Some firms were also expanding into other product lines. For example, one large NASDAQ market maker we spoke with was attempting to make up for declining stock trading revenue by becoming a more active market maker in other over-the-counter stocks outside those traded on NASDAQ’s National Market System, including those sold on the Over-the-Counter Bulletin Board (OTCBB) market, which trades stocks of companies whose market valuations, earnings, or revenues are not large enough to qualify them for listing on a national securities market like NYSE or NASDAQ. These stocks often trade with higher spreads on a percentage basis than do the stocks listed on the national exchanges. Finally, other firms had moved staff and other resources formerly used to trade stocks to support the trading of other instruments, such as corporate bonds, credit derivatives, or energy futures. The willingness and ability of broker-dealers to assist companies with raising capital in U.S. 
markets also does not appear to have diminished as a result of decimal pricing. Broker-dealers, acting as investment banks, help American businesses raise funds for operations through sales of stock and bonds and other securities to investors. After the initial public offering (IPO), such securities can be traded among investors in the secondary markets on the stock exchanges and other trading venues. Several market observers had voiced concerns that the reduced displayed liquidity and declining ability of market makers to profit from trading could reduce the liquidity for newly issued and less active stocks. In turn, this loss of liquidity could make it more difficult for firms to raise capital. We found that in 2002 and 2003, U.S. stock underwriting activity was down significantly from recent years (fig. 13). However, as figure 13 shows, although stock IPOs are down from record levels of the bull market of the late 1990s, 247 companies offered stock to the public for the first time in 2004—up from the 2002 and 2003 levels of 86 and 85 companies, respectively. Additionally, stock underwriting activity measured in dollars rose to $47.9 billion in 2004, a level consistent with activity in the late 1990s. Of the market participants that we spoke with, most did not believe that decimal pricing had affected companies’ ability to raise capital in U.S. markets, noting that underwriting activity is primarily related to investors’ overall demand for stocks. More IPOs generally occur during periods with strong economic growth and good stock market performance. Institutional investors we spoke with noted that the poor growth of the U.S. economy after 2000 and the associated uncertainty about future business conditions had contributed more than decimal pricing to the reduced level of new stock issues in 2002 and 2003. 
Others cited the new Sarbanes-Oxley Act corporate governance and disclosure requirements, which can increase the costs of being a public company, as a factor that may be discouraging some firms that otherwise would have sought to raise capital from filing an IPO. However, one broker-dealer official said that his firm was less willing to help small companies raise capital because of its reduced ability since decimal pricing began to profitably make a market in the new firm’s stock after its IPO. In response to the drop in displayed liquidity and other negative conditions that some believe to exist in the U.S. stock markets, a proposal has been made to conduct a pilot that would test the use of a higher minimum tick for trading, but opinions among the various market participants we spoke with were mixed. The proposal, which was put forth by a senior official at one NYSE specialist firm, calls on SEC to oversee a pilot program that would test a 5-cent tick on 200 to 300 NYSE stocks across all markets. The purpose of the pilot program would be to provide SEC with information it could use to decide whether larger-sized ticks improve market quality in U.S. stock markets. Proponents believe that larger ticks would address some of the perceived negative conditions such as the reduction in displayed liquidity brought about with the change to penny ticks. For example, some proponents anticipate that investors would be more willing to display large orders because larger tick sizes would increase the financial risk of stepping ahead for other traders. Some also expected that market intermediaries would be more willing to trade in less liquid stocks because of the increased potential to profit from larger spreads. Some proponents of a pilot program believed 5-cent ticks would also increase the cost efficiency, speed, and simplicity of execution for large-order investors, especially in less liquid stocks. 
Most of the market intermediaries we spoke with supported the proposed 5-cent pilot for stocks. Opinions from the representatives of the markets we spoke with were more mixed: officials from floor-based exchanges supported the pilot, officials from two of the electronic markets did not support a change, and officials from two others supported the pilot in the belief that larger ticks would benefit less liquid stocks. Of the 23 institutional investors we talked with, 10 indicated support for a proposed 5-cent pilot, 9 did not see a need for such a pilot, and 4 were indifferent or had no opinion. Of those institutional investors who did not see the need to conduct a pilot, most indicated that 5-cent ticks would not increase liquidity in the markets because the negative conditions that are attributed to decimal pricing are more the result of the inefficiencies they believed existed in markets that rely on executing trades manually rather than using technology to execute them automatically. In addition, officials at several firms noted that such a pilot is unnecessary because institutional investors have already adjusted to penny ticks. For example, an official of a very large institutional investment firm noted that the challenges of locating sufficient numbers of shares for trading large orders had already been solved with advances in electronic trading and crossing networks. Some of these investors were also concerned that conducting such a pilot could have negative consequences. For example, one firm noted that having different ticks for different stocks could potentially confuse investors. Also, a trade association official noted that mandating that some stocks trade only in 5-cent ticks could be viewed as a form of price fixing, particularly for highly liquid stocks that were already trading efficiently using a 1-cent tick. 
An official from a financial markets consulting and research firm noted that if a pilot program were to occur, NASDAQ stocks should be included; this would better isolate the effects of a larger tick size on market quality factors since NYSE appears to be undergoing changes towards a more electronic marketplace, potentially making it more difficult to interpret the study’s results. In addition, some of the 10 institutional investors that supported a pilot of nickel-sized ticks indicated that they saw such ticks as being useful primarily for less-liquid stocks that generally have fewer shares displayed for trading, including smaller capitalized stocks. These proponents told us that 5-cent ticks might increase displayed liquidity for such stocks. In addition, they stated that 5-cent ticks could provide financial incentive for intermediaries to increase their participation in the trading of such stocks, including providing greater compensation for market makers and specialists to commit more capital to facilitate large-order trades. Many also anticipated a reduction in stepping ahead since it would become more costly to do so. SEC staff that we asked about the pilot told us that conducting such a test did not appear to be warranted because, to date, the benefits of penny pricing—most notably the reduction in trading costs through narrower spreads—seem clearly to justify the costs. They also noted that penny pricing does not, and is not designed to, establish the optimal spread in a particular security, which will be driven by market forces. Decimal pricing in U.S. options markets has generally had a more limited impact on the options market than it has on the stock market. Although various measures of market quality, including trading costs and liquidity, have improved in U.S. options markets, factors other than decimal pricing are believed to be the primary contributors. 
First, the tick size reductions adopted for options trading were less dramatic than those adopted in the stock markets. Second, other factors, including increased competition among exchanges to list the same options, the growing use of electronic trading, and a new system that electronically links the various markets, were seen as being more responsible for improvement in U.S. options markets. Options market intermediaries such as market makers and specialists have had mixed experiences since decimal pricing began, with floor-based firms facing declining revenues and profitability and electronic- based firms seeing increased trading revenues and profitability. As part of a concept release on a range of issues pertaining to the options markets, SEC has sought views on reducing tick sizes further in the options markets by lowering them from the current 5 and 10 cents to one penny. Options market participants were generally strongly opposed to such a move for a variety of reasons, including the possibility that the number of quotes could increase dramatically, overwhelming information systems, and the potential for reduced displayed liquidity. One reason that decimal pricing’s impact on options markets was not seen as significant was that the tick size reductions for the options markets were not as large as those adopted for the stock markets. Options markets had previously used a minimum tick size of 1/8 of a dollar (12.5 cents) for options contracts priced at $3 and more and a tick size of 1/16 of a dollar (6.25 cents) for options priced at less than $3. After decimal pricing came into effect, these tick sizes fell to 10 cents and 5 cents, respectively—a decrease of 20 percent. This decline was far less than the 84 percent reduction in tick size in the stock market, where the minimum tick dropped from 1/16 of a dollar (6.25 cents) to 1 cent. 
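The percentage reductions cited above follow directly from the old and new tick sizes (all values in cents); the short check below simply restates that arithmetic.

```python
def pct_reduction(old_tick, new_tick):
    """Percentage decrease when the minimum tick falls from old_tick to new_tick."""
    return round(100 * (1 - new_tick / old_tick))

# Options: 1/8 of a dollar (12.5 cents) to 10 cents, and 1/16 (6.25 cents) to 5 cents.
print(pct_reduction(12.5, 10.0))  # 20
print(pct_reduction(6.25, 5.0))   # 20
# Stocks: 1/16 of a dollar (6.25 cents) to 1 cent.
print(pct_reduction(6.25, 1.0))   # 84
```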
Studies done by four options exchanges in 2001 to assess the impact of decimal prices on, among other factors, options contract bid-ask spreads did not find that decimal pricing had any significant effect on the spreads for options. Most market participants shared this view. For example, an official of a large market-making firm stated that decimalization in the options market was “a small ripple in a huge pond.” Although decimal pricing’s impact was not seen as significant, various measures used to assess market quality have shown improvements in U.S. options markets in recent years. Unlike for stocks, data on trading costs in options markets was not generally available. For example, we could not identify any trade analytics or other firms that collected and analyzed data for options trading. However, some market participants we interviewed indicated that bid-ask spreads, which represent a measure of cost of trading in options markets, have narrowed since the 1990s. In addition, the studies done by SEC and others also indicated that spreads have declined for options markets. In addition to lower trading costs, liquidity, which is another measure that could be used to assess the quality of the options market, has improved since decimal pricing was implemented. According to industry participants we interviewed, liquidity in the options market has increased since 2001. They noted that trading volumes (which can be an indicator of liquidity) had reached historic levels and that many new liquidity providers, such as hedge funds and major securities firms, had entered the market. As shown in figure 14, options trading volumes have grown significantly (61 percent) since 2000, rising from about 673 million contracts to an all-time high of 1.08 billion contracts in 2004. However, some market participants noted that the implementation of decimal pricing in the stock markets had negatively affected options traders. 
According to these participants, the reduced number of shares displayed in the underlying stock markets and quote flickering in stock prices had made buying and selling shares in the stock markets and determining an accurate price for the underlying stocks more difficult. As a result, options traders’ and market makers’ attempts to hedge the risks of their options positions by trading in the stock markets had become more challenging and costly. Market participants attributed the improvements in market quality for U.S. options markets not to decimal pricing but to other developments, including the practice of listing options contracts on more than one exchange (multilisting), the growing use of electronic exchanges, and the development of electronic linkages among markets. These developments have increased competition in these markets. Multilisting, one of the most significant changes, created intense competition among U.S. options markets. Although SEC had permitted multilistings since the early 1990s, the options exchanges had generally tended not to list options already being actively traded on another exchange, but began doing so more frequently in August 1999. According to an SEC study, in August 1999, 32 percent of stock options were traded on more than one exchange, and that percentage rose steadily to 45 percent in September 2000. The study also showed that the percentage of total options volume traded on only one exchange fell from 61 percent to 15 percent during the same period. Almost all actively traded stock options are now listed on more than one U.S. options exchange. Multilisting has been credited with increasing price competition among exchanges and market participants. The SEC study examined, among other things, how multiple listings impacted pricing and spreads in the options market and found that the heightened competition had produced significant economic benefits to investors in the form of lower quoted and effective spreads. 
The study looked at 1-week periods, beginning with August 9 through 13, 1999 (a benchmark period prior to widespread multilisting of actively traded options), and ending with October 23 through 27, 2000 (a benchmark period during which the actively traded options in the study were listed on more than one exchange). During this period, the average quoted spreads for the most actively traded stock options declined 8 percent. Quoted spreads across all options exchanges over this same period showed a much more dramatic change, declining approximately 38 percent. The actual transaction costs that investors paid for their options executions, as measured by effective spreads, also declined, falling 19 percent for options priced below $20 and 35 percent for retail orders of 50 contracts or less. Several academic studies also showed results consistent with SEC’s findings that bid-ask spreads had declined since the widespread multiple listing of the most active options. The introduction of the first all-electronic options exchange in 2000 also increased competition in the options markets. Traditionally, trading on U.S. options markets had occurred on the floors of the various exchanges. On the new International Securities Exchange (ISE), which began operations in May 2000, multiple (i.e., competing) market makers and specialists can submit separate quotes on a single options contract electronically. The quotes are then displayed on the screens of other market makers and at the facilities of broker-dealers with customers interested in trading options, enhancing competition for customer orders. ISE also introduced the practice of including with its quotes the number of contracts available at the quoted price. According to market participants, the additional information benefited retail and institutional investors by providing them with better information on the depth of the market and the price at which an order was likely to be executed. 
Finally, ISE allowed customers to execute trades in complete anonymity and attracted additional sources of liquidity by allowing market makers to access its market remotely. In response, the four floor-based options exchanges—the American Stock Exchange, Chicago Board Options Exchange (CBOE), the Pacific Exchange (PCX), and the Philadelphia Stock Exchange—also began including the number of available contracts with their quotations and offering electronic trading systems in addition to their existing floor-based trading model. Another new entrant, the Boston Options Exchange (BOX), an affiliate of the Boston Stock Exchange, began all-electronic operations in 2004. The result has been increased quote competition among markets and their participants that has helped to further narrow spreads and has opened markets to a wide range of new liquidity providers, including broker-dealers, institutional firms, and hedge funds. Electronic linkages were first introduced to U.S. options markets in 2003, offering the previously unavailable opportunity to route orders among all the registered options exchanges. In January 2003, SEC announced that the options markets had implemented the intermarket linkage plan, so that U.S. options exchanges could electronically route orders and messages to one another. The new linkages further increased competition in the options industry and made the markets more efficient, largely by giving brokers, dealers, and investors better access to displayed market information. According to SEC and others, as a result of this development investors can now receive the best available prices across all options exchanges, regardless of the exchange to which an order was initially sent. Intermarket linkages are as essential to the effective functioning of the options markets as they are to the functioning of the stock markets and will further assist in establishing a national options market system.
Decimal pricing and other changes in options markets appear to have affected the various types of market intermediaries differently. Representatives of firms that trade primarily on floor-based exchanges told us that their revenues and profits from market making had fallen while their expenses had increased. For example, one options specialist said that his firm’s profitability had declined on a per-option basis and was now back to pre-1995 levels. However, he noted that the cost of technology to operate in today’s market had increased substantially and that adverse market conditions and increased competition were more responsible for his firm’s financial conditions than were decimal prices. The increasingly competitive and challenging environment has also led to continued consolidation among firms that trade on the various options exchange floors. According to data from one floor-based options exchange, the number of market intermediaries active on its market declined approximately 22 percent between 2000 and 2004. Market intermediaries and exchange officials we spoke with noted in particular that the smaller broker-dealer firms that trade options and sometimes have just one or two employees had been the most affected, with many either merging with other firms or going out of business because of their inability to compete in the new trading environment. In contrast, the introduction of electronic exchanges and expanded opportunities for electronic trading at other exchanges has been beneficial for some market intermediaries. Officials of some broker-dealers that trade options electronically told us that their firms’ operations had benefited from the increased trading volume and the efficiency of electronic trading. The officials added that other firms, such as large financial institutions, had increased their participation in the options marketplace. 
They also noted that the availability of electronic trading systems and the inherent economies of scale associated with operating such systems had attracted new marketplace entrants, including some hedge funds and major securities firms. For example, representatives of ISE and several broker-dealers told us that the ability to trade electronically had encouraged several large broker-dealers that were not previously active in options markets to begin acting as market makers on that exchange. These firms, they explained, were able to enter the options markets because making markets electronically is less expensive than investing in the infrastructure and staff needed to support such operations on a trading floor. According to market participants we spoke with, these new entrants appeared to have provided increased competition and positively affected spreads, product innovation, and liquidity in the options industry. In 2004, SEC issued a concept release that sought public comments on options-related issues that had emerged since the multiple listing of options began in 1999, including whether the markets should reduce the minimum tick sizes for options from 5 and 10 cents to 1-cent increments. According to the release, SEC staff believed that penny pricing in the options market would improve the efficiency and competitiveness of options trading, as it has in the markets for stocks, primarily by tightening spreads. If lower ticks did lead to narrower spreads for options prices, investors’ trading costs would likely similarly decline. As of May 2004, SEC had received and reviewed comments on the concept release but had taken no further action.
All of the options exchanges and virtually all of the options firms we spoke with, as well as 15 of the 16 organizations and individuals that submitted public comments on SEC’s 1-cent tick size proposal, were opposed to quoting options prices in increments lower than those currently in use (10 and 5 cents, depending on the price of an options contract). One of the primary reasons for this opposition was that trading options contracts in 1-cent increments would significantly increase quotation message traffic, potentially overwhelming the capacity of the existing systems that process options quotes and disrupting the dissemination of market data. For any given stock, hundreds of different individual options contracts can be simultaneously trading, with each having a different strike price (the specified price at which the holder can buy or sell underlying stock) and different expiration date. Because options are contracts that provide their holders with the right to either buy or sell a particular stock at the specified strike price, an option’s value and therefore its price also changes as the underlying stock’s price changes. If options were priced in pennies, market participants said that thousands of new option price quotes could be generated because prices would need to adjust more rapidly to remain accurate than they do using nickel or dime increments. Markets and market participants also expressed concerns that penny pricing would exacerbate an already existing problem for the industry—ensuring that the information systems used to process and transmit price quotations to market participants have adequate capacity. The quotes generated by market makers on the various markets are transmitted by the systems overseen by the Options Price Reporting Authority (OPRA). The OPRA system has been experiencing message capacity issues for several years.
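The scale of the anticipated quote traffic can be illustrated with rough arithmetic: if a quote must be refreshed each time an option's fair value drifts by at least one tick, the number of updates grows inversely with the tick size. The figures in this sketch are hypothetical assumptions, not data from the comment letters.

```python
# Rough illustration: an option quote is refreshed whenever fair value
# moves by at least one tick, so the update count scales inversely with
# tick size. The cumulative daily movement figure is hypothetical.

def quote_updates(total_fair_value_move, tick):
    return round(total_fair_value_move / tick)

daily_move = 2.50  # hypothetical cumulative fair-value movement, in dollars
nickel_updates = quote_updates(daily_move, 0.05)
penny_updates = quote_updates(daily_move, 0.01)  # roughly five times as many
```

Multiplied across the hundreds of contracts listed on a single underlying stock, a roughly fivefold increase per contract is what drives the capacity concerns described above.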
In terms of the number of messages per second (mps) that can be processed, the OPRA system had a maximum capacity of 3,000 mps in January 2000. Since then, the processing and transmission capacity of the system has had to be expanded significantly to accommodate the growth in options quoting volumes, and as of April 2005, the OPRA system was capable of processing approximately 160,000 mps. Prior to the implementation of decimal pricing in 2001, similar concerns about the impact on message traffic volumes were also raised for stocks, but the magnitude of the anticipated increases was much larger for options. To address the capacity constraints in the options market systems thus far, the administrators of the OPRA system have tried to reduce quotation traffic by having the options exchanges engage in quote mitigation. Quote mitigation requires the exchanges to agree to prioritize their own quotes and trade report message volumes so that the amount of traffic submitted does not exceed a specified percentage of the system’s total capacity. As of April 2005, the OPRA administrators were limiting the volume of messages that exchanges were able to transmit to 88,000 mps, based on requests from the six options exchanges. Two market participants that commented on SEC’s proposal noted that with options market data continuing to grow at a phenomenal rate each year, OPRA would have to continue increasing its current message capacity to meet ongoing demand. If penny quoting were to create even faster growth in the total number of price quotes generated, market participants indicated that options exchanges, market data vendors, and broker-dealers would need to spend substantial sums of money on operational and technological improvements to their capacity and communication systems in order to handle the increased amounts of market data. These costs, they said, would likely be passed on to investors.
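Quote mitigation as described above amounts to capping each exchange's submitted traffic so that the aggregate stays within a system-wide budget. The proportional scaling rule and per-exchange rates below are illustrative assumptions; only the 88,000 mps aggregate figure comes from the text.

```python
# Sketch of a quote-mitigation cap: scale each exchange's requested
# message rate down proportionally whenever combined requests would
# exceed the aggregate budget. The allocation rule and the per-exchange
# request figures are assumptions for illustration.

def allocate_capacity(requested, total_capacity):
    total_requested = sum(requested.values())
    if total_requested <= total_capacity:
        return dict(requested)  # no throttling needed
    scale = total_capacity / total_requested
    return {name: int(rate * scale) for name, rate in requested.items()}

# Hypothetical requested rates (mps) for six exchanges.
requests = {"A": 30000, "B": 25000, "C": 20000,
            "D": 15000, "E": 10000, "F": 10000}
caps = allocate_capacity(requests, total_capacity=88000)
assert sum(caps.values()) <= 88000
```

In practice the administrators would also have to decide which message types to shed first; this sketch shows only the rate-scaling idea.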
Another reason that market participants objected to lowering tick sizes for options trading was that doing so would likely reduce market intermediaries’ participation in the markets. Because these intermediaries make their money from the spreads between the bid and offer prices, the narrower spreads that would likely accompany penny ticks would also reduce these intermediaries’ revenues and profits. This, in turn, would reduce these firms’ ability and willingness to provide liquidity, especially for options that are traded less frequently. According to the commenters on the proposal and the participants we contacted, intermediaries would likely become reluctant to provide continuous two-sided markets (i.e., offering to buy and sell options simultaneously) to facilitate trading, since profit potential would be limited by the 80 percent or more reduction in tick size. And because the 1-cent tick could increase the chance of other traders stepping ahead of an order, such intermediaries could become reluctant to display large orders. With the options markets having hundreds of options for one underlying stock, market intermediaries would likely quote fewer contracts, which would further reduce displayed liquidity and market transparency. Market participants also raised other concerns about trading in penny ticks for options. For example, they worried that option prices quoted in 1-cent increments would change too rapidly, resulting in more quote “flickering.” They also noted that the options market could experience some of the other negative effects that have occurred in the stock markets, including increasing instances of stepping ahead by other traders. SEC staff responsible for options markets oversight told us that they would like to see tick sizes reduced in the options markets as a means of lowering costs to investors.
They acknowledged that the benefits of such tick size reductions would have to be balanced with the likely accompanying negative impacts. They noted that recent innovations permit a small amount of trading in pennies and that continued innovation and technological advances may lead to approaches more favorable to investors without substantial negative effects. In advocating decimal pricing, Congress and SEC expected to make stock and options pricing easier for the average investor to understand and to reduce trading costs, particularly for retail investors, from narrower bid-ask spreads. These goals appear to have been met. Securities priced in dollars and cents are clearly more understandable, and the narrower spreads that have accompanied this change have made trading less costly for retail investors. Although the resulting trading environment has become more challenging for institutional investors, they too appear to have benefited from generally lower trading costs since decimal pricing was implemented. In response to the reduced displayed market depth, institutional investors are splitting larger orders into smaller lots to reduce the market impact of their trading and accelerating their adoption of electronic trading technologies and alternative trading venues. As a result of these adaptations, institutional investors have been able to continue to trade large numbers of shares at even less total cost than before. However, since decimal pricing was introduced, the activities performed by some market intermediaries have become less profitable.
Decimal prices have adversely affected broker-dealers’ ability to earn revenues and profits from their stock trading activities. But one of the goals of decimal pricing was to lower the artificially established tick size, and thus the loss of revenue for market intermediaries that had benefited from this price constraint was a natural outcome. Various other factors, including institutional investors’ adoption of electronic technologies that reduce the need for direct intermediation, can also explain some of market intermediaries’ reduced revenues. Nevertheless, the depressed financial condition of some intermediaries would be of more concern if conditions were also similarly negative for investors, which we found was not the case. In response to the changes since decimal pricing began, a proposal has been made to conduct a pilot program to test higher tick sizes. This program would provide regulators with data on the impacts, both positive and negative, of such trading. However, given that many investors and market intermediaries have made considerable efforts to adapt their trading strategies and invest in technologies that allow them to be successful in the penny tick trading environment, the need for increased tick sizes appears questionable. Although decimal pricing has been a less significant development in U.S. options markets, other factors, such as new entrants and the increased use of electronic trading and linkages, have served to improve the quality of these markets. SEC’s proposal to further reduce tick sizes in the options markets has been met with widespread opposition from industry participants, and many of the concerns market participants raised, including the potential for significant increases in quote traffic and less displayed liquidity, appear to have merit. The magnitude of these potential impacts appears larger than those that accompanied the implementation of penny ticks for stocks. 
As a result, it is not clear that additional benefits of the narrower spreads that could accompany mandated tick size reductions would be greater than the potentially negative impacts and increased costs arising from greatly increased quote processing traffic. We provided a draft of this report to SEC for comments and we received oral comments from staff in SEC’s Division of Market Regulation and Office of Economic Analysis. Overall, these staff said that our report accurately depicted conditions in the markets after the implementation of decimal pricing. They also provided various technical comments that we incorporated where appropriate. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this report. At that time, we will send copies of this report to the Chairman and Ranking Minority Member, Subcommittee on Securities and Investments, Senate Committee on Banking, Housing, and Urban Affairs. We will also send copies of this report to the Chairman, SEC. We will make copies available to others upon request. This report will also be available at no charge on GAO’s Web site at http://www.gao.gov. Please contact me at (202) 512-8678 if you or your staff have any questions concerning this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. To determine the impact of decimal pricing on retail investors, we analyzed data from a database of trades and quotes from U.S. stock markets between February 2000 and November 2004. Appendix II contains a detailed methodology of this analysis. 
Using these data, we selected a sample of stocks traded on the New York Stock Exchange (NYSE) and the NASDAQ Stock Market (NASDAQ) and calculated how the trading in these stocks had changed between a 1-year period before and an almost 4-year period after decimal pricing began. As part of this analysis, we examined the changes in spreads on these stocks (the relevant measure of trading costs for retail investors). We also undertook steps to assess the reliability of the data in the TAQ database by performing a variety of error checks on the data and using widely accepted methods for removing potential errors from the data. Based on these steps, we determined that these data were sufficiently reliable for our purposes. We also reviewed market and academic studies of decimal pricing’s impact on spreads. In addition, we interviewed officials from over 30 broker-dealers, the Securities and Exchange Commission (SEC), NASD, five alternative trading venues, eight stock markets, four trade analytics firms, a financial markets consulting and research firm, and four industry trade groups, as well as two academics. To analyze the impact of decimal pricing on institutional investors, we obtained and analyzed institutional trading cost data from three leading trade analytics firms—Plexus Group, Elkins/McSherry, and Abel/Noser—to determine how trading costs for institutional investors responded to decimalization. The Plexus Group data spanned the first quarter of 1999 through the second quarter of 2003, and the Elkins/McSherry and Abel/Noser data spanned the fourth quarter of 1998 through the end of 2004. These firms’ data do not include costs for trades that do not fully execute. To address this issue, we interviewed institutional investors on their experiences in filling large orders. We also undertook steps to assess the reliability of the trade analytics firms’ data by interviewing their staffs about the steps the firms follow to ensure the accuracy of their data.
Based on these discussions, we determined that these data were sufficiently reliable for our purposes. To identify all relevant research that had been conducted on the impact of decimal pricing on institutional investors’ trading costs, we searched public and private academic and general Internet databases and spoke with academics, regulators, and market participants. We identified 15 academic studies that met our criteria for scope and methodological considerations. Of these, 3 addressed trading costs for institutional investors and 12 addressed trading costs for retail investors. To determine the impact of decimal pricing on investors’ ability to trade, we interviewed officials at roughly 70 judgmentally selected agencies and firms, including representatives of 23 institutional investors with assets under management ranging from $2 billion to more than $1 trillion. The assets being managed by these 23 firms represented 31 percent of the assets under management by the largest 300 money managers in 2003. In addition, we discussed the impact on institutional investors during our interviews with broker-dealers, securities regulators, academics, alternative trading venues, stock exchanges, trade analytics firms, a financial market consulting and research firm, and industry trade groups. To assess the impact of decimal pricing on stock market intermediaries, we obtained data on the revenues of the overall securities industry from the Securities Industry Association (SIA). SIA’s revenue data come from the reports that each broker-dealer conducting business with public customers is required to file with SEC—the Financial and Operational Combined Uniform Single (FOCUS) reports. We used these data to analyze the trend in revenues for the industry as a whole as well as to identify the revenues associated with making markets in NASDAQ stocks. In addition, we obtained data on specialist broker-dealer revenues and participation rates and on executed trade sizes from NYSE.
For the number of specialist firms participating on U.S. markets, we sought data from NYSE and the other exchanges, including the American Stock Exchange (Amex), the Boston Stock Exchange, the Chicago Stock Exchange, the Pacific Exchange (PCX), and the Philadelphia Stock Exchange (Phlx). We obtained data on the number of market makers and the trend in executed trade size from NASDAQ. We discussed how these organizations ensure the reliability of their data with officials from the organizations where relevant and determined that their data were sufficiently reliable for our purposes. We also discussed the impact of decimals on market intermediaries during our interviews with officials from broker-dealers, securities regulators, alternative trading venues, stock exchanges, trade analytics firms, a financial market consulting and research firm, and industry trade groups, as well as experts from academia. To determine the impact of decimal pricing on the options markets, for both investors and intermediaries, we reviewed studies that four U.S. options exchanges—Amex, Chicago Board Options Exchange (CBOE), PCX, and Phlx—submitted to SEC in 2001 on the impact of decimalization on their markets. We also performed literature searches on the Internet for academic and other studies that examined the impact of decimal pricing on options markets. We also attempted to identify any sources or organizations that collected and analyzed options trading costs. To determine the impact on intermediaries, we interviewed officials of all six U.S. options exchanges—Amex, Boston Options Exchange, CBOE, International Securities Exchange, PCX, and Phlx—and various market participants (an independent market maker, designated primary market makers, specialists, a floor broker, hedge funds, and a retail investor firm) to ascertain their perspectives on the impact of the conversion to decimalization on them, investors, and the markets.
To determine the potential impact of reducing the minimum price tick in the options markets to a penny, we interviewed officials from the option exchanges and market participants. We also reviewed all comment letters that SEC had received on its concept release discussing potential changes in options market regulation, including lowering the minimum tick size in the options markets to a penny. We reviewed those letters posted on SEC’s Web site as of May 4, 2005. Sixteen of these letters specifically commented on the penny-pricing proposal. To assess the impact of decimal pricing, one of the activities we performed was to analyze data from the New York Stock Exchange (NYSE) Trade and Quote (TAQ) database spanning the 5-year period between February 2000 (before the conversion to decimal pricing) and November 2004 (after the adoption of decimal pricing) to determine how trading costs for retail investors changed and how various market statistics changed, such as the average number of shares displayed at the best prices before and after decimalization. Although maintained by NYSE, this database includes all trades and quotes that occurred on the various exchanges and the NASDAQ Stock Market (NASDAQ). Using this database, we performed an event-type study analyzing the behavior of trading cost and market quality variables for NYSE and NASDAQ stocks in pre- and postdecimalization environments. For each of our sample stocks, we used information on each recorded trade and quote (that is, intraday trade and quote data) for each trading day in our sample period. We generally followed the methods found in two recently published academic studies that examined the impact of decimalization on market quality and trade execution costs. 
In particular, we analyzed the pre- and postdecimalization behavior of several trading cost and market quality variables, including various bid-ask spread measures and price volatility, and we also analyzed quote and trade execution price clustering across NYSE and NASDAQ environments. We generally presented our results on an average basis for sample stocks in a given market in the pre- and postdecimalization periods; in some cases we separated sample stocks into groups based on their average daily trading volume and reported our results so that any differences across stock characteristics could be observed. Our analysis was based on intraday trade and quote data from the TAQ database, which includes all trade and quote data (but not order information) for all NYSE-listed and NASDAQ stocks, among others. TAQ data allowed us to study variables that are based on trades and quotes but did not allow us to study any specific effects on or make any inferences regarding orders or institutional trading costs. Our data consisted of trade and quote activity for all stocks listed on NYSE, NASDAQ, and the American Stock Exchange (Amex) from February 1, 2000, through November 30, 2004, excluding the month of September 2001. We focused on NYSE-listed and NASDAQ issues, as is typical in the literature, since the potential sample size from eligible Amex stocks tends to be much smaller. Our analysis compared 300 matched NYSE and NASDAQ stock pairs over the 12 months prior to decimalization and 12 months selected from the period spanning April 2001 through November 2004. In constructing our sample period, we omitted the months of February and March 2001 from consideration, because not all stocks were trading using decimal prices during the transition period. Because there were a host of concurrent factors impacting the equities markets around the time of and since the transition to decimal pricing, it is unlikely that any of our results can be attributed solely to decimalization. 
Any determination of statistically significant differences in pre- and postdecimalization trading cost and market quality variables was likely due to the confluence of decimalization and these other factors. Determining the best sample period presented a challenge because decimalization was implemented at different times on NYSE and NASDAQ. The transition to decimal pricing was completed on NYSE on January 29, 2001, while on NASDAQ it was completed on April 9, 2001. In addition, there were selected decimalization pilots on NYSE and NASDAQ prior to full decimalization on each. Researchers who have analyzed the transition to decimal pricing have generally divided up the pre- and postdecimalization sample periods differently depending on the particular focus of their research. Relatively short sample periods too close to the transition might suffer from unnatural transitory effects related to the learning process in a new trading environment, while sample periods farther from the implementation date or longer in scope might suffer from the influence of confounding factors. Analyses comparing different months before and after decimalization (e.g., December 2000 versus May 2001) might suffer from seasonal influences. We extended the current body of research, which includes studies by academic and industry researchers, exchanges and markets, and regulators, by including more recent time periods in our analysis, providing an expanded view of the trend in trade execution cost and market quality variables since 2000. However, to the extent that the influence of other factors introduced by expanding the sample window outweighed any influence of decimalization on trade cost and market quality measures, our results should be interpreted with caution. Our sample period spanned February 2000 through November 2004 (table 14). 
The predecimalization period included February 1, 2000, through January 19, 2001, and the postdecimalization period included April 23, 2001, through November 5, 2004, excluding September 2001 (due to the effects of the September 11 terrorist attacks). We selected one week from each month, allowing for monthly five-trading-day comparisons that avoided holidays and options expiration days, as well as controlling for seasonality issues. Our predecimalization period consisted of a 1-week sample from each of the 12 months, and our postdecimalization period consisted of twelve 1-week sample periods excerpted from April 2001 through November 2004, excluding the month of September 2001. Generally following the methods used by other researchers, we generated our list by including only common shares of domestic companies that were active over our period of interest and that were not part of decimalization pilot programs in effect before January 29, 2001. Specifically, we excluded preferred stocks, warrants, and lower class common shares (for example, Class B and Class C shares), as well as NASDAQ stocks with five-letter symbols not representing Class A shares. We then eliminated from consideration stocks with average share prices that were below $5 or above $150 over the February 2000 through December 2000 period. We also eliminated stocks for which there were no recorded trades on 10 percent or more of the trading days, to ensure sufficient data, leaving us with 981 NYSE-listed and 1,361 NASDAQ stocks in the potential sample universe. Our stock samples for the analysis ultimately consisted of 300 matched pairs of NYSE-listed and NASDAQ stocks. The NYSE-listed and NASDAQ stocks were matched on variables that are generally thought to help explain interstock differences in spreads. To the extent that our matching samples of NYSE-listed and NASDAQ stocks had similar attributes, any differences in spreads between the groups should have been due to reasons other than these attributes.
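The screening criteria above translate into a simple filter. The record layout in this sketch (dictionaries with these particular keys) is an assumption for illustration; only the thresholds come from the text.

```python
# Illustrative filter for the sample screens described above: keep
# stocks whose average share price is between $5 and $150 and that had
# recorded trades on more than 90 percent of trading days. The record
# layout is an assumption.

def passes_screen(stock, min_price=5.0, max_price=150.0,
                  max_no_trade_frac=0.10):
    price_ok = min_price <= stock["avg_price"] <= max_price
    no_trade_frac = stock["days_without_trades"] / stock["trading_days"]
    return price_ok and no_trade_frac < max_no_trade_frac

candidates = [
    {"symbol": "AAA", "avg_price": 19.66, "trading_days": 230, "days_without_trades": 3},
    {"symbol": "BBB", "avg_price": 3.10,  "trading_days": 230, "days_without_trades": 0},
    {"symbol": "CCC", "avg_price": 42.00, "trading_days": 230, "days_without_trades": 40},
]
kept = [s["symbol"] for s in candidates if passes_screen(s)]  # only "AAA" survives
```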
The attributes we considered were (1) share price, (2) share price volatility, (3) number of trades, and (4) trade size. For the matching procedure, daily data from February 2000 through December 2000 were used and averages were taken over this sample period. Share price was measured by the mean value of the daily closing price and volatility by the average of the logarithm of the high-low intraday price range. The number of trades was measured by the average daily number of trades, and average trade size was measured as the average daily trading volume. These factors have different measurement units, implying that they could not be directly converted into a single measure of similarity. To develop a combined measure of similarity we first had to standardize the measures of all factors so that their average values and differences in their averages were measured on comparable scales. Once standardized measures of averages and differences were developed, we were able to sum the four measurements into a total measure of similarity and identify matched pairs of stocks. Comparability was assured because all averages and differences were divided by the standard deviation of the measure of each factor on the NYSE. Specifically, the combined measure of similarity (CMS) for NYSE stock i and NASDAQ stock j was computed as

CMS(i,j) = Σk |Yk^N(i) − Yk^Q(j)| / sk^N,

in which the superscripts N and Q refer to NYSE and NASDAQ, respectively; i denotes the NYSE stock and j the NASDAQ stock being matched; Yk represents one of the four stock attributes for each stock; and sk^N is the sample standard deviation of attribute k for the entire NYSE sample. In the matching algorithm, each of the attributes was weighted equally. Unlike the matching algorithms in the two aforementioned papers, we divided each stock attribute difference by the sample standard deviation of that attribute for the entire NYSE sample in order to create unitless measures that were normalized relative to the overall NYSE attributes. Ultimately, for each NYSE stock we selected the NASDAQ stock with the smallest CMS. Chung et al. (2004) used a sequential matching algorithm, as is common in the literature.
To start, they considered an NYSE stock and computed its CMS with all NASDAQ stocks; they matched that NYSE stock to the NASDAQ stock with the lowest CMS. Then they considered the next NYSE stock, but the NASDAQ stock that matched the prior NYSE stock was no longer considered among the possible universe of matches for this or any subsequent NYSE stock. The outcome of this type of algorithm is path dependent—the order in which the NYSE stocks are taken influences the ultimate list of unique matches. We employed another method that avoided this path dependence—ensuring an optimal match for each stock—but also allowed for the possibility of duplicate, nonunique NASDAQ matches. For the 981 NYSE-listed stocks, there were 293 NASDAQ stocks that provided the best matches. We chose the 300 best CMS matched pairs, which consisted of 300 NYSE and 186 unique NASDAQ stocks. Of these 186 NASDAQ stocks, 114 were best matches for one NYSE-listed stock, 45 were best matches for two NYSE-listed stocks, 19 were best matches for three NYSE-listed stocks, 5 were best matches for four NYSE-listed stocks, 1 was a best match for five NYSE-listed stocks, and 2 were best matches for seven NYSE-listed stocks. In the subsequent analysis, each NASDAQ stock was weighted according to the number of best matches it yielded. For example, if a NASDAQ stock provided the best match for two NYSE-listed stocks, it was counted twice in the overall averages for NASDAQ. The pairings resulting from the CMS minimization algorithm were well matched. The average share price for the 300 NYSE-listed (NASDAQ matching) stocks was $19.66 ($19.56), the average daily volume was 132,404 (127,107), the average number of trades per day was 121 (125), and the measure of daily volatility was 0.018 (0.018). 
In terms of average share price, the 300 matching-pair stocks were fairly representative of the full sample of matching stocks, as well as of the potential sample universe of stocks, as illustrated in table 15 and figure 15. However, the resulting matched-pairs sample tended to have more lower-priced stocks. In terms of average daily trading volume, the matched-pairs sample underrepresented higher-volume stocks, which likely biased our results toward reporting larger spreads (see table 16 and fig. 16). Once we had defined our stock sample, we first had to filter the trades and quotes data for each sample stock. This involved discarding records with TAQ-labeled errors (such as canceled trade records and quote records identified with trading halts), identifying and removing other potentially erroneous quotation and trade records (such as stale quotes or trade or quote prices that appeared aberrant), and confining the data to records between 9:30 a.m. and 4 p.m. We also had to determine the national best bid and offer quotes in effect at any given moment from all quoting market venues—the NBBO quotation. In general, for a given stock the best bid (offer) represents the highest (lowest) price available from all market venues providing quotes to sellers (buyers) of the stock. The NBBO quotes data for a given stock were used to compute quoted bid-ask spreads, quote sizes, and share prices, as well as intraday price volatility for that stock on a daily basis. They were also used independently to document any quote clustering activity in that stock. The trades data for a particular stock were used to analyze daily price ranges and trade execution price clustering. For each stock, the trades and NBBO quotes data were used to compute effective bid-ask spreads, which rely on both quotes and trades data. The TAQ Consolidated Quotes (CQ) file covers most activity in major U.S.
market centers but does not include foreign market centers. A record in the CQ file represents a quote update originating in one of the included market centers: Amex, the Boston Stock Exchange, the Chicago Stock Exchange, electronic communication networks (ECN) and alternative trading systems (ATS), NASDAQ, the National Stock Exchange, NYSE, the Pacific Stock Exchange, and the Philadelphia Stock Exchange. It does not per se establish a comprehensive marketwide NBBO quote, however. A quote update consists of a bid price and the number of shares for which that price is valid and an offer price and the number of shares for which that price is valid. In general, a quote update reflects quote additions or cancellations. The record generally establishes the best bid and offer prevailing in a given market center. Normally, a quote from a market center is regarded as firm and valid until it is superseded by a new quote from that center—that is, a quote update from a market center supersedes that market center's previous quotes and establishes its latest, binding quotes. Specifying the NBBO involved determining the best bid and offer quotes available—at a particular instant, the most recent valid bids and offers posted by all market centers were compared and the highest bid and the lowest offer were selected as the NBBO quotes. The national best bid (NBB) and national best offer (NBO) are not necessarily from the same market center or posted concurrently, and the bid and offer sizes can be different. Bessembinder (2003) outlined a general method for determining the NBBO. First, the best bid and offer in effect for NYSE-listed stocks among individual NASDAQ dealers (as indicated by the MMID data field) was assessed and designated as the NASDAQ bid and offer. Then, the best bid and offer in effect across the NYSE, the five regional exchanges, and NASDAQ were determined and designated as the NBBO quotations for NYSE-listed stocks.
For NASDAQ stocks, quote records from NASDAQ market makers reflect the best bid and offer across these participants (collectively classified as "T" in the TAQ data). Competing quotes are issued from other markets (e.g., the Pacific Stock Exchange) as well as NASDAQ's SuperMontage Automated Display Facility, which reflects the quotes from most ECNs. We required additional details in constructing the NBBO, since quote records from competing market makers and market centers can have concurrent time stamps and there can be multiple quotes from the same market center recorded with the same time stamp. Moreover, identical bid or offer prices can be quoted by multiple market makers. To address these complications, we relied on language offered in SEC's Regulation NMS proposal, which defined the NBBO by ranking all such identical bids or offers first by size (giving the highest ranking to the bid or offer associated with the largest size) and then by time (giving the highest ranking to the bid or offer received first in time). In our algorithm, the NBB (NBO) is located by comparing the existing bids (offers) from all venues. The NBBO is updated with each instance of a change in the NBB or NBO. Each NBBO quotation was weighted by its duration (i.e., the time for which it was effective) and used to compute a sample week time-weighted average NBBO quotation for the relevant market, which was reported on a volume-weighted (relative to total sample market trading volume) basis. Ultimately, these averages were compared across markets and across pre- and postdecimalization periods. The same general techniques were used in computing effective spreads, which were determined by comparing trade executions with NBBO quotations. For analysis of trades data (e.g., in computing price ranges), a simple average over all stocks in a given market was computed.
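The core of the NBBO construction can be sketched as follows, using hypothetical venues and quote updates; for brevity, the sketch omits quote sizes and the size/time tie-breaking used in the actual algorithm.

```python
# Hypothetical quote updates: (time, venue, bid, offer). A venue's new quote
# supersedes its previous one; at each instant the NBB is the highest
# outstanding bid and the NBO the lowest outstanding offer across venues.
updates = [
    (1, "NYSE", 20.00, 20.10),
    (2, "Pacific", 20.02, 20.12),
    (3, "NYSE", 20.01, 20.08),
]

latest = {}        # most recent valid (bid, offer) per venue
nbbo_history = []  # (time, NBB, NBO) recorded after each update
for t, venue, bid, offer in updates:
    latest[venue] = (bid, offer)
    nbb = max(b for b, _ in latest.values())  # highest outstanding bid
    nbo = min(o for _, o in latest.values())  # lowest outstanding offer
    nbbo_history.append((t, nbb, nbo))
```

After the third update, the NBB ($20.02) comes from the Pacific venue while the NBO ($20.08) comes from NYSE, illustrating that the two sides of the NBBO need not come from the same market center.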
In analyzing volatility, intraday returns were measured for each stock based on continuously compounded percentage changes in quotation midpoints, which were recorded between 10 a.m. and 4 p.m. The standard deviation of the intraday returns was then computed for each stock, and the cross-sectional median across all stocks was taken. In assessing clustering, the frequencies of trades and quotes at pennies, nickels, dimes, and quarters were determined for each market on an aggregate basis. In reporting any differences between the pre- and postdecimalization sample periods in the trade execution cost and market quality measures that we analyzed, statistical significance was assessed based on cross-sectional variation in the stock-specific means. With the exception of volatility measures, statistical significance was assessed using a standard t-test for equality of means. Since average volatility measures do not conform well to the t-distribution, median volatility was reported for each market and the Wilcoxon rank sum test used to assess equality. TAQ data allowed us to study variables that are based on trades and quotes but did not allow us to study any specific effects on or make any inferences regarding orders or institutional trading costs. This is an important limitation because the transition to decimal pricing may have affected retail traders, whose generally smaller orders tend to be executed in a single trade, differently than institutional traders. Use of TAQ data implicitly assumes that each trade record reflects a unique order that is filled, so our analysis could not address any impact of a change in how orders are filled or the costs associated with such a change. We reported the pre- and postdecimalization behavior of quoted bid-ask spreads and effective spreads. Beyond measures of trade execution cost, market quality is multidimensional.
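A minimal sketch of this volatility measure follows, using hypothetical intraday quote midpoints for three invented stocks: the standard deviation of continuously compounded midpoint returns is computed per stock, and the cross-sectional median is taken across stocks.

```python
import math
import statistics

# Hypothetical intraday NBBO midpoints (10 a.m.-4 p.m.) for three stocks.
midpoints = {
    "AAA": [20.00, 20.10, 20.05, 20.20, 20.15, 20.25, 20.30],
    "BBB": [50.00, 50.05, 50.10, 50.00, 50.10, 50.15, 50.05],
    "CCC": [10.00, 10.20, 9.90, 10.10, 10.30, 10.00, 10.05],
}

def intraday_vol(prices):
    # Continuously compounded returns between successive midpoints,
    # then the standard deviation of those returns.
    rets = [math.log(b / a) for a, b in zip(prices, prices[1:])]
    return statistics.stdev(rets)

vols = {s: intraday_vol(p) for s, p in midpoints.items()}
# Average volatility does not conform well to the t-distribution,
# so the cross-sectional median is reported for the market.
market_vol = statistics.median(vols.values())
```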
Possible adverse effects of decimalization on market quality included increased trade execution costs for large traders, increased commissions to offset smaller bid-ask spreads, slower order handling and trade executions, decreased market depth, and increased price volatility. The TAQ data allowed measurement of quotation sizes and price volatility, which we reported. We also analyzed quote clustering, which reflects any unusual frequency with which prices tend to bunch at multiples of nickels, for example. We generally presented our results on an average basis for a given market in the pre- and postdecimalization periods; we also reported the results for sample stocks grouped by average daily trading volume. Average pre- and postdecimalization bid-ask spreads were calculated in cents per share and basis points (that is, the spread in cents relative to the NBBO midpoint) using the NBBO quote prices. The average spread was obtained in the following way. First, each NBBO quote for a given stock was weighted by the elapsed time before it was updated—its duration—on a given day of a sample week relative to the total duration of all NBBO quotes for that stock in that sample week. Next, the duration-weighted average over the five trading days in that sample period for that stock was used to compute the average across all stocks in a given market for that week; ultimately, a volume-weighted average was computed. For the full twelve-sample-week period, a volume-weighted average was also computed. The effective bid-ask spread—how close the execution price of a trade is relative to the quote midpoint—is generally considered to be the most relevant measure of trade execution cost, as it allows measurement of trades that execute at prices not equal to the bid or ask. In keeping with standard practice, we measured the effective spread for a trade as twice the absolute difference between the price at which a trade was executed and the midpoint of the contemporaneous NBBO quote.
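The duration weighting at the heart of the quoted-spread calculation can be illustrated with a small sketch. The quote durations and prices are hypothetical, and for simplicity the sketch averages over a single stock and day rather than across a full sample week and market.

```python
# Hypothetical NBBO quotes for one stock: (seconds in force, bid, offer).
quotes = [(120, 20.00, 20.05), (60, 20.01, 20.04), (300, 20.00, 20.06)]

total_time = sum(d for d, _, _ in quotes)

# Each quoted spread is weighted by the time for which the quote was effective.
spread_cents = sum(d * (off - bid) * 100 for d, bid, off in quotes) / total_time

# The same spread relative to the quote midpoint, expressed in basis points.
spread_bps = sum(d * (off - bid) / ((off + bid) / 2) * 10_000
                 for d, bid, off in quotes) / total_time
```

Here the 5-cent, 3-cent, and 6-cent spreads are weighted by 120, 60, and 300 seconds, respectively, giving a duration-weighted average of 5.375 cents rather than the simple average of about 4.7 cents.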
Suppose, for example, that the NBB is $20.00 and the NBO is $20.10, so that the NBBO midpoint is $20.05. If a trade executes at a price of $20.05, then the effective spread is zero because the trade executed at the midpoint of the spread—the buyer of the stock paid $0.05 per share less than the ask price, while the seller received $0.05 per share more than the bid price. If a trade executes at $20.02 with the same NBBO prices, the effective spread is $0.06—the buyer of the stock paid $0.08 per share less than the ask price, while the seller received $0.02 per share more than the bid price. Effective spreads were computed in cents per share and in basis points. Smaller quote sizes could reflect a decrease in liquidity supply, which in turn could be associated with increased volatility. The size of each NBBO quote was weighted by its duration and used to compute a volume-weighted average over each sample week as well as across all sample weeks. A reduction in the tick size could lead to a decline in liquidity supply, which in turn could create more volatile prices. Intraday returns were measured for each stock based on continuously compounded percentage changes in quotation midpoints, which were recorded on an hourly basis between 10 a.m. and 4 p.m. The continuously compounded return over 6 hours, from 10 a.m. to 4 p.m., was also computed. The standard deviation (a measure of dispersion around the average) of the intraday returns was then computed for each stock, and the cross-sectional median (the middle of the distribution) was taken over all stocks in a given market. As another measure of price volatility, we also considered how a stock's daily price range (i.e., the highest and lowest prices at which trades were executed) may have changed following the implementation of decimal pricing, as some have claimed that prices move over a wider range during the day under decimal pricing.
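The effective spread calculation, including the worked example above, can be expressed as a short sketch:

```python
def effective_spread(trade_price, nbb, nbo):
    # Twice the absolute difference between the execution price and the
    # midpoint of the contemporaneous NBBO quote (implied round-trip cost).
    midpoint = (nbb + nbo) / 2
    return 2 * abs(trade_price - midpoint)

# NBB = $20.00 and NBO = $20.10, so the midpoint is $20.05.
at_midpoint = effective_spread(20.05, 20.00, 20.10)   # trade at midpoint: ~$0.00
inside_quote = effective_spread(20.02, 20.00, 20.10)  # trade at $20.02: ~$0.06
```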
We computed the equal-weighted average of each stock's daily price range and then computed the average over all stocks in a given market. To account for potentially varying price levels across the pre- and postdecimalization sample periods, we computed the price range in both cents per share and relative to the midpoint of the first NBBO quote for each day. Decimalization provides a natural experiment to test whether market participants prefer to trade or quote at certain prices when their choices are unconstrained by regulation. Theory suggests that if price discovery is uniform, realized trades should not cluster at particular prices. The existence of price clustering following decimalization could suggest a fundamental psychological bias by investors for round numbers and that there may be only minor differences between the transaction prices that would prevail under a tick size of 5 cents and those observed under decimal pricing. For quotes, according to competing hypotheses in the literature, clustering may be due to dealer collusion, or it may simply be a natural phenomenon—arising as protection against informed traders, as compensation for holding inventory, or to minimize negotiation costs. For our analysis, we computed the frequency of trade executions and quotes across the range of price points, but we did not attempt to determine the causes of any clustering. Consistent with generally accepted government auditing standards, we assessed the reliability of computer-processed data that support our findings. To assess the reliability of TAQ data, we performed a variety of error checks on data from a random sample of stocks and dates. This involved comparing aggregated intraday data with summary daily data and scanning for outliers and missing data.
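The clustering tabulation described above reduces to counting the penny portion of each price, as in this sketch over a hypothetical set of trade prices:

```python
from collections import Counter

# Hypothetical trade (or quote) prices; the analysis tallies how often the
# penny portion of each price lands on nickel, dime, or quarter multiples.
prices = [20.05, 20.10, 20.25, 20.11, 20.30, 20.05, 20.50, 20.07]

def cents(price):
    # Penny portion of the price (0-99), with rounding to avoid float error.
    return round(price * 100) % 100

counts = Counter(cents(p) for p in prices)
share_on_nickels = sum(n for c, n in counts.items() if c % 5 == 0) / len(prices)
share_on_dimes = sum(n for c, n in counts.items() if c % 10 == 0) / len(prices)
share_on_quarters = sum(n for c, n in counts.items() if c % 25 == 0) / len(prices)
```

In this invented sample, 6 of the 8 prices fall on nickel multiples, so share_on_nickels is 0.75; under uniform use of all 100 price points, roughly 0.20 would be expected.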
In addition, since the TAQ database is in widespread use by researchers and has been for several years, we were able to employ additional methods for discarding potentially erroneous data records following widely accepted methods (e.g., we discarded quotation information in which a price or size was reported as negative). We assessed the reliability of our analysis of the TAQ data by performing several executions of the programs using identical and slight modifications of the program coding. Program logs were also generated and reviewed for errors. As discussed in the body of this report, institutional investors’ trading costs are commonly measured in cents per share and basis points (bps). Cents per share is an absolute measure of cost based on executing a single share. Basis points—measured in hundredths of a percentage point—show the absolute costs relative to the stock’s average share price. For example, for a stock with a share price of $20, a transaction cost of $.05 would be 0.25 percent or 25 bps. Costs reported in terms of basis points can show changes resulting solely from changes in the level of stock prices—if the price of the $20 stock falls to $18, the $.05 transaction cost would now be almost 0.28 percent or 28 bps. However, many organizations track costs using basis points, and in this appendix we present the results of our institutional trading cost analysis in basis points. Analysis of the multiple sources of data that we collected generally indicated that institutional investors’ trading costs had declined since decimal prices were implemented. Specifically, NYSE converted to decimal pricing on January 29, 2001, and NASDAQ completed its conversion on April 9, 2001. We obtained data from three leading firms that collect and analyze information about institutional investors’ trading costs. 
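The cents-to-basis-points conversion used throughout this appendix is a one-line calculation, shown here with the $0.05-per-share example from the text:

```python
def cost_in_bps(cost_per_share, share_price):
    # Basis points: hundredths of a percentage point of the share price.
    return cost_per_share / share_price * 10_000

# The $0.05-per-share example: the same cents cost is more basis points
# when the share price falls.
at_20 = cost_in_bps(0.05, 20.0)   # 25 bps
at_18 = cost_in_bps(0.05, 18.0)   # ~27.8 bps
```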
These trade analytics firms (Abel/Noser, Elkins/McSherry, and Plexus Group) obtain trade data directly from institutional investors and brokerage firms and calculate trading costs, including market impact costs (the extent to which the security changes in price after the investor begins trading), typically for the purpose of helping investors and traders limit the costs of trading. These firms also aggregate client data so as to approximate total average trading costs for all institutional investors. Generally, the client base represented in the aggregate trade cost data is sufficiently broad based that a firm's aggregate cost data can be used to make generalizations about the institutional investor industry. Although the firms use different methodologies, the data from all three firms uniformly showed that costs had declined since decimal pricing was implemented. Our analysis of data from the Plexus Group showed that costs declined on both NYSE and NASDAQ during the 2-year period after these markets converted to decimal pricing. Plexus Group uses a methodology that analyzes various components of institutional investor trading costs, including the market impact of investors' trading. Total trading costs declined by about 32 percent for NYSE stocks, falling from about 82 bps to 56 bps (fig. 17). For NASDAQ stocks, the decline was about 25 percent, from about 102 bps to about 77 bps. As can be seen in figure 17, the decline in trading costs began before both markets implemented decimal pricing, which indicates that other factors in addition to decimal pricing, such as the 3-year declining stock market, were also affecting institutional investors' trading during this period. An official from a trade analytics firm told us that the spike in costs that preceded the decimalization of NASDAQ stocks correlated with the pricing bubble that technology sector stocks experienced in the late 1990s and early 2000s.
An official from another trade analytics firm explained that trading costs increased during this time because when some stocks' prices would begin to rise, other investors—called momentum investors—would begin making purchases and cause prices for these stocks to move up even faster. As a result, other investors faced greater than usual market impact costs when also trading these stocks. In general, trading during periods when stock prices are either rapidly rising or falling can be very costly. According to our analysis of the Plexus Group data, all of the decline in trading costs for NYSE stocks and NASDAQ stocks was caused by decreases in the costs resulting from market impact and order delay. Together, the reduction in these two components accounted for 29.1 bps, more than the entire net decline, with delay costs representing 20.6 bps (or about 71 percent) in the approximately 2 years following the implementation of decimal pricing and 1-cent ticks on the NYSE. However, commissions increased 3 bps, which led total trading costs to decline 26.1 bps (fig. 18). Figure 18 also shows that market impact and delay costs accounted for all of the decline in total NASDAQ trading costs. For example, market impact and delay costs declined 40.9 bps between the second quarter of 2001 and the second quarter of 2003. However, overall trading costs declined by only 24.4 bps, which is 16.5 bps less than the decline in market impact and delay costs. According to Plexus Group data, overall costs would have declined further if not for increases in commission costs for NASDAQ stocks, the only cost component that increased after NASDAQ converted to decimal pricing and 1-cent ticks. As shown in figure 18, commissions that market intermediaries charged for trading NASDAQ stocks increased 16.5 bps from the second quarter of 2001 to the second quarter of 2003.
Industry representatives told us these increases reflect the evolution of the NASDAQ brokerage industry from trading as principals, in which the compensation earned by market makers was embedded in the final trade price, to an agency brokerage model, in which broker-dealers charge explicit commissions to represent customer orders in the marketplace. Analysis of data from the other two trade analytics firms from which we obtained data, Elkins/McSherry and Abel/Noser, also indicated that institutional investor trading costs varied but declined following the decimalization of U.S. stock markets in 2001. Because these two firms' methodologies do not include measures of delay, which the Plexus Group data show can be significant, analysis of data from these two firms results in trading cost declines of a lower magnitude than those indicated by the Plexus Group data analysis. Nevertheless, the data we analyzed from Elkins/McSherry showed that total costs for NYSE stocks declined about 20 percent between the first quarter of 2001 and year-end 2004, from about 29 bps to about 24 bps. Analysis of Abel/Noser data indicated that total trading costs for NYSE stocks declined 25 percent, from 20 bps to 15 bps, between year-end 2000 and year-end 2004 (fig. 19). Our analysis of these firms' data also indicated that total trading costs for NASDAQ stocks either declined in basis points or were roughly flat. For example, our analysis of the Elkins/McSherry data showed that total trading costs for NASDAQ stocks dropped by roughly 13 percent, from about 38 bps to about 32 bps, between the second quarter of 2001, when that market decimalized, and year-end 2004. Analysis of the Abel/Noser data indicated that total trading costs for NASDAQ stocks increased nearly 5 percent during that period, from 21 bps to 22 bps (fig. 20). This increase in trading cost can possibly be explained by the approximately 50 percent decline in average share price over the period.
Similar to our analysis of the Plexus Group data, our analysis of the Elkins/McSherry and Abel/Noser data also indicated that reductions in market impact costs accounted for the vast majority of the overall reductions for NYSE stocks (fig. 21). Analysis of the Elkins/McSherry data indicated that market impact costs declined 7.6 bps during this period, accounting for 95 percent of the total trading cost decline. The 3 bps reduction in market impact costs identified in the Abel/Noser data represented the entire trading cost reduction for NYSE stocks. Reductions in market impact costs also explain virtually the entire decline in total trading costs for NASDAQ stocks in the Elkins/McSherry data and all of the decline in the Abel/Noser data. In both firms' data, these reductions would have produced even larger total declines had commissions for NASDAQ stocks not increased since 2001. Market impact costs declined 22.3 bps (about 64 percent) according to our analysis of the Elkins/McSherry data and 14 bps (about 74 percent) according to our analysis of the Abel/Noser data (fig. 22). However, during this period, commissions charged on NASDAQ stock trades increased by 16.9 bps (approximately a sixfold increase) as measured by Elkins/McSherry and by 15 bps (about a fifteenfold increase) according to Abel/Noser. Data from a fourth firm, ITG, which recently began measuring institutional trading costs, also indicate that such costs have declined. This firm began collecting data from its institutional clients in January 2003. Like the other trade analytics firms, its data are broad based, representing about 100 large institutional investors and about $2 trillion worth of U.S. stock trades. ITG's measure of institutional investor trading cost is composed solely of market impact costs and does not include explicit costs, such as commissions and fees, in its calculations.
Although changes in ITG’s client base for its trade cost analysis service prevented direct period to period comparisons, an ITG official told us that its institutional investor clients’ trading costs have been trending lower since 2003. As part of our analysis of the Trade and Quotes database, we also examined how quoted and effective spreads changed as a percentage of stock prices and also examined whether the extent to which quotes clustered on particular prices changed since decimal pricing began. In addition to measuring spreads in cents per share, spreads are also frequently measured in basis points, which are 1/100 of a percent. We found that spreads generally declined when measured in basis points similar to our analysis measured in cents. Reporting spreads in basis points potentially accounts for changes in the general price level of our sample stocks, which could impact our results reported in cents per share. We found that both quoted and effective spreads generally declined when measured relative to quote midpoints as they did when measured simply in cents (see tables 17 and 18). We also analyzed the extent to which quote and trade execution prices cluster at particular price points, a phenomenon known as clustering. Clustering, particularly on multiples of nickels, dimes, and quarters, has been well documented by various researchers, and various reasons are cited to explain why all possible price points are not used with equal frequency. We extended the general body of research to include how clustering may have changed after decimalization, but we do not attempt to explain its causes. We generally found that prices tend to cluster on certain price points—especially on nickel, dime, and quarter multiples—but this tendency has been lessening over time. 
We provide examples of clustering in national best bid quote prices recorded for our sample of NYSE-listed stocks, but the same general features were found in national best offer quote and trade execution prices for both NYSE-listed and NASDAQ stocks. Figure 23 illustrates quote price clustering (using national best bid prices) over our entire postdecimalization sample period, which included 12 sample weeks from April 2001 through November 2004. Prices are observed generally clustering at nickel increments. We also analyzed how clustering may have changed over time. Using the same data as above, we separated the data by sample week. Our results, displayed in figure 24, depict a general decline in the use of price increments that are multiples of a nickel. This may suggest that traders have been adapting their strategies to the penny environment and are becoming increasingly comfortable with using various price points, which may be a result of the increased use of electronic trading. It may also be the case that traders are making use of the finer price grid to gain execution priority. In addition to the individuals named above, Cody Goebel, Emily Chalmers, Jordan Corey, Joe Hunter, Austin Kelly, Mitchell Rachlis, Carl Ramirez, Omyra Ramsingh, Kathryn Supinski, and Richard Vagnoni made key contributions to this report.

Ask: The lowest price at which someone is willing to sell a security at a given time.

Basis point: A basis point is equal to 1/100 of 1 percent.

Bear market: A market in which stock prices decline over a sustained period of time.

Best execution: The obligation of broker-dealers to seek to obtain the best terms reasonably available under the circumstances for customer orders.

Bid-ask spread: The difference between the price at which a market maker is willing to buy a security (bid) and the price at which the firm is willing to sell it (ask). The spread narrows or widens according to the supply and demand for the security being traded.
The spread is what the market maker retains as compensation (or income) for his/her effort and risk.

Bid: The highest price at which someone is willing to buy a security at a given time.

Block trade: Represents the purchase or sale of (1) a large quantity of stock, generally 10,000 shares or more, or (2) shares valued at $200,000 or more in total market value.

Broker: An individual or firm who acts as an intermediary (agent) between a buyer and seller and who usually charges a commission.

Bull market: A market in which stock prices rise over a sustained period of time.

Call option: A contract granting the right to buy a fixed amount of a given security at a specified price within a limited period of time.

Commission: A fee paid to a broker for executing a trade based on the number of shares traded or the dollar amount of the trade.

Dealer: An individual or firm in the business of buying and selling securities for his or her own account (principal) through a broker or otherwise.

Decimal pricing: The quoting and trading of securities in dollars and cents ($2.25) instead of fractions ($8 1/8).

Delay cost: A type of market impact cost that occurs as the result of changes in the price of the stock being traded between the time institutional investors' portfolio managers direct their traders to buy and sell stock and the moment these orders are released to brokers.

Effective spread: Measures the trading costs relative to the midpoint of the quoted spread at the time the trade occurred. It is defined as twice (to reflect the implied roundtrip cost) the difference between the trade price and the midpoint of the most recent bid and ask quotes. It reflects the price actually paid or received by customers. It is considered a better measure of execution costs than quoted spreads because orders do not always execute exactly at the bid or offer price.

Electronic communication network (ECN): An electronic trading system that automatically matches buy and sell orders at specified prices. It is a type of alternative trading system—an automated market in which orders are centralized, displayed, matched, and otherwise executed.
Exchange: An organized marketplace (stock exchange) in which members of the exchange, acting both as brokers and dealers, trade securities. Through exchanges, brokers and dealers meet to execute orders from individual and institutional investors and to buy and sell securities.

Floor-based exchange: A stock exchange (like the American Stock Exchange and the New York Stock Exchange) where buyers and sellers meet through an intermediary—called a specialist. A specialist operates in a centralized location or "floor" and primarily matches incoming orders to buy and sell each stock. There is only one specialist designated for a firm or several firms who is assigned to oversee the market for those stocks.

Floor broker: A member of an exchange who is an employee of a member firm and executes orders, as agent, on the floor of the exchange for their clients.

Inside quote: The highest bid and lowest offer being quoted among all the market makers competing in a security.

Intermarket Trading System (ITS): An electronic trading linkage between the major exchanges (stock and option) and other trading centers. The system allows brokers to seek best execution in any market within the system.

Institutional investor: An organization whose primary purpose is to invest its own assets or those held in trust by it for others and typically buys and sells large volumes of securities. Examples of such organizations include mutual funds, pension funds, insurance companies, and charitable organizations.

Limit order: An order to buy or sell a specified number of shares of a security at or better than a customer-specified price. Limit orders supply additional liquidity to the marketplace. A limit order book is a specialist's record of unexecuted limit orders.

Liquidity: The ease with which the market can accommodate large volumes of securities trading without significant price changes.

Listed stock: The stock of a company that is listed on a securities exchange.

Market depth: The number of shares available for trading around the best bid and ask prices.

Market impact: The degree to which an order affects the price of a security.
A dealer that maintains a market in a given security by buying or selling securities at quoted prices. An order to buy or sell a stated amount of a security at the best price available when the order reaches the marketplace. A market for securities traded “over-the-counter” through a network of computers and telephones, rather than on a stock exchange floor. NASDAQ is an electronic communications system in which certain NASD member broker-dealers act as market makers by quoting prices at which they are willing to buy or sell securities for their own accounts or for their customers. NASDAQ traditionally has been a “dealer” market in which prices are set by the interaction of dealer quotes. Defined as the highest bid and lowest ask across all U.S. markets providing quotes for an individual stock. SEC rules that require (1) the display of customer limit orders that improve certain over-the-counter (OTC) market makers’ and specialists’ quotes or add to the size associated with such quotes (Rule 11Ac1-4 (Display Rule)); (2) OTC market makers and specialists who place priced orders with ECNs to reflect those orders in their published quotes (Quote Rule); and (3) OTC market makers and specialists that account for more than 1 percent of the volume in any listed security to publish their quotations for that security (Mandatory Quote Rule). The cost from delaying execution to lessen market impact, or not be able to make the execution at all, or abandoning part of it because the market has turned against the strategy. Occurs when an order is executed at better than the quoted price. A contract granting the right to sell a fixed amount of a given stock at a specified price within a limited period of time. The highest bid to buy and the lowest offer to sell any stock at a given time. Where a given price quote is only visible for a brief moment on the display screen. Measures the cost of executing a simultaneous buy and sell order at the quoted prices. 
It is the simplest measure of trade execution cost (or trading cost). One who trades securities for himself/herself or who gives money to any institution, such as a mutual fund, to invest for himself/herself. The federal regulatory agency created by the Securities Exchange Act of 1934 that is responsible for ensuring investor protection and market integrity in the U.S. securities markets. Members of an exchange who handle transactions on the trading floor for the stocks for which they are registered and who have the responsibility to maintain an orderly market in these stocks. They do this by buying or selling a stock on their own accounts when there is a temporary disparity between supply and demand for the stock. The practice of improving the best price by a penny or less in an attempt to gain execution priority. A financial instrument that signifies an ownership position in a company. The smallest price difference by which a stock price can change (up or down). The execution of a customer order in a market at a price that is inferior to a price displayed (or available) in another market. The cost for executing the trade (brokerage commission, fees, market impact). The degree to which trade and quotation information (price and volume) is available to the public on a current basis. A measure of the fluctuation in the market price of a security. The number of shares traded in a security or an entire market during a given period—generally on a daily basis. It is a measure of liquidity in a market. A trading benchmark used to evaluate the performance of institutional traders. It is the average price at which a given day’s trading in a given security took place. VWAP is calculated by adding up the dollars traded for every transaction (price times shares traded) and then dividing by the total shares traded for the day. The theory is that if the price of a buy trade is lower than the VWAP, then it is a good trade. 
The opposite is true if the price is higher than the VWAP. Securities Markets: Preliminary Observations on the Use of Subpenny Pricing. GAO-04-968T. Washington, D.C.: July 22, 2004. Securities Pricing: Trading Volumes and NASD System Limitations Led to Decimal-Trading Delay. GAO/GGD/AIMD-00-319. Washington, D.C.: September 20, 2000. Securities Pricing: Progress and Challenges in Converting to Decimals. GAO/T-GGD-00-96. Washington, D.C.: March 1, 2000. Securities Pricing: Actions Needed for Conversion to Decimals. GAO/T- GGD-98-121. Washington, D.C.: May 8, 1998.
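The quoted spread, effective spread, and VWAP measures defined in the glossary above lend themselves to a short worked computation. The sketch below uses hypothetical quotes and trades (none of the figures come from the report) to show how each measure is derived.

```python
# Hypothetical quotes and trades, for illustration only.
bid, ask = 20.00, 20.02            # best bid and offer
quoted_spread = ask - bid          # cost of a simultaneous buy and sell at the quotes
midpoint = (bid + ask) / 2

# Effective spread: twice the difference between the trade price and the
# quote midpoint, reflecting the price actually paid or received.
trade_price = 20.015               # a buy executed inside the quoted spread
effective_spread = 2 * abs(trade_price - midpoint)

# VWAP: total dollars traded divided by total shares traded for the day.
trades = [(20.01, 500), (20.02, 1000), (20.00, 300)]   # (price, shares)
dollars = sum(price * qty for price, qty in trades)
total_shares = sum(qty for _, qty in trades)
vwap = dollars / total_shares

print(round(quoted_spread, 4))     # 0.02
print(round(effective_spread, 4))  # 0.01
print(round(vwap, 4))              # 20.0139
```

Note that the effective spread (0.01) comes out smaller than the quoted spread (0.02) because the trade executed inside the quotes, which is why the glossary describes it as the better measure of execution cost.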
In early 2001, U.S. stock and option markets began quoting prices in decimal increments rather than fractions of a dollar. At the same time, the minimum price increment, or tick size, was reduced to a penny on the stock markets and to 10 cents and 5 cents on the option markets. Although many believe that decimal pricing has benefited small individual (retail) investors, concerns have been raised that the smaller tick sizes have made trading more challenging and costly for large institutional investors, including mutual funds and pension plans. In addition, there is concern that the financial livelihood of market intermediaries, such as the broker-dealers that trade on floor-based and electronic markets, has been negatively affected by the lower ticks, potentially altering the roles these firms play in the U.S. capital market. GAO assessed the effect of decimal pricing on retail and institutional investors and on market intermediaries. Trading costs, a key measure of market quality, have declined significantly for retail and institutional investors since the implementation of decimal pricing in 2001. Retail investors now pay less when they buy and receive more when they sell stock because of the substantially reduced spreads—the difference between the best quoted prices to buy or sell. GAO's analysis of data from firms that analyze institutional investor trades indicated that trading costs for large investors have also declined, falling by 30 to 53 percent. Further, 87 percent of the 23 institutional investor firms we contacted reported that their trading costs had either declined or remained the same since decimal pricing began. Although trading is less costly, the move to the 1-cent tick has reduced market transparency. Fewer shares are now generally displayed as available for purchase or sale in U.S. markets. 
However, large investors have adapted by breaking up large orders into smaller lots and increasing their use of electronic trading technologies and alternative trading venues. Although conditions in the securities industry overall have improved recently, market intermediaries, particularly exchange specialists and NASDAQ market makers, have faced more challenging operating conditions since 2001. From 2000 to 2004, the revenues of the broker-dealers acting as New York Stock Exchange specialists declined over 50 percent, revenues for firms making markets on NASDAQ fell over 70 percent, and the number of firms conducting such activities shrank from almost 500 to about 260. However, factors other than decimal pricing have also contributed to these conditions, including the sharp decline in overall stock prices since 2000, increased electronic trading, and heightened competition from trading venues.
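The narrowing of spreads described above can be made concrete with simple arithmetic. The sketch below is hypothetical: it assumes spreads equal to the minimum tick before and after decimalization (1/16 of a dollar versus 1 cent), a simplification, but it illustrates the scale of the change in the cost of crossing the spread on a round-trip trade.

```python
# Hypothetical round-trip (buy then sell) cost comparison for a retail order,
# assuming the spread equals the minimum tick -- a simplification.
shares = 1000

pre_spread = 1 / 16     # $0.0625 minimum fractional tick before 2001
post_spread = 0.01      # 1-cent minimum tick after decimal pricing began

pre_cost = shares * pre_spread      # cost of crossing the spread, pre-decimals
post_cost = shares * post_spread    # same trade at a penny spread

print(pre_cost, round(post_cost, 2))   # 62.5 10.0
```

Under these assumptions, the round-trip cost of a 1,000-share trade falls from $62.50 to $10.00; actual savings depend on how far quoted spreads narrowed for each stock.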
DOD’s ADM facility is to specialize in manufacturing biologics, with a focus on producing antibodies and vaccines. Until recently, the manufacture of biologic medical countermeasures has required a single facility to produce a single product (e.g., a vaccine), and extensive cleaning and sterilization of equipment was required to switch from manufacturing one product to another. However, recent technological advancements have made “flexible manufacturing” possible. These technologies include the use of disposable equipment, such as equipment for growing cell cultures in disposable plastic material systems rather than in stainless steel tanks that require more time to clean and sterilize prior to the next use, and the use of modular sterile rooms to allow for the manufacture of multiple products simultaneously within a given facility. In the advanced research and development stage, potential medical countermeasures are further evaluated to demonstrate their safety and efficacy for preventing, diagnosing, or treating disease. Successful products are then available for final development and procurement. As we reported in 2014, DOD is one of several agencies, along with HHS, involved in addressing and countering biological threat agents. As illustrated in figure 1, both DOD and HHS have specific biological medical countermeasure needs, some of which are shared. According to officials with HHS’s Office of the Assistant Secretary for Preparedness and Response and DOD’s Joint Program Executive Office for Chemical Biological Defense (hereafter referred to as DOD’s ADM program office), a driving factor for the establishment of the HHS CIADMs was the H1N1 influenza pandemic of 2009 and the difficulty HHS had ensuring that the United States had an adequate supply of pandemic influenza vaccine as well as other medical countermeasures for emerging infectious diseases that are necessary to protect the public’s health. 
Driving factors for DOD’s establishment of DOD’s ADM facility were the difficulties experienced in attracting large, experienced pharmaceutical manufacturers to develop and manufacture needed biologic medical countermeasures to mitigate the health effects of biological agents and naturally occurring diseases on armed forces personnel. DOD and HHS commissioned a joint analysis of alternatives for the development of emergency medical countermeasures that was published in June 2009 (hereinafter referred to as the 2009 analysis of alternatives). This analysis was followed by the January 27, 2010, State of the Union Address, in which the President announced the Reinventing the Medical Countermeasure Enterprise Initiative “that will give us the capacity to respond faster and more effectively to bioterrorism or an infectious disease.” The National Security Staff then conducted an interagency strategy and policy review and, in December 2010, The White House called for the Secretary of Defense to, among other things, “establish agile and flexible advanced development and manufacturing capabilities to support the development, licensure, and production of medical countermeasures.” Part of DOD’s strategy to address emerging and genetically modified biological threats was to establish a new capability for advanced development and manufacturing of DOD-unique medical countermeasures, which included the construction of an ADM facility in Alachua, Florida. At about the same time, HHS began to establish its three CIADM capabilities. In figure 2, we provide a timeline of efforts that led to the development of DOD’s and HHS’s respective ADM capabilities. 
According to officials with DOD’s ADM program office and ADM contractor, the ADM capability comprises more than the physical facility in Alachua, Florida—including, for example, other sites around the continental United States, such as fill and finish facilities, and the ADM contractor’s network of 35 different partner companies that provide services in various areas such as testing and cell or virus banking. Officials from both DOD and HHS said that their departments have coordinated to develop their ADM and CIADM facilities, with agency officials serving on one another’s contract evaluation panels and governance boards. For example, according to the advisory board charter for DOD’s ADM capability, the board consists of officials from several DOD agencies as well as HHS’s Biomedical Advanced Research and Development Authority. DOD officials also noted that they serve on the HHS CIADM steering committee and the Public Health Emergency Medical Countermeasures Enterprise, which have an oversight role for HHS’s CIADMs. DOD officials further noted that the two departments had considered the idea of a joint contract bid until HHS issued its solicitation about 6 months earlier than DOD, since HHS was concentrating on pandemic influenza requirements while DOD was looking for a capability to address a wider range of chemical and biological threats to members of the armed services. DOD addressed each of the required six elements in its October 2016 report to Congress on the department’s ADM facility. Table 1 outlines the information DOD provided. We identified additional information regarding DOD’s ADM capability that may be useful to Congress in its oversight of the program. Moreover, this additional information may be particularly useful as DOD makes decisions on whether and how to renew its contract for 2-year option periods with the private-sector biopharmaceutical company that constructed the ADM facility. 
DOD stated in its report that it will determine whether to exercise future contract option periods that extend the existing contract for the ADM capability by examining factors including, but not limited to, contractor performance, facility utilization, and urgent and/or emerging requirements. Table 2 summarizes the elements required in the National Defense Authorization Act for Fiscal Year 2016 and the additional information that we analyzed from DOD, HHS, and their contractors regarding information that may be useful to Congress. The following is information that we identified in addition to the information that DOD provided to address each required element. DOD’s report noted, among other things, that the facility is 180,000 square feet and capable of producing up to 1.5 million doses of medical countermeasures within 3 months of a federal government request, with a surge capacity of up to 12 million doses. DOD’s report also stated that the facility produces at a scale that is suitable for DOD’s needs, is capable of complying with Current Good Manufacturing Practices manufacturing at biological safety level (BSL) 3-capable containment, offers surge capability, and has additional room for expansion on site. DOD reported that the ADM facility currently consists of two manufacturing suites with the capability to support up to four production lines, with options for adding up to three additional manufacturing suites. DOD also reported some information about the modular, single-use type of equipment found in the facility. Additionally, DOD’s report stated that the ADM facility contractor, per its contract with DOD, provides additional capability and services through a network of industry partners and through contractor staff not located at the facility in Alachua, Florida. During our review, we identified additional information that serves to clarify the potential for expanding the capabilities and capacity of DOD’s ADM capability. 
For example, the DOD ADM facility is located on 29 acres of land within a secured perimeter and protected by motion-activated infrared cameras. In discussions with DOD program officials and with the ADM contractor, we learned that two of the additional three manufacturing suites (i.e., suites three and four) could be developed within the current structure of the building at the discretion of DOD, while a fifth manufacturing suite could eventually be built by expanding the building’s perimeter, if needed. According to DOD officials, these additional suites, as well as the existing two manufacturing suites, are compliant with Current Good Manufacturing Practices. The facility currently uses DOD-purchased bioreactors with capacity for up to 500 liters each, although ADM contractor officials informed us that there is enough space in some manufacturing areas for bioreactors with capacity for up to 2,000 liters. A more detailed description of the facility and its DOD-purchased equipment—including photographs of the equipment—can be found in appendix II. Regarding DOD’s inclusion of validated requirements in its report to Congress, DOD reported that the requirement for the ADM capability originated from a memorandum in December 2010 from The White House to the Secretary of Defense. According to DOD officials with the Joint Requirements Office and the Chemical Biological Defense Program, although the requirement for the ADM capability was somewhat unique in its origins, infrastructure projects are normally not validated through the department’s Joint Capabilities Integration and Development System. 
According to DOD officials, the specific medical countermeasures (e.g., vaccines) produced by the ADM capability are to have a validated requirement through the department’s Joint Capabilities Integration and Development System, while the means of production—such as an ADM capability—will be determined by the program office that manages the acquisition of products to serve as medical countermeasures. Throughout the course of our review, we identified additional information about the requirements process for the ADM capability. DOD officials with the ADM program office told us that the requirement for the ADM capability was validated by the direction of the Secretary of Defense to create such a capability, or was what a DOD official called a “directed requirement.” Upon receipt of the memorandum from The White House, the Deputy Secretary of Defense responded that DOD would align its medical countermeasure efforts with The White House vision for strengthening protection against infectious disease, in part by recommending funding starting in fiscal year 2012 to support rapid advanced development of medical countermeasures. According to ADM program officials, this direction was then disseminated through the Office of the Secretary of Defense until it reached DOD’s ADM program office. Direction for creating the ADM capability also is captured in the following documents referring to DOD’s Chemical and Biological Defense Program: DOD Chemical and Biological Defense Program Fiscal Year 2012-2017 Program Strategy Guidance Implementation Plan and the Fiscal Year 2014-2018 Program Strategy Guidance Implementation Plan. 
DOD included in its report to Congress the program goals and performance metrics articulated in presidential memorandums to establish “agile and flexible advanced development and manufacturing capabilities to support the development, licensure, and production of Medical Countermeasures that address the needs of our military and the Nation.” With respect to performance metrics, DOD has established metrics in the contract for the ADM facility that it monitors periodically in conjunction with the contractor. DOD stated in its report that it will determine whether to exercise future contract option periods that extend the existing contract for the ADM capability by examining factors including, but not limited to, contractor performance, facility utilization, and urgent and/or emerging requirements. The report further states that the performance of the ADM contractor during the facility’s operations will be measured based on its performance against the metrics of individual product (e.g., vaccine) contracts. During our review, we identified additional information regarding DOD’s goals and metrics. 
For example, in the Acquisition Strategy and Plan for the Advanced Development and Manufacturing Prototype Capability for Medical Countermeasures and the ADM contract’s statement of objectives, we identified program objectives that collectively clarified DOD’s overall program goal for the ADM facility:
- allowing third parties to mature and provide products to the government by leveraging the ADM capability while ensuring protection of intellectual property;
- providing streamlined capability that reduces cost and schedule risk;
- providing capabilities to rapidly respond to chemical, biological, radiological, and nuclear events, as well as emerging and genetically modified infectious diseases, by producing Food and Drug Administration-approved products or the expanded production of existing products;
- providing strategies for supporting and facilitating the transition of processes and technologies from DOD-affiliated science and technology organizations; and
- providing assistance and training in drug development and manufacturing.
Other information we reviewed addressed the evaluation of the contractor during the “base period” (i.e., the period in which the facility will be built by the contractor and accepted by DOD) and may be useful in demonstrating to Congress that oversight and accountability have been built into this public-private partnership contract. DOD’s ADM contract and discussions with DOD’s ADM program officials indicate that there are multiple metrics by which DOD assesses the performance of the contractor during the construction of the facility. For example, the contract requires the tracking of metrics such as technical performance, work product quality, contract management, and earned value management system data as part of a quality assurance surveillance plan. The ADM contract also requires the contractor to provide a number of reports to DOD on a monthly basis. 
For example, the contract data requirements list requires the ADM contractor to provide, among other things, a contract work breakdown structure that discusses the elements for which the contractor is responsible and a master government property list, which provides information on government property such as the cost of an item. Additionally, according to the contract, within 30 days following completion of facility validation, an ADM Final Technical Closeout Report must be completed to document the completion of the base period, including the achievement of all milestones and requirements. According to DOD officials, milestones for completion of the ADM facility include: (1) completion of construction activities; (2) installation of equipment in laboratory and clean room spaces; and (3) completion of all commissioning, qualification, and validation activities. With respect to operations and maintenance costs, we identified additional information during our review that may be of use to Congress in its oversight of the program. DOD noted in its report to Congress that the ADM contract at completion is approximately $205 million and that there was neither dedicated funding in fiscal years 2015 and 2016, nor a request for fiscal year 2017 funding for the ADM capability. This contract completion cost includes an initial, fixed fee of approximately $18 million to the contractor, as well as costs associated with planning, architectural design, and the purchase of manufacturing equipment (for a more detailed discussion of items paid for by DOD, see appendix II). DOD’s report to Congress noted that there are no procurement or operations and maintenance budget line item costs directly associated with the facility in upcoming DOD budget requests and included a discussion of future sustainment payments for the ADM capability. 
Specifically, DOD’s report acknowledged that under contract options, should DOD exercise them, DOD would provide a sustainment payment to the ADM contractor to ensure that the contractor provides DOD with priority access to the ADM facility. Each contract option is to be for 2 years, with the last contract option available from 2022 through 2024. The sustainment payment for the first contract option period, which began on April 1, 2017, was originally negotiated for approximately $18 million each year, but DOD said in its report to Congress that it was actively renegotiating the terms and amount of the sustainment option before awarding the option to the ADM contractor and anticipated that the payments would be less than the original amount. DOD’s report said that the department will pay sustainment costs for the ADM capability from medical countermeasures programs requiring manufacturing and development activities in the year of budget execution. We reviewed additional information that clarifies the relationship between the annual sustainment payment identified in the ADM contract options and the operations and maintenance costs of the ADM capability, as well as DOD’s budgeting for the sustainment payments. DOD’s sustainment payments for priority access to the ADM capability will be budgeted for as a cost of developing medical countermeasures (e.g., vaccines), according to officials from DOD’s ADM program office, a funding structure similar to the model used with DOD-owned laboratories. For example, DOD’s ADM program officials said that within the Chemical Biological Defense Program, of which they are a part, core DOD laboratories that provide critical infrastructure capabilities supporting the program sustain their capabilities by applying an indirect fee to Chemical Biological Defense Program-resourced projects. 
ADM program officials further stated that the annual sustainment payments will be used to retain trained personnel and maintain the equipment and systems in a ready state to support medical countermeasures development when program lines are ready to use the capabilities. Based on our discussions with DOD and ADM contractor officials, the total costs to ADM capability contractor Nanotherapeutics, Inc., hereinafter referred to as Nanotherapeutics, to operate and maintain the ADM facility—which are separate from and in addition to the costs in the initial contract with DOD for building the facility—were not fully known at the time of this report and were not fully covered by the DOD-provided sustainment payments. The contractor’s executives told us that they were learning more about the costs of operating the facility as it becomes operational and believe that overhead costs, such as personnel, may not be as significant as first expected. According to the ADM contractor’s executives, DOD’s sustainment payments should represent approximately 25 percent of this overhead cost for operating the ADM facility. As noted earlier, DOD is working to renegotiate the amount of the sustainment payments based upon several changes, such as changes in facility size, the number of employees in the facility, and the sale of the contractor’s building and the land for the ADM facility. Further, as the cost-benefit analysis portion of the DOD report noted, the sustainment payments are not fixed at the amount negotiated by DOD and the contractor, but may be reduced through funded work. As the DOD report states, there is some uncertainty about the extent to which each dollar of funded work will offset a dollar of overhead cost (i.e., the costs covered in part by DOD’s sustainment payments). Nanotherapeutics executives noted, for example, that there can be great variations in the cost of labor and materials for some contracts, although other cost elements remain more fixed. 
During our review, we learned that the contractor and DOD have taken some initial steps toward bringing additional funded work to support the DOD ADM capability, which may help to reduce DOD’s sustainment payments under the contract options. First, executives from the ADM contractor stated that they were actively seeking additional work from both the federal government and the private sector, and had recently been awarded new contract work through HHS’s National Institute of Allergy and Infectious Diseases. Second, included within the noncompetitive contracting mechanisms discussed in the cost-benefit analysis portion of the DOD report to Congress was Other Transaction Authority. DOD’s ADM program officials informed us that DOD used this authority in April 2016 to establish a consortium through which the department may be able to award some DOD medical countermeasures efforts to the ADM facility while retaining some of the benefits of competition, since the ADM contractor is a member of the consortium. DOD officials explained that because this consortium operates under Other Transaction Authority, it provides DOD with more flexibility to negotiate with contractors and to arrange for some subcontracted work to go through the ADM facility, as well as provide access to industry expertise and collaboration, among other things. DOD officials also expect the consortium to provide its members with a flexible contracting vehicle capable of multiple taskings with a single set of terms and conditions. DOD officials informed us that the ADM capability is likely to receive additional DOD work through the use of the Other Transaction Authority consortium. According to the cost-benefit analysis conducted by the Institute for Defense Analyses, additional DOD work would reduce annual sustainment payments, while increasing time saved by DOD. 
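The offset mechanism described above can be sketched numerically. All figures below are hypothetical (the report notes that the negotiated payment of roughly $18 million per year was being renegotiated and that the offset per dollar of funded work is uncertain); the sketch shows only the direction of the relationship between funded work and the sustainment payment.

```python
# Hypothetical model: DOD's annual sustainment payment reduced by overhead
# recovered from funded work brought to the ADM facility.
base_payment = 18_000_000   # approximate negotiated annual payment (per the report)
offset_rate = 0.25          # assumed overhead recovered per dollar of funded work

def sustainment_due(funded_work_dollars):
    """Annual payment remaining after offsets, floored at zero."""
    offset = offset_rate * funded_work_dollars
    return max(base_payment - offset, 0)

print(sustainment_due(0))             # 18000000.0 -- full payment, no funded work
print(sustainment_due(40_000_000))    # 8000000.0
print(sustainment_due(80_000_000))    # 0 -- payment fully offset
```

The 25 percent offset rate is an assumption chosen for illustration, not a figure from the contract; under any positive rate, additional DOD or non-DOD funded work reduces the payment due.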
In its report to Congress, DOD included results from the 2009 analysis of alternatives for the Secretaries of Defense and the Department of Health and Human Services, which informed the federal government’s decision to create both DOD’s ADM capability and HHS’s CIADM capabilities. As summarized in DOD’s report, the 2009 analysis of alternatives attempted to address a gap in the production and manufacturing of medical countermeasures against weapons of mass destruction. In the analysis, The Quantic Group, Ltd., and Tufts Center for the Study of Drug Development focused on three alternative methods of producing medical countermeasures: (1) continuing to contract with private-sector pharmaceutical companies for the production of medical countermeasures, (2) continuing existing methods while strengthening regulatory and sourcing capabilities and gaining enhanced access to development and manufacturing, and (3) building government facilities for the purpose of producing all medical countermeasures. We identified additional information regarding HHS’s CIADM capabilities as an alternative to the DOD ADM capability. As noted earlier, the 2009 analysis of alternatives jointly supported the DOD ADM and the HHS CIADM capabilities. However, since neither the DOD nor HHS capabilities existed at the time of the 2009 analysis (the contracts were signed in 2013 and 2012, respectively), the analysis did not consider HHS’s CIADMs as alternatives for DOD or DOD’s ADM capability as an alternative for HHS. HHS issued a request for contract proposals for the CIADMs in March 2011, 5 months before DOD issued its request for contract proposals for the DOD ADM. However, even though the HHS CIADMs were not analyzed as alternatives to the DOD ADM capability, HHS officials said that DOD could separately contract for medical countermeasures with any of HHS’s CIADMs either independently or through existing HHS CIADM contracts. 
Additionally, a senior official with DOD’s ADM program office informed us that the program office constantly assesses its portfolio, and maintains awareness of the HHS CIADMs through DOD’s participation in HHS’s Public Health Emergency Medical Countermeasures Enterprise—an interagency body—and the CIADM governing board. Although officials from DOD’s ADM program office stated that the HHS CIADMs are not appropriate for DOD’s needs—with one official noting that they are large dedicated facilities designed primarily to address pandemic influenza threats—the cost-benefit analysis for DOD’s ADM capability conducted by the Institute for Defense Analyses, as well as our own observations, suggest otherwise. Based on discussions with CIADM and HHS officials and some CIADM contractor documents, all three of the HHS CIADMs plan to use flexible manufacturing technologies in at least a portion of their facilities and may be capable of addressing DOD’s flexible manufacturing needs. At least one CIADM official has testified about this capability, noting that upon completion at least 50 percent of the CIADM capabilities will be available for non-HHS projects. Officials from two of the CIADMs informed us that they could potentially address some of DOD’s medical countermeasure manufacturing needs, to include potentially providing priority access to the CIADM capabilities under a contract. In addition, an official from one CIADM informed us that the CIADM’s contractor currently is producing medical countermeasures for DOD. An official with the ADM program office said that DOD is represented on the governing board for the CIADMs and is aware of what HHS is doing there, so CIADM information can be taken into consideration along with ADM performance and utilization metrics as DOD considers future contract extensions for the ADM capability. See appendix III for more information on the HHS CIADM capabilities. 
In DOD’s report to Congress, the department presented the results of a 2016 independent, DOD-commissioned cost-benefit analysis conducted by the Institute for Defense Analyses. During our review, we identified additional information that may add clarity to various aspects of the cost-benefit analysis. DOD’s contracted analysis compared the cost and benefits, schedule, and performance of continued DOD investment in the DOD-dedicated ADM capability with a set of available alternatives. The cost-benefit analysis also reviewed the results of a study conducted by Tufts University in 2015 to determine whether the “sunk” costs (i.e., costs incurred in the past that will not be affected by any present or future decision) of constructing the DOD ADM facility were of an appropriate magnitude. DOD reported that, per the results of the Institute for Defense Analyses-conducted cost-benefit analysis, with the exception of certain potential benefits that are hard to quantify, the benefit of having a DOD-dedicated ADM capability was largely focused on the priority access to the manufacture of biologic medical countermeasures guaranteed to DOD through the sustainment payments. The cost-benefit analysis quantified this benefit as potentially saving 13 to 28 months of production time over the future years defense program—which captures and summarizes forces, resources, and programs associated with all DOD operations approved by the Secretary of Defense—and 23 to 50 months of production time over the course of current manufacturing production projections for medical countermeasures. The cost-benefit analysis also concluded that this priority access could come at a cost of between $55 million and $76 million over the future years defense program (and between $93 million and $136 million over the course of current manufacturing production projections). 
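One way to interpret these reported ranges is the implied price of each month of production time saved. The sketch below is our own illustration, not part of the Institute for Defense Analyses analysis; it simply pairs the reported dollar endpoints with the reported month endpoints (an assumption, since the analysis does not state which cost corresponds to which time savings) to bound the cost per month over the future years defense program:

```python
def cost_per_month(cost_low, cost_high, months_low, months_high):
    # Bound the implied cost of each month of production time saved:
    # the cheapest case pairs the low cost with the most months saved,
    # the most expensive case pairs the high cost with the fewest months.
    return cost_low / months_high, cost_high / months_low

# Reported future years defense program figures: $55M-$76M for 13-28 months.
low, high = cost_per_month(55e6, 76e6, 13, 28)
# low is roughly $2.0 million, high roughly $5.8 million, per month saved
```

A bounding calculation like this does not replace a monetized benefit estimate, but it gives decision makers a rough scale for what priority access buys.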
The cost-benefit analysis noted that DOD could offset some or all of this cost if the DOD-dedicated ADM facility received sufficient DOD and non-DOD funded work to offset DOD’s annual sustainment payments to the contractor. Our review of the cost-benefit analysis suggests that it can help inform decision makers about the potential economic effects of DOD’s investment. We also identified additional information that would be useful for Congress in evaluating or interpreting the results of the DOD-commissioned cost-benefit analysis. Specifically, we reviewed the ADM cost-benefit analysis using selected key elements, based on economic guidance from the Office of Management and Budget and other sources, to determine whether the cost-benefit analysis provided evidence to decision makers of the potential economic effect of DOD’s continued investment in the ADM capability. Based on this review, we identified the following regarding the cost-benefit analysis:

- The cost-benefit analysis did not estimate the monetary value of the potential benefits of the DOD-dedicated ADM capability, such as those associated with priority access, making it unclear whether DOD’s continued investment in the ADM capability is economically justified (e.g., whether the benefits exceed the costs).

- Future costs were not discounted.

- The analysis did not clearly discuss the baseline that was used to estimate incremental costs and benefits (i.e., Institute for Defense Analyses officials explained to us that the ADM capability was evaluated against a baseline that was not a single facility, but rather a combination of the HHS CIADMs and other similar facilities owned by private contract manufacturing organizations).

- The analysis assumed that development and manufacturing costs of the DOD-dedicated ADM capability and the alternatives would be roughly comparable, but did not assess some plausible adjustments to this assumption in a sensitivity analysis. 
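The observation that future costs were not discounted can be illustrated with a short sketch. The payment stream and the 7 percent rate below are hypothetical assumptions of ours; Office of Management and Budget Circular A-94 prescribes the discount factor 1/(1 + i)^t but not these particular inputs:

```python
def present_value(annual_costs, rate):
    # Apply the Circular A-94 discount factor 1/(1 + i)^t to each future
    # year's cost, where t counts years from the program's initiation.
    return sum(c / (1 + rate) ** t
               for t, c in enumerate(annual_costs, start=1))

# Hypothetical sustainment payments: $15 million per year for 5 years.
payments = [15.0] * 5                       # millions of dollars
undiscounted = sum(payments)                # 75.0
discounted = present_value(payments, 0.07)  # about 61.5 at a 7 percent rate
```

Because discounted totals are lower than undiscounted ones, reporting only undiscounted figures can overstate the present-value cost of priority access, which is why discounting is a standard element of this kind of analysis.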
According to the Institute for Defense Analyses official overseeing the analysis, data used to develop the estimate of time savings—the primary benefit of having a DOD-dedicated ADM capability, according to the analysis—were anecdotal and were not assessed for reliability due to time constraints. In addition, the official noted that changes in the industry, including manufacturing practices whose use has only recently become commonplace, would limit the usefulness of retrospective studies because older data and practices are not comparable to current data and practices. (On discounting, Office of Management and Budget guidance calls for discounting future costs by a factor of 1/(1 + i)^t, where i is the interest rate and t is the number of years from the date of initiation for the program or policy until the given future year. See Office of Management and Budget, Circular A-94 (Oct. 29, 1992).) Additionally, the Institute for Defense Analyses’ cost-benefit analysis reviewed a study previously conducted by Tufts University in 2015 to determine whether the ADM capability’s sunk costs were of an appropriate magnitude. In its review, the Institute for Defense Analyses concluded that the Tufts University assessment was reasonable and provided a brief explanation of the Tufts University sunk-cost analysis, stating that the 2015 Tufts University assessment demonstrated that the costs of building the facility were within the expected bounds for the project. The 2015 Tufts University sunk-cost analysis may provide additional information in understanding the degree to which “the manufacturing and privately financed construction” of the DOD ADM facility is justified. We are not making any recommendations in this report. DOD and HHS reviewed a draft of this report and provided us with technical comments, which we incorporated where appropriate. 
We are sending copies of this report to the appropriate congressional committees; the Secretaries of Defense and Health and Human Services; the Under Secretary of Defense for Acquisition, Technology, and Logistics; the Assistant Secretary of Defense for Nuclear, Chemical, and Biological Defense Programs; the Deputy Assistant Secretary of Defense for Chemical and Biological Defense; the Chairman of the Joint Chiefs of Staff; the Secretary of the Army; and the Directors of the Institute for Defense Analyses and the Office of Management and Budget. If you or your staff have any questions concerning this report, please contact Joseph W. Kirschbaum at (202) 512-9971 or KirschbaumJ@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. This report is a public version of a sensitive report that we issued in May 2017. The Departments of Defense (DOD) and Health and Human Services (HHS) deemed some of the information in our May report to be sensitive, and that information must be protected from public disclosure. Therefore, this report omits sensitive information about DOD’s advanced development and manufacturing (ADM) facility and HHS’s three Centers for Innovation in Advanced Development and Manufacturing (CIADM) facilities. Although the information provided in this report is more limited, the report addresses the same objectives as the sensitive report and uses the same methodology. In this report, we (1) describe the information that DOD included in its report to address the six elements required by the National Defense Authorization Act for Fiscal Year 2016, and (2) present additional information related to each element that may be useful to Congress in its oversight role regarding DOD’s ADM capability. 
To address our objectives, we compared the six elements required by the National Defense Authorization Act for Fiscal Year 2016 with DOD’s report to Congress to meet the congressional mandate and with the cost-benefit analysis included in the 2016 DOD-commissioned Institute for Defense Analyses report to DOD that was also submitted to Congress. We reviewed DOD’s report, the cost-benefit analysis conducted for DOD by the Institute for Defense Analyses and incorporated into DOD’s report, and documents from the Institute for Defense Analyses that supported its study. Additionally, we interviewed and obtained documentation from officials from relevant organizations within both DOD and HHS as follows:

Department of Defense:
- Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics
- Office of the Assistant Secretary of Defense for Nuclear, Chemical, and Biological Defense Programs
- Office of the Deputy Assistant Secretary of Defense for Chemical and Biological Defense/Chemical and Biological Defense Program
- Joint Science and Technology Office for Chemical and Biological Defense
- Joint Requirements Office for Chemical and Biological Defense
- Office of the Assistant Secretary for Acquisition, Logistics, and Technology
- Office of the Deputy Chief of Staff for Programming (G-8)
- Joint Program Executive Office for Chemical and Biological Defense
- Medical Countermeasure Systems Joint Project Manager

Federally Funded Research and Development Center:
- Institute for Defense Analyses

Department of Health and Human Services:
- Office of the Assistant Secretary for Preparedness and Response
- Biomedical Advanced Research and Development Authority
- Public Health Emergency Medical Countermeasures Enterprise

Additionally, we conducted site visits to compare the DOD ADM facility with the HHS CIADM facilities. 
Specifically, we visited the DOD ADM facility operated by Nanotherapeutics in Alachua, Florida, and two of HHS’s three CIADM facilities—the CIADMs operated by Texas A&M University System in College Station, Texas, and by Emergent BioSolutions, Inc., in Baltimore, Maryland. We also obtained relevant documentation regarding all three contract organizations, their facilities, relevant technologies, and their contracts with DOD and HHS. Due to the sensitive nature of the contract negotiations underway at the time of our audit work, we were unable to visit or otherwise meet with officials from HHS’s third CIADM facility in Holly Springs, North Carolina, which at the time was contracted to Novartis Aktiengesellschaft. In lieu of this site visit, we met with senior officials from HHS’s Office of the Assistant Secretary for Preparedness and Response to discuss the North Carolina CIADM facility. In late December 2016, HHS informed us that bioCSL/Seqirus had become recognized by the federal government as the owner and operator of the HHS CIADM facility in Holly Springs, North Carolina. We compared the information we obtained through these visits, as well as information from DOD’s October 2016 report to Congress, with the initial criteria laid out in the National Defense Authorization Act for Fiscal Year 2016. To further assess the extent to which DOD had conducted an independent cost-benefit analysis of the ADM facility, we reviewed the cost-benefit analysis conducted for DOD by the Institute for Defense Analyses using key characteristics of an economic analysis based on principles and guidance from the Office of Management and Budget (e.g., Circular A-94) and other sources. Such key characteristics include: (1) objective and scope, (2) alternatives, (3) analysis of effects, (4) sensitivity analysis, and (5) documentation. 
For example, for the objective and scope characteristic, we examined the extent to which the analysis clearly stated its objective and the question that it intended to address. For the alternatives characteristic, we examined the extent to which the analysis considered all relevant alternatives, including that of no action. For the analysis of effects characteristic, we examined the extent to which the analysis quantified and assigned a monetary value to the benefits and costs using the concept of opportunity cost. For the sensitivity analysis characteristic, we examined the extent to which the analysis explicitly addressed how plausible adjustments to each important analytical choice and assumption affected the estimates of benefits and costs. Finally, for the documentation characteristic, we examined the extent to which the analysis was clearly written, with a plain language summary and transparent tables that describe the data used and the results, and a conclusion that is consistent with the results. In addition, we interviewed DOD and Institute for Defense Analyses officials to obtain information about the analysis. Further, we interviewed officials from DOD, HHS, the DOD ADM facility, and two of the three HHS CIADMs to obtain information about medical countermeasures manufacturing facilities. The performance audit upon which this report is based was conducted from June 2016 to May 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and concluding observations based on our audit objectives. We subsequently worked with DOD and HHS in June 2017 to prepare this unclassified version of the original sensitive report for public release. 
This public version was also prepared in accordance with these standards. The Department of Defense (DOD) advanced development and manufacturing (ADM) facility is a 180,000 square-foot biologics ADM facility located in Alachua, Florida. It was created in 2013 through a public-private partnership between DOD and Nanotherapeutics, Inc., a private-sector biopharmaceutical company hereinafter referred to as Nanotherapeutics. According to ADM program office and contractor officials, Nanotherapeutics paid for the construction of the building and DOD paid for the design and equipment. Upon completion of the base period (i.e., the period in which the facility will be built by the contractor and accepted by DOD) for DOD’s contract with Nanotherapeutics, DOD is to have priority access to the facility in exchange for an annual sustainment payment (paid monthly, according to ADM contractor officials) if the department chooses to exercise the optional contract periods. Figure 3 shows an external view of DOD’s ADM facility. The facility has two biological safety level (BSL)-3 manufacturing suites compliant with Current Good Manufacturing Practices, with a total of four production lines. It sits within a secured perimeter monitored by motion-activated infrared cameras. Some initial capabilities came online in August 2016, and DOD officials said that the facility became fully operational in March 2017. The facility was constructed with potential expansion in mind. The facility includes an unfinished space where—according to ADM program office and contractor officials—two additional manufacturing suites can be built with DOD’s permission. Further, the facility sits on an approximately 29-acre site that provides room for the expansion of the building, a portion of which may be used for an additional manufacturing suite, according to an ADM program official. 
According to the contractor, expansion into the unfinished interior space is solely at the discretion of DOD, which owns the space, while Nanotherapeutics has the right to choose to expand the building at its own initiative, without DOD approval. Two images of DOD’s ADM facility were redacted because DOD deemed the images to be sensitive and for official use only. According to representatives from Nanotherapeutics, the facility has two separate outside electricity feeds for redundancy and has a backup generator that can meet the facility’s electricity needs for up to 4 days (see fig. 4). The BSL-3 area has its own independent, high-efficiency particulate air-filtered, air-handling systems. The facility has a chilled water generator, as well as water purifying systems that include a system to provide purified water and another system to provide and dispense water for injection, which is used in the manufacturing of drug products. According to Nanotherapeutics officials, DOD owns the manufacturing equipment (see fig. 5) as well as some building infrastructure, such as the facility’s heating, ventilation, and air conditioning systems. The facility employs single-use technology, in the form of the GE Healthcare LifeSciences’ FlexFactory biomanufacturing platform, to provide more flexible manufacturing that reduces downtime between production runs. The facility can support manufacturing from 4.5 liters to multiple 1,000-liter production lines and uses 50- to 500-liter bioreactors. Nanotherapeutics officials told us that, although the facility is advertised to handle up to 1,000-liter bioreactors, the manufacturing space can handle 2,000-liter bioreactors in certain areas with taller ceiling spaces. 
Figure 6 shows a bioreactor (bottom right), a device in which living cells synthesize useful substances; a fermentor (left), used in the production of biologics to cultivate microorganisms, such as bacteria; and an autoclave (top right) for steam sterilization through the exposure of items to a certain temperature or pressure for a specified period of time. The autoclave shown below is used to minimize cross-contamination in quality control testing. In March 2016, Nanotherapeutics sold the property associated with the ADM facility to a real estate investment trust, renting the property back from the trust under a 15-year lease. The sales agreement does not include DOD-owned property at the location, which—according to ADM contractor officials—includes the building’s heating, ventilation, and air conditioning systems. According to Nanotherapeutics executives, this sale-and-leaseback was executed to reduce the financial costs to the contractor resulting from the debt associated with building the facility. The Department of Health and Human Services (HHS) has three Centers for Innovation in Advanced Development and Manufacturing (CIADM) facilities located in Texas, Maryland, and North Carolina. The CIADMs are intended to support HHS’s flexible manufacturing of medical countermeasures by providing: (1) surge capacity for manufacturing the pandemic influenza vaccine; (2) core services for the development of chemical, biological, radiological, and nuclear medical countermeasures; and (3) workforce training. The three HHS CIADMs are public-private partnerships between the federal government and contractors, with contracts that involve cost sharing between HHS and each contractor during each contract’s initial phase, or “base period” (i.e., the period in which the facility will be built by the contractor and accepted by HHS). According to HHS officials, though there are commonalities, HHS negotiated each CIADM contract separately, and so each has different terms. 
The HHS CIADMs may serve as alternatives for the Department of Defense (DOD) advanced development and manufacturing (ADM) capability once the CIADMs achieve readiness, according to DOD and HHS officials. The following is contractor and cost information for HHS’s three CIADMs in Texas, Maryland, and North Carolina. Some details about the three CIADMs were redacted because HHS deemed the information to be sensitive and for official use only.

Location: College Station, Texas
Contractor: Texas A&M University System
Cost to HHS: $176.7 million
Cost to Contractor: $108.9 million
Base period ends: December 31, 2017

See figure 7 for a photograph of the Texas A&M facilities.

Location: Baltimore, Maryland
Contractor: Emergent BioSolutions, Inc.
Cost to HHS: $163.2 million
Cost to Contractor: $58.6 million
Base period ends: June 14, 2020

Figure 8 shows the Emergent CIADM as it should look upon its completion in 2017. Emergent informed us that the company is interested in DOD medical countermeasures contracts. An Emergent official noted that the company already produces an auto-injector and several other products for DOD. Emergent also informed us that the company has interest in providing priority access to DOD, though Emergent officials told us that this interest would depend on the specifics of DOD’s needs and the compensation DOD is willing to provide in exchange for that priority access.

Location: Holly Springs, North Carolina
Contractor: Originally Novartis AG; as of December 2016, bioCSL/Seqirus
Cost to HHS: $59.8 million
Cost to Contractor: $26.3 million
Base period ends: December 31, 2016

The North Carolina CIADM was created out of a partnership between HHS and Novartis AG (hereafter referred to as Novartis), an international pharmaceutical manufacturer headquartered in Switzerland. Costs and ownership were shared between HHS and Novartis; HHS officials informed us that the government has a 40-percent stake in the facility. 
During our review, we were informed by HHS officials that HHS was involved in sensitive contract negotiations involving the CIADM following the sale of Novartis’ influenza vaccine business to CSL Limited, an Australian pharmaceutical manufacturer. As such, we discussed this facility only with HHS officials rather than speaking with officials from—or visiting—the North Carolina CIADM facility. In December 2016, HHS officials informed us that HHS had resolved its CIADM contract negotiations with Novartis AG and bioCSL/Seqirus. Seqirus is now recognized by the federal government as the owner and operator of the HHS CIADM facility in Holly Springs, North Carolina. In addition to the contact named above, GAO staff who made key contributions to this report include Mark A. Pross, Assistant Director; Michele Fejfar; Ashley Grant; Timothy Guinane; Mae Jones; Amie Lesser; Bethann E. Ritter Snyder; Sabrina Streagle; Paola Tena; and Edwin Yuen.
DOD has long expressed concerns about its ability to acquire and maintain the capability to research, develop, and manufacture medical countermeasures (e.g., vaccines) against biological warfare threat agents, toxins, and endemic diseases. In 2013, DOD partnered with a private-sector biopharmaceutical company to develop an ADM facility with the capability to use disposable equipment enabling timely changes in a production line for medical countermeasures. The facility was fully operational in March 2017, and DOD can now renew its contract for 2-year periods through 2024. Congress included a provision in the National Defense Authorization Act for Fiscal Year 2016 that DOD, among other things, submit a report to Congress addressing six required elements regarding DOD's ADM facility. DOD submitted its report in October 2016. The act also contained a provision that GAO review the report. GAO (1) describes the information that DOD included in its report to address the six required elements and (2) presents additional information related to the elements that may be useful to Congress in its oversight role. GAO compared DOD's report and cost-benefit analysis with the legislatively required elements and analyzed documents from DOD, HHS, and their private-sector partners. This is a public version of a sensitive report issued in May 2017. Information DOD and HHS deemed sensitive has been omitted. The Department of Defense (DOD) included in its October 2016 report to Congress information that addressed each of the six required elements regarding the department's public-private partnership to construct a facility with an advanced development and manufacturing (ADM) capability. 
In its report to Congress, DOD addressed the six elements that included, among other things: (1) a description of the ADM facility and its capabilities and an explanation of the origin of the ADM capability requirement; (2) information on some of the program goals, high-level performance metrics, and estimated completion costs along with a statement that DOD is not requesting procurement or operations and maintenance funds in the future years defense program for the ADM facility and that sustainment costs will come from existing medical countermeasure programs; (3) a copy of a 2009 analysis of alternatives conducted for the Secretaries of Defense and Health and Human Services (HHS) that DOD stated justifies the ADM capability; (4) and (5) combined, an independent analysis of the incremental cost and benefits, schedule, and performance of continued DOD investment in its ADM facility; and (6) the department's medical countermeasures production plans for the ADM facility. GAO identified additional information related to these elements that may be useful for congressional oversight. This information may be particularly useful as DOD decides whether and how to renew its contract for 2-year option periods with the contractor that constructed the ADM facility. First, DOD's sustainment payments for priority access to the ADM capability will be budgeted as a cost of developing medical countermeasures (e.g., vaccines), a funding structure similar to the model used with DOD-owned laboratories, according to DOD officials. Second, discussions with officials indicate that the total costs to the ADM capability contractor to operate and maintain the ADM facility, which are separate from and in addition to the costs in the initial contract with DOD for building the facility, were not fully known at the time of DOD's report and are not fully covered by the DOD-provided sustainment payments. 
However, GAO learned that the contractor and DOD have taken some initial steps toward bringing additional funded work to the DOD ADM capability, which may help to reduce DOD's sustainment payments under the contract options. Third, the three HHS facilities were not analyzed as alternatives to the DOD ADM facility, although HHS officials said that DOD could separately contract for medical countermeasures with any of HHS's facilities, either independently or through existing HHS contracts. Officials from DOD's ADM program office stated that the HHS facilities are not appropriate for DOD's needs—because they are large dedicated facilities designed primarily to address pandemic influenza threats. However, an official from one of the three HHS facilities informed us that they currently produce medical countermeasures for DOD. An official with the ADM program office said that DOD is represented on the governing board for the HHS Centers for Innovation in Advanced Development and Manufacturing and is aware of what HHS is doing there, so this information can be taken into consideration along with ADM performance and utilization metrics as DOD considers future contract extensions for the ADM capability. GAO is not making recommendations in this report. GAO incorporated agency technical comments, as appropriate.
Since the end of the cold war, there has been a change in the way reserve forces have been used in military operations. During the cold war era, the reserve components were a manpower tool that was rarely tapped. For example, from 1945 to 1989, reservists were mobilized by the federal government only four times, an average of less than once per decade. Since 1990, reservists have been mobilized by the federal government six times, an average of nearly once every 3 years, and have been used extensively to support operations in the global war on terrorism. Since September 11, 2001, about 500,000 reservists have been mobilized, primarily to support operations in Afghanistan and Iraq. This increased use of the reserves has led to greater congressional interest in the types of benefits provided to reservists, including the health insurance provided to reservists and their dependents under TRICARE. Specifically, advocates for expanding TRICARE have suggested that increasing reservists’ access to TRICARE could improve the medical readiness of reservists by facilitating early detection and treatment of medical conditions which otherwise might disqualify a reservist from deploying. Additionally, increased access to TRICARE could smooth the transition to and from active duty for reservists and their dependents, an important factor given the increased mobilizations of reservists. Reservists’ private health insurance coverage is protected by the Servicemembers Civil Relief Act (SCRA) and the Uniformed Services Employment and Reemployment Rights Act of 1994 (USERRA). Included in these acts are protections for reinstating and maintaining reservists’ health insurance. Specifically, when a reservist whose individual coverage was terminated while the reservist was on active duty returns from that duty, SCRA requires private insurance companies to reinstate coverage at the premium rate the reservist would have been paying had coverage not been terminated. 
It also requires insurance companies to cover most preexisting conditions after a reservist’s insurance is reinstated. USERRA allows reservists to elect to keep employer-provided health benefits while the reservists are absent from employment due to active duty, up to a maximum period of 24 months. For absences of 30 days or less, the employer must continue to pay its share of the premium. For absences of 31 days or more, the reservist may elect to continue the civilian coverage, but the employer may charge the reservist the full premium, including the employer contributions. In addition, under USERRA, employers must generally reinstate reservists’ health coverage upon their reemployment and no waiting period or exclusions may be imposed in connection with that reinstatement. The protections found in SCRA and USERRA also apply to the health benefits of a reservist’s dependents, if those dependents were covered under the reservist’s policy prior to his or her active-duty service. Prior to fiscal year 2004, reservists that were not on active duty had limited eligibility for TRICARE. Specifically, they were entitled to receive treatment through TRICARE at a military medical facility for illnesses or injuries incurred during training or periods of active duty. Family members of reservists had generally not been entitled to use TRICARE, but became eligible if the reservist was serving on active duty for more than 30 days. Beginning in fiscal year 2004, Congress made successive changes to TRICARE that included several provisions which significantly expanded access to TRICARE for reservists that are not on active duty, and their dependents. (For a detailed description of the legislative changes that expanded the TRICARE eligibility of reservists, see app. II.) 
The NDAA for Fiscal Year 2004 included a temporary provision in which Congress authorized members of the Selected Reserve and the Individual Ready Reserve to enroll in TRICARE if the reservists were eligible for unemployment compensation or ineligible for health care coverage from their civilian employer. Another temporary provision allowed reservists who had received their active-duty orders to use TRICARE for up to 90 days before their active-duty service began. A third temporary provision extended the length of time that service members could use TRICARE under the Transitional Assistance Management Program (TAMP) to 180 days after they were released from active duty. The NDAA for Fiscal Year 2005 indefinitely extended the provisions that provided up to 90 days of TRICARE coverage to reservists prior to the beginning of active-duty service and 180 days after. It also authorized the program that DOD has named TRICARE Reserve Select (TRS), which makes TRICARE coverage available for purchase by certain reservists after their TAMP coverage ends. As originally authorized, TRS provided the option of purchasing TRICARE coverage to members of the Selected Reserve who were mobilized since September 11, 2001, and who continuously served on active duty for 90 days or more in support of a contingency operation. To qualify for TRS, reservists had to enter into an agreement with their respective reserve components to serve in the Selected Reserve for the number of years that they wished to participate in TRS. They could receive 1 year of coverage for each 90-day period of this qualifying service. Electing to enroll in this TRS program was a one-time opportunity, and as originally authorized, the program required reservists to sign the new service agreement and register for TRS before leaving active duty. Figure 1 describes the various periods of TRICARE eligibility for mobilized reservists and their dependents. 
The NDAA for Fiscal Year 2006 further expanded the number of reservists and dependents eligible to participate in the TRS program. Under the expanded program, which became effective on October 1, 2006, almost all reservists and dependents—regardless of the reservist’s prior active-duty service—have the option of purchasing TRICARE coverage. Similar to the original TRS program, members of the Selected Reserve and dependents choosing to enroll in the expanded TRS program must pay a monthly premium to receive TRICARE coverage. The premium paid by reservists and their dependents for coverage varies based on certain qualifying conditions that must be met, such as whether the reservist has access to an employer-sponsored health plan. Those who would have been eligible under the original TRS program because they have qualifying service in support of a contingency operation pay the lowest premium. In addition, those reservists with qualifying service in support of a contingency operation would now have up to 90 days after leaving active duty to sign the new service agreement required to be eligible for this lowest premium tier. Table 1 describes the Selected Reservists who are eligible to purchase TRS and the associated premiums. The NDAA for Fiscal Year 2007 significantly restructured the TRS program by eliminating the three-tiered premium structure. This law provides that members of the Selected Reserve will be eligible to purchase TRICARE coverage for themselves and their dependents at the 28 percent premium rate regardless of whether they have served on active duty in support of a contingency operation. In addition, eligibility at the 28 percent premium rate will not depend on the length of a service agreement entered into following a period of active-duty service. Instead, reservists will be eligible for TRS for the duration of their service in the Selected Reserve. The law requires DOD to implement these changes no later than October 1, 2007. 
In order to use TRICARE, reservists must establish their own and their dependents’ eligibility in the Defense Enrollment Eligibility Reporting System (DEERS)—the computerized database which DOD uses to store the identity of active-duty members and reservists, and their dependents. Proper registration in DEERS is necessary to use TRICARE. Reservists are automatically registered in DEERS by reserve component administrative personnel, but reservists must register their dependents and ensure that those dependents are correctly entered into the database. Although TRICARE is administered by TMA, reserve components’ administrative personnel record reservists’ enrollment in DEERS and resolve any DEERS- related problems. Once determined to be eligible for TRICARE, mobilized reservists and their dependents are able to choose among several TRICARE options. These beneficiaries may obtain health care through DOD’s direct care system of military hospitals and clinics, commonly referred to as military treatment facilities (MTF), or through DOD’s system of civilian providers. DOD uses managed care support contractors to develop networks of civilian providers to complement the care available in MTFs. Upon arriving at their final duty station, mobilized reservists must enroll in TRICARE Prime, TRICARE’s managed care option. Their dependents may enroll in TRICARE Prime. If they do not enroll in TRICARE Prime, they may receive care through TRICARE Standard, TRICARE’s fee-for-service option, or TRICARE Extra, TRICARE’s preferred provider option. While all beneficiaries may receive care on a space-available basis at MTFs, TRICARE Prime enrollees have priority for care at these facilities. Under TRICARE, the dependents of mobilized reservists do not pay premiums for their health care coverage; however, depending on the option chosen, they may be responsible for co-payments and deductibles. Table 2 provides an overview of these options. 
Most reservists have civilian health insurance, and over half of all reservists choose to maintain their civilian health insurance during mobilization. Prior to being mobilized, 80 percent of reservists had civilian health insurance—a rate which is similar to that of the U.S. population between 18 and 64 years old. Insurance coverage varies by rank and age, with officers and senior personnel more likely to have coverage than junior personnel. Reservists with dependents are also more likely to have coverage than those that do not have dependents. Reservists obtained coverage through a variety of sources, and some reservists had more than one source of coverage. Even when reservists were mobilized and eligible for TRICARE, over half opted to keep their civilian health insurance for their dependents during their most recent mobilization. As of December 2006, less than 3 percent of eligible reservists had opted to enroll in TRS. The percentage of reservists with health insurance—80 percent—is similar to that of the U.S. population between 18 and 64 years old. Insurance coverage for reservists varies by rank and age. According to the 2003 Status of Forces Survey, officers and senior-enlisted reservists were more likely to have health insurance than junior-enlisted personnel. Ninety-one percent of officers and 87 percent of senior-enlisted personnel, both of whom have an average age of over 37 years, reported having health insurance; 67 percent of junior-enlisted reservists, with an average age of 25 years, reported having health insurance. Insurance coverage for reservists also varies between those with dependents and those without dependents. For example, 87 percent of reservists with dependents reported having civilian health insurance prior to their most recent activation, while only 65 percent of reservists without dependents reported having civilian health insurance. 
Similarly, 91 percent of senior-enlisted reservists with dependents had such insurance prior to their most recent mobilization, compared with 70 percent of senior-enlisted personnel without dependents. The percentage of reservists with health insurance has remained relatively consistent over time. In prior work we reported that in 2000, nearly 80 percent of all reservists had health insurance, and 60 percent of junior-enlisted reservists had health insurance. Eighty-six percent of reservists with dependents had health insurance and 63 percent of reservists without dependents reported having insurance. Within the general population, there has been a slight decrease in the percentage of individuals with health insurance over the past 6 years: in 2000, 82 percent of the 18 to 64 year old population had health insurance, as compared with 80 percent in 2005. Reservists and their dependents obtained health insurance through a variety of sources, and some had more than one source of insurance coverage. Figure 2 shows the sources of reservists’ and their dependents’ health insurance prior to mobilization. The primary source of health insurance was civilian employers. About three-quarters of reservists and their dependents were covered by their civilian employers’ health plan, and over one-quarter were also covered by their spouses’ civilian employer’s health plan. Although reservists are required to enroll in TRICARE and their dependents become eligible for TRICARE when the reservists are mobilized, most opt to maintain their civilian insurance for their dependents during their active-duty service. According to the 2003 Status of Forces Survey, 52 percent of reservists maintained their civilian employer’s health insurance during their most recent mobilization. The 2004 Status of Forces Survey found that 85 percent of reservists reported that their civilian employer continued to pay at least a portion of their insurance premium. 
According to the survey and our interviews with DOD officials, many reservists maintained their civilian health insurance to avoid disruptions associated with changing to TRICARE and to ensure that their dependents could continue seeing their current providers, who may not accept TRICARE. On April 27, 2005, TRS became available to certain reservists returning from active duty on contingency operations. In October 2006, TRS became available to an expanded number of reservists based upon their health insurance status. As of December 2006, less than 3 percent of eligible reservists had enrolled in TRS. DOD officials reported that more than 485,000 reservists were eligible to enroll in TRS, and as of December 2006, over 11,000 reservists had enrolled themselves or their dependents in TRS. DOD officials said that one reason for the low enrollment rate may be the enrollment process, which, until passage of the NDAA for Fiscal Year 2006, required reservists to take the first step toward enrollment while they were still on active duty. To become eligible to purchase coverage in TRS, a reservist had to execute a service agreement to remain in Selected Reserve status while still serving on active duty. This usually occurred at a demobilization site. Officials told us they believe that reservists often did not take this first step in the enrollment process because they were generally more focused on returning to their families during this period than on making decisions about their health insurance. The NDAA for Fiscal Year 2006 changed this requirement so that reservists have up to 90 days from the end of their active-duty service to execute the service agreement, and the length of the agreement determines the time period of their eligibility for TRS at the 28 percent premium. 
The NDAA for Fiscal Year 2007 eliminated the service agreement requirement and, under this law, eligibility for TRS will end only upon the termination of the reservist’s service in the Selected Reserve. Finally, some DOD officials said that a lack of education about the program may also have resulted in low participation rates. The increased number of reservists being mobilized and changing TRICARE eligibility requirements for reservists have challenged DOD in its efforts to educate reservists and their dependents about TRICARE. Reservists have reported that they and their dependents are not well informed about TRICARE, with less than 20 percent saying they were well informed. The primary educational resources DOD relies on are the TRICARE briefings provided by each reserve component to mobilized reservists just prior to deployment, and those given at demobilization sites when reservists return from deployment. These briefings are supplemented by family support programs, Web sites, toll-free customer assistance numbers, and print materials. DOD officials said that education could be improved for reservists and their dependents by providing TRICARE briefings to reservists at times not associated with mobilization or demobilization, targeting TRICARE education for dependents, and improving other existing educational resources. DOD has worked to improve several of its tools for educating reservists about TRICARE, but it currently has no plans to require that the reserve components provide additional TRICARE briefings. Increased mobilizations of reservists and continuing changes to TRICARE eligibility have increased the number of reservists and dependents that DOD must educate about TRICARE. The terrorist attacks of September 11, 2001, marked the beginning of a substantial increase in the number of reservists being mobilized and therefore eligible for TRICARE. From 1996 to 2001, DOD provided TRICARE education to approximately 10,000 mobilized reservists annually. 
Since the beginning of fiscal year 2002, DOD has provided TRICARE education to about 125,000 mobilized reservists annually, according to DOD officials. Steadily expanding TRICARE eligibility for reservists has also placed new challenges on DOD to continually update its educational programs. These expansions (described in app. II) have required DOD to revise its training materials, update its Web site, and retrain benefits counseling and assistance coordinators to provide more current information to reservists and their dependents. For example, the pre-active duty benefits discussed earlier were expanded from 30 days to up to 90 days prior to the date active-duty service begins. TAMP, which provides continued TRICARE coverage to reservists separating from active duty, was extended from 60 days to 180 days. In fiscal year 2005, with the initial implementation of TRS, DOD developed new educational materials to inform reservists and their dependents of their new benefits. The NDAAs for Fiscal Years 2006 and 2007 each revised the provisions of TRS. In response to these changes, DOD updated its educational tools because the tools describing who is eligible, what premiums they pay, and when they must register changed with each revision. Reservists reported that they and their dependents are not well informed about TRICARE. TRICARE BCACs that responded to our survey in 2006 reported that the most commonly experienced problem that reservists and their dependents face when using TRICARE is a poor understanding of the program. According to DOD’s 2003 Status of Forces Survey, the last time DOD surveyed reservists about their knowledge of TRICARE, less than 20 percent of all reservists believed that they were well informed about their TRICARE benefits. 
These findings are consistent with our past work on civilian health coverage of reservists and their dependents, and they indicate that DOD has been challenged by the task of educating reservists about TRICARE since at least 2000. In past work, we found that reservists and their dependents that had dropped their private health insurance for TRICARE reported problems understanding TRICARE. We concluded that they could benefit from improved TRICARE education. Figure 3 illustrates data from DOD’s 2003 Status of Forces Survey showing reservists’ opinions of how well informed they felt about various aspects of TRICARE. Reservists’ two most frequently cited areas of confusion were knowing which doctors participated in the TRICARE provider network and which services were covered by TRICARE. Surveys indicate a lack of awareness about DOD programs designed to assist family members in learning about and using TRICARE. DOD officials said that they were interested in reaching out to reservists’ dependents because they recognize that reservists’ dependents, specifically spouses, often play a major role in the family’s understanding and use of TRICARE. However, DOD’s 2000 Survey of Reserve Component Members indicated that fewer than 50 percent of the spouses of mobilized reservists were aware of the family support programs designed to assist them in understanding and using TRICARE. The 2002 survey showed that fewer than 10 percent of spouses used these programs. DOD relies on several methods to educate reservists and their dependents about TRICARE. TRICARE briefings by each reservist’s reserve component are the primary tool DOD uses to educate reservists about TRICARE. The briefings generally occur when a reservist is mobilized and when the reservist returns from a mobilization. However, many DOD officials and TRICARE BCACs have said that this is not an ideal time for reservists to initially learn about TRICARE. 
According to DOD officials, these days of training are often so full of critical information that it is difficult for the reservist to absorb all of the details of TRICARE. These briefings also occur at a time when a reservist may have already been eligible for TRICARE for up to 90 days without realizing it. Similarly, at demobilization sites, where reservists are debriefed upon returning from theater, officials told us that many reservists are focused on returning home to their families rather than learning the details of their TRICARE benefits. In addition, briefings at mobilization and demobilization sites typically do not include reservists’ dependents. Family support programs designed to educate reservists’ dependents about TRICARE are used by most of the reserve components, including the National Guard, Air Force, Army, Navy, and the Marines. DOD officials said that these programs are important because reservists’ dependents often play a major role in understanding and using reservists’ TRICARE benefits. Family support programs are intended to increase knowledge about a variety of military benefits, including TRICARE. For example, the Air Force Reserve Command provides TRICARE information and assistance at family support offices. In order to provide the most current information to reservists and their dependents, personnel at these locations are educated regularly about new programs that affect reservists. Similarly, the National Guard Bureau has established family assistance centers that provide support for dependents of deployed soldiers in the National Guard and other reserve components, as well as assistance for demobilizing soldiers. However, reservists have reported a lack of awareness about these programs, and fewer than 10 percent of reservists’ spouses said they took advantage of these programs. 
DOD relies on other educational resources such as the TRICARE Web site, toll-free customer assistance phone numbers, the use of BCACs, and print materials sent directly to reservists and their dependents. However, most of these resources are helpful only to reservists and their dependents that actively seek TRICARE information; they do not reach out to reservists that are not already pursuing the information. In a survey administered by DOD in 2005, a third of reservists cited the TRICARE Web site as their primary source of information when they seek assistance. However, DOD officials acknowledged that the site was cumbersome, with a satisfaction rate of less than 60 percent. DOD reported in January 2006 that its TRICARE Web site contained over 538,000 pages of content and over 300 subsites. In DOD’s 2005 survey, close to 13 percent of reservists cited a preference for obtaining assistance from toll-free customer assistance numbers. However, as of December 2006, the TRICARE Web site listed at least 25 different toll-free customer assistance numbers. This does not include any toll-free numbers that each reserve component might have available. This large number of TRICARE customer service numbers confuses beneficiaries. TRICARE users ranked phone and electronic sources of information as the most difficult to use. DOD’s Communications and Customer Service Group acknowledged that such a multitude of customer assistance numbers is sometimes not helpful. Finally, less than 3 percent of reservists said that they rely on print materials such as newspapers and newsletters. Although DOD has updated some of its print materials with information about TRS, these materials are not reaching all reservists. DOD said that the reserve components’ administrative personnel update the file of reservists’ addresses in DEERS when notified by the member, but incorrect addresses remain for approximately 10 percent of reservists. 
According to DOD officials, this results in approximately 10 percent of TRICARE mailings being returned to sender as misdirected mail. Individual reserve units also provide TRICARE education to their members. This is sometimes a reservist’s primary source of information about his or her TRICARE benefits. However, DOD officials said the quality of this information can vary greatly across units and depends largely on the individuals charged with providing the information. DOD officials recognize that TRICARE education could be improved, but they currently do not plan to require that the reserve components provide additional TRICARE briefings. DOD officials have suggested that TRICARE education could be made more effective by supplementing the TRICARE briefings provided at mobilization and demobilization sites with annual briefings during training periods when reservists are not being mobilized and are therefore better able to focus on the material covered in the briefing. DOD officials said that briefings at mobilization sites are a logical time to remind reservists of their available TRICARE benefits, but this is not the best time to expose reservists to TRICARE information for the first time. However, as of July 2006, DOD had no plans to require reserve components to increase the number of TRICARE briefings they provide to reservists or change the time that they provide them. Half of the TRICARE BCACs that responded to our survey said that education should be improved. Some suggested targeting additional education to dependents of mobilized reservists. Other DOD officials agreed and said that the spouses of reservists are generally responsible for the family’s health care decisions when the member is mobilized, so dependents should therefore be a focus of DOD’s educational efforts. However, DOD officials we interviewed noted that when dependents are invited to briefings they often do not attend. 
They said that publicizing information to families could be a challenge, but suggested that reservists and their families also bear some responsibility for being aware of these programs. In November 2006, DOD launched a redesigned TRICARE Web site and TMA has plans to reduce the number and redundancy of pages on the Web site. DOD officials acknowledge that they have inaccurate addresses on file for some reservists. They continue to send reminders to reservists to keep the information in DEERS current, but they expect there will always be a number of incorrect addresses on file. A majority of reservists report that they are satisfied with their TRICARE benefits; however, some reservists have experienced difficulties when using TRICARE. According to our interviews with reservists and DOD’s most recently available data, over half of the reservists who used TRICARE were satisfied with it. Additionally, 70 percent of reservists thought that TRICARE was either equal to or better than their civilian health insurance. However, when reservists did experience problems with TRICARE, the most commonly reported difficulties were (1) a general lack of understanding about the TRICARE program, (2) establishing TRICARE eligibility, (3) obtaining TRICARE assistance, and (4) finding a health care provider. DOD officials said they believed that some of these problems stemmed from difficulties reservists encounter in establishing their eligibility in DEERS, which is done through reserve component administrative personnel. Registration in DEERS is necessary for reservists and their dependents to use TRICARE. The officials we interviewed observed that helping reservists understand their benefits, establishing reservists’ eligibility for TRICARE, and addressing specific concerns is complicated because responsibility for resolving problems is divided across organizational units. 
TRICARE is administered by TMA, but recording reservists’ eligibility in DEERS is managed by each reserve component’s administrative personnel. In our interviews with over 100 reservists, we found that over half reported that they were satisfied with their TRICARE benefits. This was also supported by DOD’s 2004 Status of Forces Survey, which showed that 70 percent of reservists thought TRICARE was either equal to or better than their civilian health insurance plans. DOD’s 2003 Status of Forces Survey showed that over 60 percent of the reservists who used TRICARE reported being satisfied with their own TRICARE benefits and with their dependents’ TRICARE benefits. Only 20 percent of reservists reported dissatisfaction with the benefits in the 2003 Status of Forces Survey. Figure 4 illustrates how specific aspects of TRICARE compared with reservists’ civilian health insurance. Some reservists and their dependents experienced difficulties when they used TRICARE. Our surveys of BCACs and interviews with reservists and DOD officials indicated that when reservists experienced difficulties using TRICARE, the most common problems included a lack of knowledge about TRICARE benefits, problems establishing TRICARE eligibility, obtaining TRICARE assistance, and finding medical providers. These findings were consistent with data from DOD’s 2003 Status of Forces Survey. Fifty-eight percent of the TRICARE BCACs that responded to our survey reported that the biggest problem reservists and their dependents face when using TRICARE is understanding the program. Many reservists and their dependents lack a basic understanding of TRICARE. According to the 2004 Status of Forces Survey, about 41 percent of reservists reported that their dependents did not use TRICARE insurance because of the complexity of TRICARE. 
Some BCACs said that reservists and their dependents continue to experience difficulties understanding the complexity of the various options, knowing which benefits are covered, understanding the referral process and authorizations required, and the changing enrollment requirements. For example, enrollment requirements change throughout the periods before, during, and after a reservist’s active-duty service. Dependents of reservists who have been ordered to active duty for a period of more than 30 consecutive days may enroll in TRICARE Prime if they wish to be covered by that option. Dependents enrolled in TRICARE Prime must then re-enroll to continue TRICARE Prime coverage during their TAMP period when the reservist returns from active duty. However, dependents using TRICARE Extra and TRICARE Standard are not required to re-enroll to receive TAMP benefits. Access to TRICARE could be impaired if reservists and their dependents fail to adhere to the changing enrollment requirements. Establishing eligibility in DEERS—the computerized database DOD uses to record TRICARE eligibility—has been problematic for many reservists and their dependents. Almost half of the BCACs that responded to our survey said that the process for establishing TRICARE eligibility in DEERS needed to be improved. DEERS stores the identity of reservists, dependents, and others who are entitled to TRICARE benefits, as well as their dates of eligibility. BCACs that we surveyed and other DOD officials said that many reservists and their dependents are incorrectly entered into DEERS when the reservists are mobilized. When reservists return from a mobilization, they are required to update their status in DEERS and to keep their dependents’ information updated as well in order to receive the benefits for which they are eligible. Reservists sometimes do not do this. 
When DEERS is not properly updated, reservists or their dependents might be denied medical care, or be charged incorrectly for medical services. According to DOD officials we interviewed, dependents of active-duty members also have problems with DEERS, but these problems are accentuated for dependents of reservists because their eligibility status can change more frequently. DOD does not collect data on how many reservists and their dependents experience problems with the information entered into the DEERS system. However, DOD officials said that they believe that some of the problems reservists face in using TRICARE, including the other problems described in this report, stem from problems in their DEERS enrollment. This problem is exacerbated by the fact that BCACs and other TMA staff are not able to resolve reservists’ problems with DEERS because each reserve component’s administrative personnel, rather than TMA, record reservists’ eligibility information in DEERS. Reservists often do not realize that they need to seek assistance with DEERS from a different office than that from which they would seek benefits assistance. For example, a reservist who was not properly registered in DEERS might seek assistance from a TRICARE BCAC, who would be unable to assist the reservist with his or her problem, rather than the administrative personnel who could assist with these problems. Almost a third of the BCACs that responded to our survey said that many reservists and their dependents experience difficulties in obtaining TRICARE assistance when problems or questions about TRICARE arise. Many reservists do not have a designated TRICARE expert within their unit and are not aware of the many resources available to assist them with their TRICARE benefits. BCACs we surveyed also reported that when reservists call for information, sometimes even unit-designated TRICARE representatives are confused by reservists’ benefits and cannot answer beneficiary questions. 
Some BCACs responsible for assisting reservists in using TRICARE do not have access to DEERS and are therefore unable to provide accurate information about TRICARE eligibility to reservists and their dependents. Over a quarter of the BCACs that responded to our survey reported that finding a medical provider is one of the problems most commonly experienced by reservists and their dependents when using TRICARE benefits. Some DOD officials we spoke with also said that reservists and their dependents experience difficulties finding medical providers that accept TRICARE. However, our other work reviewing access to care for TRICARE beneficiaries indicates that large numbers of TRICARE providers are accepting new patients, except in areas with few practicing providers overall, such as geographically remote areas. We could not determine whether reservists that experienced difficulty finding TRICARE providers lived in geographically remote areas. Changes to reservists’ TRICARE eligibility have resulted in DOD having to educate a growing number of reservists and their dependents about their eligibility requirements and benefits under TRICARE. Despite DOD’s use of a variety of tools to educate reservists about TRICARE, reservists, BCACs, and DOD officials continue to suggest that TRICARE education could be improved by providing TRICARE briefings at times other than when reservists are being mobilized or returning from mobilizations. For example, reservists have other required training periods during the year in which a discussion of TRICARE benefits could be part of the program. In addition, while reservists and their dependents become eligible for TRICARE up to 90 days before the reservists’ active-duty service begins, they might not learn of this eligibility until the TRICARE briefing they receive at the mobilization site. 
Despite this shortcoming, DOD has no plans to add additional TRICARE briefings during times other than mobilization and demobilization. We recommend that the Assistant Secretary of Defense for Health Affairs improve TRICARE education for reservists and their dependents by providing additional TRICARE briefings to reservists and their dependents. These briefings could be provided to reservists during training periods not associated with mobilizations or at the time that reservists are first informed of their impending mobilization. DOD provided written comments on a draft of this report. DOD partially concurred with our recommendation, agreeing that information about TRICARE should be provided to reservists and their family members when they are first informed of a pending mobilization of the member or any time a member is ordered to active duty or full-time National Guard duty for more than 30 days. However, DOD did not agree that providing additional briefings during periods not associated with mobilizations would be beneficial. DOD’s comments are reprinted in appendix III. DOD noted that reservists’ training time is limited and must be prioritized to maximize its value. DOD further noted the difficulty in holding the interest of an audience to describe a benefit for which they are not yet eligible. DOD stated that it has provided an abundance of information about TRICARE to reservists and their family members. As we noted earlier, DOD has revised its training materials and updated its Web site to provide more current information to reservists and their dependents. However, our surveys and interviews with BCACs and reservists indicate that these materials are not reaching all reservists, but instead reach only those that actively seek TRICARE information. Furthermore, we understand the importance for DOD to effectively use limited training time. 
However, we continue to believe that providing TRICARE briefings whenever time becomes available during reservist training periods—a time when reservists are not distracted by other concerns associated with mobilization—would be an effective way to help ensure that reservists are aware of the most current information about TRICARE. DOD also provided technical comments, which we have incorporated where appropriate. We are sending copies of this report to the Secretary of Defense, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-7119. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Another contact and staff acknowledgments are listed in appendix III. The National Defense Authorization Act (NDAA) for Fiscal Year 2004 directed that we study the health insurance coverage of reservists and their dependents, DOD’s efforts to provide assistance specifically to reservists and their dependents to facilitate their access to and use of TRICARE benefits, and reservists’ and their dependents’ experiences using TRICARE. To do this, we (1) identified the extent to which reservists have civilian health insurance, (2) examined DOD’s efforts to educate reservists and their dependents about TRICARE, and (3) described reservists’ level of satisfaction with TRICARE and the types of problems reservists and their dependents experienced when using TRICARE. To determine the extent to which reservists had civilian health insurance, we obtained data from the Department of Defense’s (DOD) 2003 and 2004 Status of Forces Surveys of Reserve Component Members and DOD’s 2000 Survey of Reserve Component Members. 
We discussed the limitations of the surveys with DOD officials and determined that the survey data were reliable for our purposes. We did not independently assess the reliability of DOD’s data. To learn about the extent of TRICARE benefits available to reservists and their dependents, we reviewed pertinent legislation, regulations, documents, reports, and information related to the TRICARE health benefits available to activated reservists and their dependents. In addition, we interviewed officials in the offices of the Assistant Secretary of Defense for Reserve Affairs, the TRICARE Management Activity (TMA), the Defense Manpower Data Center, and representatives of the seven reserve components. We also interviewed members of selected reserve military service organizations: the Enlisted Association of the National Guard of the United States; the Reserve Officers Association of the United States; and the Military Officers Association of America. Finally, we reviewed and evaluated reports from the Congressional Research Service and Congressional Budget Office as well as prior GAO reports. To examine DOD’s efforts to educate reservists and their dependents about TRICARE, we interviewed representatives from DOD’s TMA, the Office of Reserve Affairs, and each of the seven reserve components about their efforts to educate reservists about TRICARE. We also interviewed officials from outside stakeholder groups. We interviewed over 100 reservists from the Army National Guard and the Navy Reserves. We selected these two groups because they had large numbers of reservists demobilizing that we were able to interview during the course of our work. We used these interviews to validate and update information that we had gathered from the various surveys that we used as the basis of our work. We also reviewed DOD TRICARE Web sites and other materials designed to inform servicemembers and their dependents about TRICARE. 
We developed and administered a Web-based survey of benefit counseling and assistance coordinators (BCAC) who respond to problems encountered by reservists and their dependents when they use TRICARE. With the assistance of DOD officials, we identified BCACs who had direct experience providing TRICARE counseling and assistance to reservists and their dependents. We received survey responses from 226 BCACs who were currently engaged in providing TRICARE counseling and assistance. Because these 226 respondents were not selected at random from a larger population of known BCACs, the information they provided cannot be projected to any other BCACs. In addition, we reviewed our prior work on reservists and military health care. We also used DOD’s 2003 and 2004 Status of Forces Surveys of Reserve Component Members, DOD’s 2002 Survey of Spouses of Activated National Guard and Reserve Component Members, and DOD’s 2000 Survey of Reserve Component Members to provide us with information about reservists’ opinions about TRICARE. To describe reservists’ level of satisfaction with TRICARE and the types of problems reservists and their dependents experienced when using TRICARE, we interviewed DOD officials as mentioned above, and we relied on our own survey of BCACs. We used information from the interviews of reservists as described above. We also obtained and analyzed the results of the DOD’s 2003 and 2004 Status of Forces Surveys of Reserve Component Members. Finally, the NDAA for Fiscal Year 2004 mandated that we describe DOD’s options for continuing civilian health care coverage while reservists are mobilized. We did not address this part of the mandate in this report because it was addressed in our October 19, 2005 report, Defense Health Care: Health Insurance Stipend Program Expected to Cost More Than TRICARE But Could Improve Continuity of Care for Dependents of Activated Reserve Component Members (GAO-06-128R). 
We performed our work from October 2005 through December 2006 in accordance with generally accepted government auditing standards.

Appendix II: Selected Legislation Pertaining to TRICARE Eligibility for Reservists

Contained a provision that allowed nonactivated members of the Selected Reserve and the Individual Ready Reserve and their family members to enroll in TRICARE if the member was eligible for unemployment compensation or was ineligible for health care coverage from his or her civilian employer. Another provision allowed reservists who had pending active-duty orders to use TRICARE for up to 90 days before their active-duty service began. A third provision extended to 180 days the length of time during which service members, including demobilized reservists, could use TRICARE after they had been released from active duty. These provisions were set to expire on December 31, 2004.

Indefinitely extended the temporary provision passed in 2003 that allowed reservists with pending active-duty orders to use the military health care system up to 90 days before their active-duty service began. It also indefinitely extended the temporary provision that extended to 180 days the length of time during which service members could use TRICARE after they had been released from active-duty service. This legislation did not extend the provision that authorized TRICARE access for reservists who were eligible for unemployment compensation or were ineligible for health care coverage from their civilian employer. Another provision provided TRICARE Standard coverage through a new program that DOD named TRICARE Reserve Select (TRS). This was made available to reservists who had been activated for a period of more than 30 days in support of a contingency operation on or after September 11, 2001, and who agreed to continue serving in the Selected Reserves after release from active duty. 
Under this provision, reservists are eligible to purchase TRICARE coverage for themselves and their family members for up to 1 year for each 90 days of active duty served, or the number of full years for which they agreed to continue service, whichever is less. Reservists pay a monthly premium of 28 percent of the total amount determined by the Secretary of Defense on an appropriate actuarial basis as being reasonable for coverage.

Extended eligibility for TRICARE Standard to all Selected Reserve component personnel. Those reservists who meet the TRS requirements established in the NDAA for Fiscal Year 2005 will continue to pay the 28 percent premium. Those who are eligible for unemployment compensation, are self-employed, or are not eligible for insurance through an employer-sponsored plan will pay 50 percent. Those who do not qualify for the two lower premium levels, such as those who are eligible for employer-based insurance but prefer to enroll in TRICARE, will pay 85 percent.

Restructures the TRS program by eliminating the three-tiered premium structure. Establishes that reservists who are eligible for the Federal Employees Health Benefits Program are not eligible to purchase TRICARE coverage. Under this provision, members of the Selected Reserve will be eligible to purchase TRICARE coverage for themselves and their dependents at the 28 percent premium rate regardless of whether they have served on active duty in support of a contingency operation. In addition, eligibility will not depend on the length of a service agreement entered into following a period of active duty; instead, reservists will be eligible for TRS for the duration of their service in the Selected Reserve. DOD is required to implement these changes by October 1, 2007. 
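The coverage-duration and premium-tier rules described above reduce to simple arithmetic. The following Python sketch is an illustration only: the function and parameter names are ours, not DOD's, and the premium shares are the statutory percentages of an actuarially determined total that the sketch does not attempt to model.

```python
def trs_coverage_years(active_duty_days, service_agreement_years):
    """NDAA FY 2005 rule: up to 1 year of TRS coverage for each 90 days
    of active duty served, or the number of full years of continued
    service agreed to, whichever is less."""
    earned_years = active_duty_days // 90
    return min(earned_years, service_agreement_years)

def trs_premium_share(meets_2005_trs_rules, lacks_employer_coverage):
    """Three-tiered premium shares under the expanded eligibility rules."""
    if meets_2005_trs_rules:
        return 0.28  # activated in support of a contingency operation
    if lacks_employer_coverage:
        return 0.50  # unemployed, self-employed, or no employer-sponsored plan
    return 0.85      # eligible for employer coverage but prefers TRICARE

# A reservist with 270 days of active duty who agreed to 2 more years of
# service earns 3 years of coverage but is capped at 2:
print(trs_coverage_years(270, 2))  # prints 2
```

Under the later restructuring described above, the tiers collapse to the single 28 percent share, so the second function would reduce to a constant.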
DOD did not implement this provision before it expired on December 31, 2004, citing a lack of authorized funds.

In addition to the contact named above, Thomas Conahan, Assistant Director; Cathleen Hamann; Adrienne Griffin; Carolina Morgan; and Suzanne Worth made key contributions to this report.
Since 2001, the number of reservists mobilized for active duty has increased dramatically. Congress has expanded reservists' and their dependents' eligibility for TRICARE, the Department of Defense's (DOD) health insurance program. The National Defense Authorization Act (NDAA) for Fiscal Year 2004 directed GAO to examine the health insurance coverage of reservists and their dependents. This report (1) identifies the extent to which reservists have civilian health insurance, (2) examines DOD's efforts to educate reservists and their dependents about TRICARE, and (3) describes reservists' level of satisfaction with TRICARE and the types of problems reservists and their dependents experienced when using it. To do this, GAO relied on interviews with DOD officials and on DOD's survey data. GAO also administered a survey of TRICARE benefit assistance coordinators. Eighty percent of mobilized reservists have civilian health insurance--a rate similar to that of the U.S. population between 18 and 64 years old. The rate of civilian health insurance coverage varies: older reservists and reservists of higher rank are more likely to be insured than younger and more junior reservists, and reservists with dependents are more likely to be insured than those without. Reservists and their dependents obtained coverage through a variety of sources, and over half of all reservists kept their civilian health insurance during mobilizations, even though they were eligible to enroll in TRICARE. Many reservists reported that they maintained their civilian coverage to avoid disruptions associated with a change to TRICARE and to ensure that their dependents could continue seeing their current providers who might not accept TRICARE. 
Increased mobilizations of reservists and successive legislative changes that have expanded reservists' and their dependents' eligibility for TRICARE have complicated DOD's efforts to educate reservists about TRICARE. DOD's primary educational tools are the TRICARE briefings provided at mobilization sites and demobilization sites. According to DOD officials, these days of training are often so full of critical information that it is difficult for reservists to absorb all of the details of TRICARE. These briefings also occur at a time when a reservist may have already been eligible for TRICARE for up to 90 days without realizing it. These briefings are supplemented by family support programs, Web sites, toll-free customer assistance numbers, and print materials. DOD officials recognize the need to improve TRICARE education, but do not plan to provide additional TRICARE briefings for reservists and their dependents. When reservists used TRICARE, most reported that they were satisfied with TRICARE, although some reported experiencing difficulties. Over 60 percent of reservists who used TRICARE reported being satisfied. In addition, 70 percent of reservists thought TRICARE was either equal to or better than their civilian health insurance. However, according to DOD's and GAO's surveys, when reservists and their dependents did experience problems with TRICARE, the most frequently reported problems included difficulties in understanding TRICARE, establishing TRICARE eligibility, obtaining TRICARE assistance, and finding a health care provider that accepts TRICARE.
Question 1: Does DOE have statutory authority that specifically authorizes it to spend the funds appropriated to other federal agencies and use those funds for LDRD? The Strom Thurmond National Defense Authorization Act for Fiscal Year 1999 authorizes DOE to conduct R&D at DOE facilities for “other departments and agencies of the government . . .” The act requires that when DOE conducts R&D for other agencies, it impose a charge to recover its costs of conducting the work. The charge must include both direct costs that DOE incurs in carrying out the work and all associated overhead costs. When DOE assesses the charge to recover its costs, the ordering agency transfers amounts from its appropriation to DOE to pay the assessed charge. An interagency transaction, like that authorized by section 7259a, is not unlike a contractual transaction. Because of a statutory prohibition on transferring funds between two appropriations, federal agencies require specific statutory authority, like section 7259a, to engage in interagency transactions. In other words, federal agencies require statutory authority to contract with each other. Section 7259a permits other federal agencies to contract with DOE for R&D. When other agencies transfer amounts to DOE to pay the charge that DOE assesses under section 7259a, and DOE uses those amounts to defray the costs it incurred in carrying out the work for the other agency, DOE is not “spending” funds appropriated to another agency any more than a private vendor with whom the agency had contracted for services “spends” federal appropriations when it uses amounts received in payment from the federal agency to defray its costs of doing business. As in a contractual transaction, when a federal agency transfers amounts to DOE in payment of the section 7259a charge, the funds transferred become DOE funds and are available for the same purposes and uses as the other amounts in the DOE appropriation account to which they are credited. 
When DOE agrees to carry out R&D for another agency and conducts the work in one of its laboratories, DOE asks the contractor who operates its laboratory to undertake the R&D tasks. In that case, the cost to DOE of having its contractor conduct these tasks is a direct cost that DOE is required by section 7259a to include in the charge that it assesses the other agency. The other agency is not paying DOE’s contractor; in fact, the other agency has no legal relationship with DOE’s contractor. The amount DOE owes its contractor for this work is determined by the terms of the contract that DOE has with its contractor. Included in the amounts DOE pays its contractor is an amount for LDRD. The National Defense Authorization Act for Fiscal Year 1991 requires DOE to pay its laboratory contractors an amount for LDRD, not to exceed 6 percent of the amount that DOE pays to the contractor for national security activities. Consequently, DOE is not “using” funds appropriated to other federal agencies for LDRD. LDRD is a cost that DOE incurs, both statutorily and contractually, whenever the laboratory’s contractor performs work for DOE. When another agency asks DOE to conduct R&D on its behalf and DOE, in performing that work, incurs an LDRD cost, DOE, under section 7259a, properly includes that cost in calculating what it will charge the ordering agency. Just as a private vendor factors its costs of doing business into the price it charges for services rendered, DOE, under section 7259a, must factor its costs of doing business, including LDRD, into the amount it charges other agencies. That DOE might use monies properly transferred from another agency to defray the LDRD amount it owes its laboratory contractor does not mean that DOE is “using” another agency’s funds for LDRD any more than a private vendor is using a federal agency’s appropriation when it applies amounts paid by a federal agency for services rendered to defray its costs of doing business. 
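The full-cost-recovery logic of section 7259a, with the contractor's LDRD allowance folded in as one indirect cost element, can be sketched as follows. This is an illustration under stated assumptions, not DOE's actual cost accounting: the overhead rate and dollar amounts are hypothetical, and for simplicity the 6-percent ceiling is applied directly to the order rather than to the statutory base (DOE's national security payments to the contractor).

```python
def section_7259a_charge(direct_cost, overhead_rate, ldrd_rate):
    """Charge assessed on an ordering agency: the direct costs DOE incurs
    in carrying out the work plus all associated overhead, of which the
    contractor's LDRD allowance is one element."""
    assert ldrd_rate <= 0.06, "LDRD allowance is capped at 6 percent"
    indirect = direct_cost * (overhead_rate + ldrd_rate)
    return direct_cost + indirect

# A hypothetical $1 million order with a 30 percent overhead rate and the
# maximum LDRD assessment; the ordering agency transfers this amount to DOE:
print(f"${section_7259a_charge(1_000_000, 0.30, 0.06):,.0f}")  # prints $1,360,000
```

Once transferred, the amounts become DOE funds; the sketch shows only why the LDRD cost appears in the price charged, not in the other agency's appropriation.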
Question 2: Congressional appropriations laws must comply with defense and domestic firewalls in Senate budget resolutions adopted by Congress. What mechanism has DOE had in place to ensure that funds appropriated for defense purposes are used only for defense activities and that funds appropriated for domestic purposes are used only for activities in support of those domestic agencies? This question applies to both LDRD conducted with DOE funds and LDRD conducted with funds received from other federal agencies. As discussed in our response to question 1, DOE’s funds support the LDRD programs at participating DOE contractor-operated laboratories—not the appropriations of other agencies. Under the terms of the agreement, when another federal agency asks DOE to perform work on its behalf, the agency agrees to reimburse DOE for all costs that DOE incurs in performing the work. In funding and carrying out LDRD, DOE and the laboratories must comply with statutory requirements imposed on them. For example, DOE and its contractor-operated laboratories are required to comply with the National Defense Authorization Act for Fiscal Year 1998, which requires that when DOE uses its appropriation for nuclear weapons activities to pay for LDRD, the LDRD must support projects in DOE’s national security mission, and that when DOE uses its environmental restoration, waste management, or nuclear materials and facilities stabilization appropriation to pay for LDRD, the LDRD must support projects in these mission areas. In addition, the Homeland Security Act of 2002 specifically directs that when DHS orders work from DOE’s laboratories, the laboratories must use the associated LDRD funds only for purposes that benefit DHS missions. Officials at each of the laboratories we visited told us that, because LDRD promotes cutting-edge science and technology, much of the R&D conducted is basic research that, by definition, can result in applications that benefit both defense and civilian agencies. 
Thus, projects proposed with the intention of supporting a defense mission may lead to cross-cutting applications that benefit Homeland Security or other civilian agencies. Specifically, officials at DOE’s weapons laboratories cited examples in sensor research for identifying traces of radiological and biological agents that had benefited both the nuclear nonproliferation and homeland security missions. They also mentioned LDRD projects that had applications for the NIH’s cancer research programs, as well as DHS and DOE. Question 3: Which federal agencies, in addition to DOE, have a similar process whereby up to 6 percent of funds appropriated to the agency (or any other federal agency) may be diverted to purposes other than those for which the Congress appropriated the funds? NASA’s Jet Propulsion Laboratory, operated by the California Institute of Technology, is the only federal laboratory we identified that includes an assessment on the work performed for other federal agencies to support a laboratory-directed R&D program. In fiscal year 2003, the Jet Propulsion Laboratory Director’s R&D Fund received about $91,000 through an assessment of 0.025 percent on all projects over $250,000 performed for other federal agencies—primarily DOD. The Director’s R&D Fund also received $3.5 million from NASA’s research directorates, prorated on the basis of their expected R&D funding at the Jet Propulsion Laboratory. Similar to DOE’s LDRD program, the Director’s R&D Fund is designed to promote innovative science and new technology. The fund also encourages collaborative work with the California Institute of Technology, other universities, other federal laboratories, and industry. The Jet Propulsion Laboratory’s director awards funding to research projects on the basis of peer review of their scientific merits. The Air Force’s Lincoln Laboratory, operated by the Massachusetts Institute of Technology, has a Directed Defense Research and Engineering program. 
However, unlike LDRD, the Defense budget provides the Directed Defense Research and Engineering program with about $25 million annually through a direct appropriation from the Congress—Lincoln does not include an assessment in its indirect-cost rate to finance its program. Similar to DOE’s LDRD program, Lincoln Laboratory’s director awards funding to research projects on the basis of peer review of their scientific merits. The Army and the Navy also reported that their In-house Laboratory Independent Research program is fully funded by their appropriations. NRC’s Center for Nuclear Waste Regulatory Analyses, operated by the Southwest Research Institute, also has a small self-initiated research program. However, NRC’s center does not receive funding support from other federal agencies. Question 4: What mechanisms has DOE had in place to ensure that the department fully complies with all statutory and report language in appropriations bills for itself and other federal agencies when DOE spends funds on their behalf? DOE has issued a departmental order for the LDRD program and clarifying memoranda and guidance to ensure departmental compliance with statutory requirements and congressional direction in committee reports. These include the following: The National Defense Authorization Act for Fiscal Year 1991 established an annual 6-percent funding limit on LDRD. Subsequently, DOE’s Order 413.2A established departmental requirements for the LDRD program, and each laboratory establishes a fixed rate for the LDRD assessment each year that ensures compliance with the 6-percent funding limit. DOE officials told us that the department does not need to link the LDRD funding from non-DOE sources to specific LDRD projects because it treats LDRD as an indirect cost that, under cost accounting standards, must be pooled with other LDRD funds and not tracked back to a specific funding source. 
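The annual fixed-rate mechanism described above, in which each laboratory sets one LDRD assessment rate charged to all customers alike, amounts to a simple calculation. The sketch below is illustrative only: the function name and dollar figures are hypothetical, and comparing the rate itself to the 6-percent limit simplifies the statutory test, which is expressed against DOE's payments for national security activities.

```python
def set_ldrd_rate(planned_ldrd_budget, total_cost_base, limit=0.06):
    """Fix a single annual LDRD assessment rate, applied uniformly to all
    laboratory customers, that keeps the program within the funding limit."""
    rate = planned_ldrd_budget / total_cost_base
    if rate > limit:
        raise ValueError("planned LDRD budget exceeds the funding limit")
    return rate

# A hypothetical laboratory with a $2 billion cost base planning $100 million
# of LDRD would assess every customer at the same rate:
print(f"{set_ldrd_rate(100_000_000, 2_000_000_000):.1%}")  # prints 5.0%
```

Because the rate is uniform and LDRD is pooled as an indirect cost, the assessment is not tracked back to any specific funding source, consistent with the cost accounting treatment DOE officials describe.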
The DOE officials added that LDRD costs are charged to all laboratory customers at the same rate and are considered a normal cost of doing business. The National Defense Authorization Act for Fiscal Year 1998 limited the use of LDRD funds (1) originating from nuclear weapons funding to LDRD projects that support DOE’s national security mission and (2) originating from environmental restoration, waste management, or nuclear materials and facilities stabilization for LDRD projects that support these missions. DOE and laboratory LDRD managers told us that they have achieved the act’s funding requirements through (1) the identification of areas of emphasis that are likely to benefit DOE’s national security and environmental management missions in each laboratory’s annual LDRD program plan and its calls for proposals and (2) the laboratory’s LDRD manager’s and DOE site office’s review of proposals recommended for funding. The National Defense Authorization Act for Fiscal Year 1998 also required that DOE report to the Congress on the extent to which the LDRD Program has met the objective of supporting R&D with long-term application to national security. DOE’s most recent report to the Congress stated that, in fiscal year 2003, the laboratories spent about $356 million for LDRD, of which defense customers, through reimbursement to DOE, provided $243 million and nondefense customers, through reimbursement to DOE, provided $113 million. DOE concluded that about $268 million of the LDRD funding supported projects expected to benefit the defense and national security missions and about $283 million of the LDRD funding supported projects expected to benefit the nondefense mission areas. 
The Conference Report accompanying the Energy and Water Development Appropriations Act for Fiscal Year 2002 directs that (1) when accepting funds from another federal agency for work, DOE notify the agency in writing how much will be used for LDRD and (2) the Secretary of Energy affirm each year that all LDRD projects support R&D that benefits the sponsoring agencies’ programs and are consistent with their appropriations acts. On April 30, 2002, the Secretary of Energy issued a memorandum to the Under Secretary for Nuclear Security and the Under Secretary for Energy, Science and Environment that provided guidance directing that all DOE agreements to perform R&D for other federal agencies provide notice about each participating laboratory’s LDRD program, including (1) the applicable indirect-cost rate, (2) an estimate of the associated cost, and (3) an explanation of the LDRD program’s purpose. Furthermore, each agreement to perform work states that DOE will treat the agency’s approval of the agreement and provision of funds as an acknowledgment that LDRD benefits the agency and is consistent with its appropriation requirements. DOE officials told us that the DOE site office responsible for the laboratory typically sends this notification to the program manager or contracting officer at the sponsoring agency. The Homeland Security Act of 2002 requires that DHS funds are not to be expended for LDRD unless such activities support DHS missions. On February 28, 2003, the Secretary of Energy and the Secretary of Homeland Security entered into a Memorandum of Agreement that establishes a framework for DHS to access the capabilities of DOE’s national laboratories and production facilities. On April 21, 2003, DOE’s Deputy Secretary issued DOE Notice 481.1A, Reimbursable Work for Department of Homeland Security, which provided information on the process by which DHS would place orders for reimbursable work activities at the DOE laboratories. 
The DOE notice includes provisions that DOE notify DHS of LDRD charges in the cost proposals and that DHS acknowledge the benefits of LDRD prior to final approval. DHS has set up centers at each of the DOE laboratories to facilitate its access, and DOE and DHS are still formalizing their working relationship. Question 5: To what extent does the leadership of federal agencies that give funds to DOE for its laboratories to conduct R&D on their behalf fully understand that up to 6 percent of the funds may be diverted under DOE’s LDRD program to purposes that have nothing to do with the purpose for which the Congress originally appropriated the funds? Please detail the written notifications that DOE has issued in response to the requirement in the Conference Report for the Energy and Water Development Appropriations Act for Fiscal Year 2002 that DOE notify federal agencies in writing how much of their funds may be diverted to LDRD. Senior officials at each of the six federal agencies we contacted stated that their offices were aware that the DOE laboratories included a charge of up to 6 percent for LDRD in the costs that they are required to reimburse to DOE. Specifically, the senior officials in the Office of the Chief Financial Officer (CFO) and/or the Office of General Counsel at each agency told us that the LDRD program’s inclusion as an indirect cost does not limit their ability to comply with their agency’s statutory or appropriations requirements. Similarly, none of the research managers and/or contracting officers at these agencies expressed concern about the LDRD program or its funding method. In December 2003, as directed in the Conference Report accompanying the Energy and Water Development Appropriations Act for Fiscal Year 2002, DOE sent the CFOs of 22 agencies information about the LDRD program and its inclusion in the indirect costs for R&D performed at DOE laboratories. 
Specifically, DOE provided each CFO office, with the exception of DHS, with a copy of the Secretary of Energy’s April 2002 memorandum, an explanation of how the LDRD program is funded, and a description of DOE’s notification process. However, DOE did not identify a point of contact within each agency’s Office of the CFO or provide the CFO’s room number, and senior officials in the CFO’s office at Transportation and NRC told us that they did not receive DOE’s information even though they were the appropriate point of contact. These officials commented on the LDRD program after we provided them with copies of the DOE materials. Similarly, research managers and/or contracting officers responsible for funding R&D at DOE’s contractor-operated laboratories for DOD, DHS, DOT, NASA, NIH, and NRC had differing levels of knowledge about how the LDRD program functioned and how it is funded. For example, the DOD, DHS, and NASA research managers we interviewed had detailed knowledge of the LDRD program. In contrast, research managers at DOT were less familiar with the LDRD program and how it is funded. They told us that this was mainly because the department funds relatively little R&D at the DOE laboratories and the decisions to use the DOE laboratories are made by the departmental agencies. Question 6: Please identify any instances when another federal agency has refused to pay the LDRD charge assessed by the DOE laboratories on work for other agencies, as well as any instances when the DOE laboratories have voluntarily waived assessment of the LDRD charge on funds received from another federal agency. None of the officials at the six agencies we contacted cited any instances when their agencies have refused to reimburse DOE for the LDRD charge or expressed concern about the LDRD expense. In June 1998, DOE and NIH signed a Memorandum of Understanding that clarified the terms and conditions of NIH grants awarded to DOE laboratories. 
Among other things, the Memorandum of Understanding states that (1) the DOE laboratory contractor may be the awardee organization, (2) DOE will waive its 3-percent administrative overhead rate, and (3) while NIH awards will not include an allowance for LDRD, the DOE laboratories may recover LDRD costs from the total funding included in grants awarded to DOE laboratory contractors. Cognizant officials at DOE and its laboratories told us that they are not aware of any instances in which a federal customer has objected to or stated that they would not reimburse DOE for the LDRD charge. The officials also did not identify any instances in which the DOE laboratories had not charged DOE for the LDRD portion of the work done on another agency’s behalf—either voluntarily or involuntarily. Managers at each of the nine DOE laboratories told us that their policy is to use the same indirect cost rate for all R&D and other operations performed at the laboratory. Question 7: On April 30, 2002, the Secretary of Energy issued revised LDRD guidance in response to direction provided in the Conference Report for the Energy and Water Development Appropriations for Fiscal Year 2002. Subsequently, DOE’s National Nuclear Security Administration (NNSA) and Office of Science issued more detailed guidance to their respective laboratories. What is the status of implementing the changes to the LDRD approval and reporting process as outlined in this guidance? Do these new procedures constitute a firewall between LDRD using defense appropriations and LDRD using nondefense appropriations, as some in DOE have claimed? DOE has implemented changes to the LDRD approval and reporting process as outlined in the Secretary’s memorandum and the NNSA and Office of Science guidance. 
These changes include having a DOE official review and concur on all LDRD projects prior to approval by laboratory directors and requiring DOE field officials associated with each laboratory to certify annually that LDRD projects benefit the programs of the sponsoring agencies. When approving these projects, DOE does not distinguish whether the projects benefit defense or nondefense activities because, in its view, LDRD projects are new concepts that may benefit more than one area and therefore cannot be categorized in this manner. DOE officials’ role in approving proposed LDRD projects is to ensure that the projects support DOE’s national security mission. However, as stated earlier, DOE’s annual report identifies the amounts of LDRD funding it receives from defense and nondefense sponsors and the amounts of LDRD funding that support projects expected to have primary benefit to defense or nondefense mission areas. Question 8: Are the laboratories supplementing their funds for LDRD with funds designated for the Strategic Initiative? None of the nine DOE laboratories has been supplementing funding for LDRD programs with other laboratory funds, such as Idaho National Engineering and Environmental Laboratory’s (INEEL) Strategic Initiative, according to officials of DOE’s Office of Inspector General; Office of Management, Budget and Evaluation; Office of Science; NNSA; Office of Nuclear Energy, Science, and Technology; and the nine laboratories. As stated earlier, DOE’s Order 413.2A prohibits DOE’s laboratories from using LDRD funds on projects that will need additional non-LDRD funding to reach their goals. A May 2003 DOE Inspector General report cited possible misuse of INEEL’s Strategic Initiative Fund for LDRD projects. In response, DOE’s acting CFO conducted a review of the expenditures in question and determined that no funds were misused and INEEL had not exceeded its LDRD funding limit. The Inspector General accepted the CFO’s findings.
Question 9: What does DOE do to ensure, in advance, that different laboratories do not undertake duplicative LDRD projects? What does DOE do to ensure that LDRD projects are not duplicative of research in other federal agencies or in universities? DOE and its laboratories rely on the scientists, who submit proposals; members of peer review committees; and laboratory managers to ensure that LDRD projects do not duplicate research at other laboratories or universities. According to officials at the four laboratories we visited, the chances for duplication among LDRD projects are remote for several reasons. First, the NNSA laboratories (Los Alamos, Lawrence Livermore, and Sandia) coordinate their work to ensure there is no duplication. Second, peer review groups consisting of laboratory, DOE, industry, and university representatives involve themselves in project management and try to eradicate duplication or other potential wastes of resources. Third, science is a very competitive field, and scientists have strong incentives to conduct original research and publish or present the results of that research. Finally, because basic science explores fundamental principles, scientists may be looking at the same issue, for example, techniques for sensing ever smaller amounts of an element, but for different reasons or with different approaches. In addition, our September 2001 report concluded that the LDRD project-selection and review processes that are in place at the nine DOE laboratories are adequate to reasonably ensure compliance with DOE’s project-selection guidelines. Question 10: To what extent does DOE believe that the LDRD program is still a necessary tool to recruit and retain scientists? Officials at NNSA laboratories told us that LDRD remains a necessary tool to recruit and retain top scientists because their program work provides little opportunity for basic scientific research. 
Similarly, INEEL officials told us that LDRD plays a major role in attracting and retaining the most qualified scientists and engineers at their laboratory. In comparison, officials at Office of Science laboratories believe that LDRD is important for recruiting and retaining scientists; however, they noted its role is less essential for their laboratories because they primarily perform basic research. NNSA laboratory managers told us that LDRD is an essential tool for recruiting and retaining scientists for several reasons. As a recruiting tool, the LDRD program is vital because the mission of the NNSA laboratories—to perform applied research to develop nuclear weapons technologies—does not readily attract qualified new hires. The LDRD program has served as a stepping stone for the NNSA laboratories to attract and hire many scientists by supporting nearly one-half to two-thirds of the post-doctoral researchers at the laboratories. For example, one of the three LDRD program components at Los Alamos National Laboratory makes awards for research proposals specifically targeted at post-doctoral candidates. As a result, 262 (61 percent) of the 427 post-doctoral scientists charged substantial amounts of time to LDRD. According to NNSA laboratory managers, post-doctoral scientists who work at their laboratories are more likely to seek permanent employment at the laboratory, and LDRD projects provide opportunities for laboratory managers to evaluate the post-doctoral scientists for future employment. In some cases, the LDRD program also provides meaningful work opportunities at the NNSA laboratories while newly hired scientists wait to receive their security clearances. In addition, the LDRD program provides opportunities for collaboration with universities and other research organizations, thereby providing a pipeline for new employees.
As a retention tool, LDRD provides scientists with funding to perform basic and applied research on the cutting edge of their fields, improve their technical skills, and make scientific contributions in their fields. INEEL managers told us that the LDRD program funded 55 percent of the post-doctoral candidates supported by the laboratory in fiscal year 2002. The managers attributed about 40 percent of the scientists and engineers hired at INEEL in the past 4 years to investments in LDRD. Managers at the five Office of Science laboratories told us that the LDRD program is important for their efforts to recruit and retain scientists. However, they noted that the LDRD program is less important to their laboratories than it is to the NNSA laboratories, because their laboratories mainly fund basic research. According to laboratory managers, it is basic research and the opportunity for technological advances—whether performed as LDRD or as program work—that attracts and maintains the interest of the top scientists. As a result, the Office of Science laboratories typically devote, at most, slightly over 4 percent of their R&D and other operating funds to LDRD each year and have substantially smaller LDRD programs than the NNSA laboratories. Question 11: How much has each of the nine DOE laboratories spent on LDRD from fiscal year 1998 through fiscal year 2003, and which federal agencies’ funds have been used and in what amounts? For the 6 years from fiscal year 1998 through fiscal year 2003, DOE’s nine laboratories spent a total of $1.8 billion, or an average of $296 million per year, on LDRD. In fiscal year 2003, the laboratories received $7.7 billion from DOE and other federal agencies, through reimbursement to DOE, and spent $347 million, or 4.5 percent, on LDRD. Los Alamos National Laboratory, Sandia National Laboratories, and Lawrence Livermore National Laboratory accounted for $257 million, or 74 percent, of the LDRD funds.
DOE, DOD, and the intelligence agencies have been the primary sources of LDRD funding, accounting for 96 percent of the federal support in fiscal year 2003. Table 2 shows that the nine laboratories received $7.7 billion from DOE and other federal agencies for their R&D and other operating expenses in fiscal year 2003. Specifically, DOE and DOD provided $7.3 billion, or 96 percent, of the federal funding that the laboratories received. NIH, NRC, and NASA provided $190 million, or 2.5 percent, of the funding. DOT and DHS provided only $12.6 million and $9.4 million, respectively, for work at the DOE laboratories. Table 3 shows that, in fiscal year 2003, the nine DOE laboratories allocated to LDRD $347 million, or 4.5 percent, of the $7.7 billion they received from DOE and other federal sources, through reimbursement to DOE. Los Alamos National Laboratory, Sandia National Laboratories, and Lawrence Livermore National Laboratory accounted for $257 million, or 74 percent, of the $347 million. DOE’s appropriations accounted for $293 million, or 84 percent, of the LDRD funding from federal sources, while $54 million, or 16 percent, originated from other federal agencies, through reimbursement to DOE. DOD and the intelligence agencies accounted for $41 million, or 12 percent. NIH, NRC, and NASA together accounted for $7.5 million, or 2 percent. Appendix I provides data on each laboratory’s total R&D spending and LDRD spending for DOE and other federal agencies, through reimbursement to DOE, for fiscal years 1998 through 2001, and appendix II provides more detailed data on each laboratory’s total R&D spending and LDRD spending by subagency for fiscal years 2002 and 2003. The funding amounts for prior fiscal years are presented in fiscal year 2003 dollars. We provided DOE with a draft of this report for its review and comment. In written comments, DOE agreed with the report. (See app. III.)
DOE also provided comments to improve the report’s technical accuracy, which we incorporated as appropriate. To assess DOE’s statutory authority for charging other federal agencies for LDRD, we researched and analyzed statutes and legislative histories and referred to principles of appropriations law. To identify laboratory-initiated research programs similar to LDRD at other federal agencies’ laboratories, we interviewed cognizant officials within DOD, DHS, DOT, NASA, NIH, and NRC. Through their payments to DOE, these federal agencies were among the primary sources of LDRD funding generated from R&D performed for non-DOE agencies from fiscal years 1998 through 2003. To examine DOE’s policies and procedures for ensuring that its laboratories spend LDRD funds in ways that benefit the requesting agencies’ programs and are consistent with their appropriation acts, we evaluated DOE’s implementing order and related documents for the LDRD program, interviewed cognizant officials at DOE, and obtained information from its nine contractor-operated laboratories regarding the actions they have taken to improve the program’s accountability. In addition, we contacted cognizant officials in the Office of the CFO and/or the Office of General Counsel in DOD, DOT, NASA, NIH, and NRC to determine whether the funding structure of the LDRD program presented issues for their compliance with statutory or appropriations requirements. These five agencies, through their reimbursements to DOE, were among the primary sources of LDRD funding at the nine DOE laboratories from fiscal years 1998 through 2003. We also contacted cognizant officials in the Office of the CFO and the Science and Technology Directorate in DHS because of its special relationship with DOE’s laboratories.
To assess whether the LDRD program is a necessary tool for recruiting and retaining laboratory scientists, we obtained information from cognizant officials at each of DOE’s nine laboratories about the role that LDRD plays in recruiting and retaining scientists and obtained supporting documentation. We also reviewed laboratories’ information on the participation of post-doctoral scientists and others in LDRD research. To provide data on the sources and amounts of LDRD funding, we obtained data from each laboratory on its operating and LDRD funds for fiscal year 1998 through fiscal year 2003. Specifically, the laboratories provided financial data for each of DOE’s major program budgets and for each federal agency that, in a given year, funded more than $1 million in R&D through DOE’s Work for Others program. Because the laboratories’ prior fiscal year data were in nominal dollars, we converted their current dollars to constant fiscal year 2003 dollars using nondefense deflators from the Office of Management and Budget’s Budget of the United States Government, Fiscal Year 2005, Historical Tables. We also obtained from key database officials responses to a series of questions focused on data reliability, covering issues such as data entry access, quality control procedures, and the accuracy and completeness of the data. We asked follow-up questions whenever necessary. In addition, we reviewed all data provided by the laboratories, investigated all instances where we had questions regarding issues such as categories or amounts, and made corrections as needed. Based on this work, we determined that the financial data provided were sufficiently reliable for the purposes of our report. We did not assess the reliability of the fiscal year 1992 LDRD funding total, which was used for background purposes only. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter.
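The constant-dollar conversion described in the methodology above amounts to scaling each nominal amount by the ratio of the fiscal year 2003 deflator to that year's deflator. A minimal sketch follows; the deflator values below are illustrative placeholders, not OMB's published nondefense deflators.

```python
# Convert nominal (current-year) dollars to constant FY 2003 dollars
# using a price deflator ratio, as described in the methodology.
# The deflator values are illustrative placeholders, not the actual
# OMB nondefense deflators from the Historical Tables.
DEFLATORS = {1998: 0.902, 1999: 0.916, 2000: 0.934,
             2001: 0.955, 2002: 0.977, 2003: 1.000}

def to_fy2003_dollars(nominal, fiscal_year):
    """Scale a nominal amount to constant FY 2003 dollars."""
    return nominal * (DEFLATORS[2003] / DEFLATORS[fiscal_year])

# A nominal $100 million in FY 1998, expressed in FY 2003 dollars,
# comes out larger because prices rose between 1998 and 2003:
constant = to_fy2003_dollars(100_000_000, 1998)
```

With real deflators, the same two-line function would reproduce the constant-dollar figures presented in the appendixes.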
At that time, we will send copies to the Secretary of Energy, the Director of the Office of Management and Budget, and other interested parties. We will also make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841. Key contributors to this report were Richard Cheston, Carol Kolarik, Daren Sweeney, Doreen Feldman, and Hannah Laufe.
The Department of Energy's (DOE) contractor-operated laboratories perform mission-related research and development (R&D) for DOE and other federal agencies. In 1992, DOE established the Laboratory-Directed Research and Development (LDRD) program, under which laboratory directors may allocate funding to scientists to conduct worthy independent research. DOE allows participating laboratories to support their LDRD programs by including a charge of up to 6 percent of the total project cost in the indirect costs for R&D performed for DOE and other federal agencies. GAO was asked to address 11 specific questions on DOE's LDRD program regarding: DOE's statutory authority for charging other federal agencies for LDRD, DOE's policies and procedures for ensuring departmental compliance with statutory requirements and committee report direction, the extent to which DOE believes the LDRD program is a necessary tool for recruiting and retaining laboratory scientists, and the sources and amounts of LDRD funding that each laboratory received from fiscal year 1998 through fiscal year 2003. In commenting on the draft report, DOE agreed with its factual accuracy. By law, when DOE conducts R&D for other federal agencies and uses a laboratory contractor to carry out the tasks, DOE must recover from the other agency all costs, including LDRD, that DOE owes its contractor in performing the work. DOE has issued a departmental order and clarifying memoranda and guidance to ensure LDRD program compliance with statutory requirements and congressional direction. For example, the Secretary of Energy's April 2002 guidance requires that agencies funding work at its laboratories be notified about the LDRD program, including the laboratory's indirect-cost rate and an estimate of the associated cost.
According to senior budget, legal, and research program officials at six federal agencies that fund work at the DOE laboratories, inclusion of funding for the LDRD program as an indirect cost does not limit their agency's ability to comply with statutory or appropriations requirements. Managers at the four DOE laboratories that primarily conduct nuclear weapons and environmental management R&D told us that LDRD is vital for recruiting and retaining top scientists, while managers at the five Office of Science laboratories said that LDRD plays an important, but less vital, role in recruiting and retaining top scientists. From fiscal year 1998 through fiscal year 2003, DOE's contractor-operated laboratories spent a total of $1.8 billion, or an average of $296 million per year, on LDRD. DOE accounted for 84 percent and the Department of Defense and the intelligence agencies, through their payments to DOE, accounted for 12 percent of the federal support for the LDRD program in fiscal year 2003.
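As a back-of-the-envelope illustration of the reimbursement mechanism summarized above, the sketch below folds a flat LDRD charge into a project's indirect costs. The dollar amounts and the 30-percent indirect rate are hypothetical, and the assumption that the LDRD charge applies to direct plus other indirect costs is ours; only the 6-percent ceiling comes from the program as described in this report.

```python
def reimbursement_with_ldrd(direct_cost, other_indirect_rate, ldrd_rate=0.06):
    """Estimate what a sponsoring agency reimburses DOE when the
    laboratory folds a flat LDRD charge into its indirect costs.

    The 6-percent ceiling reflects the cap described in this report;
    applying the charge to direct plus other indirect costs is an
    illustrative assumption, and all dollar figures are hypothetical.
    """
    if not 0 <= ldrd_rate <= 0.06:
        raise ValueError("LDRD charge may not exceed 6 percent")
    other_indirect = direct_cost * other_indirect_rate
    ldrd_charge = (direct_cost + other_indirect) * ldrd_rate
    return direct_cost + other_indirect + ldrd_charge

# A hypothetical $1 million project with a 30-percent indirect rate:
# $1,000,000 direct + $300,000 indirect + 6% LDRD, about $1,378,000 total.
total = reimbursement_with_ldrd(1_000_000, 0.30)
```

Because the laboratories apply the same indirect cost rate to all work, the sponsoring agency's share of LDRD scales directly with the size of its funded project.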
DOD’s primary military medical mission is to maintain the health of 1.7 million active duty service personnel and to be prepared to deliver health care during times of war. Also, as an employer, DOD offers health care services to 6.6 million non-active duty beneficiaries such as dependents of active duty personnel and military retirees. The bulk of the health care is provided at more than 600 military hospitals and clinics worldwide, which are operated by the Army, Navy, and Air Force. DOD’s direct health care system is supplemented by a DOD-administered insurance-like program called the Civilian Health and Medical Program of the Uniformed Services (CHAMPUS). In fiscal year 1996, DOD expects to spend about $11.8 billion providing care directly to its beneficiaries and about $3.6 billion for CHAMPUS. In response to such challenges as increasing health care costs and uneven access to care, in the late 1980s DOD initiated, under congressional authority, a series of demonstration programs to evaluate alternative health care delivery approaches. In the National Defense Authorization Act for Fiscal Year 1994 (P.L. 103-160), the Congress directed DOD to prescribe and implement, to the maximum extent practicable, a nationwide managed health care benefit program modeled on health maintenance organization (HMO) plans. The Congress specifically required that this new program could not incur costs greater than DOD would incur in the program’s absence and that beneficiaries enrolling in the managed care program would have reduced out-of-pocket costs. Drawing from its experience with the demonstration projects, DOD designed TRICARE as its managed health care program. TRICARE is designed to give beneficiaries a choice among TRICARE Prime, which is similar to an HMO; TRICARE Extra, which is similar to a preferred provider organization; and TRICARE Standard, which is the current CHAMPUS fee-for-service-type benefit. 
Beneficiaries who select TRICARE Prime must enroll to receive care under this option. The program uses regional managed care support contracts to augment the capabilities of military hospitals by having contractors perform some managed care functions as well as arrange for care in the civilian sector. There will be seven managed care support contracts covering the 12 TRICARE regions. To coordinate the services and the contractors and monitor health care delivery, each region is headed by a joint-service administrative organization called a lead agent. DOD has estimated that the managed care support contracts will cost about $17 billion over the 5-year contract period. DOD has awarded four contracts and plans to have all contracts awarded and the TRICARE program fully implemented by September 1997. Background on the TRICARE program is in appendix I. The Northwest Region was the first region to begin enrolling beneficiaries in March 1995. Three regions, the Golden Gate Region, the Hawaii-Pacific Region, and Region Nine, began enrolling beneficiaries in October 1995, followed by the Southwest Region in November 1995. While the contract has been awarded for the Southeast and Gulf South Regions, they are not scheduled to begin health care delivery under TRICARE until July 1996. Figure 1 shows the DOD regions covered by the seven managed care support contracts. The shaded areas are the regions where TRICARE has been implemented in various stages as of March 1996. DOD has experienced difficulties in awarding its managed care support contracts. Each of the contracts awarded thus far has been protested. The protest of the first contract, encompassing the Golden Gate Region, Hawaii-Pacific Region, and Region Nine, was sustained, and DOD was required to recompete the contract. The protests for the Northwest Region’s and Southwest Region’s contracts and the contract including both the Southeast and Gulf South Regions were denied. 
Last year, in response to congressional concerns about DOD’s difficulties with an early contract award covering California and Hawaii for which GAO sustained a protest, we reviewed problems identified by the bid protest experience. We reported that while DOD had taken steps to improve future contract awards, several areas of concern remained. Among our recommendations—which DOD agreed to adopt—were that DOD consider the potential effects on competition of such large TRICARE contracts and weigh alternative award approaches to help ensure competition during the next procurement round. We also urged that DOD try to simplify the next round’s solicitation requirements and seek to incorporate best-practice, managed care techniques in the contracts. We further recommended that DOD establish general qualification requirements for its board members who evaluate contractors’ proposals. We plan to follow up on these issues and begin a study of how well DOD’s contractors are performing under the current contracts. Despite unanticipated obstacles, DOD’s early implementation of TRICARE is progressing in line with DOD expectations. DOD has enrolled large numbers of beneficiaries in TRICARE Prime, including many of the active duty dependents DOD particularly wants to enroll. It has also succeeded in encouraging TRICARE Prime enrollees to select military health care providers—the source of care that DOD believes is more cost-effective than civilian-provided care. In addition, DOD is addressing implementation problems that early on caused confusion for beneficiaries and difficulties for military health care managers. As DOD intended through its marketing efforts, many beneficiaries have enrolled in TRICARE Prime, particularly the target population of active duty dependents that tends to rely heavily on the DOD health care system.
As of January 31—after almost 12 months of operation in the Northwest Region and fewer than 4 months in four other regions—more than 400,000 people had enrolled in TRICARE Prime. In the Northwest Region, about two-thirds of active duty dependents have chosen this option, as shown in figure 2. Also, in those regions under way, the bulk of beneficiaries choosing TRICARE Prime have enrolled with military, rather than civilian, health care providers. This enhances DOD’s goal of fully utilizing its military medical facilities and providing care in the less expensive military setting. Figure 3 shows that in the Northwest Region, over two-thirds of the beneficiaries have chosen to enroll with a military health care provider. During the period from the contract award through the start of health care delivery, DOD encountered and addressed various start-up problems. A delay in the TRICARE benefits package and higher than expected early enrollment together led to initial beneficiary confusion. Also, computer system problems have hindered DOD’s ability to manage the enrollment process. One early setback was the delay in the approval of the TRICARE benefits package, which details the beneficiaries’ fees and copayments for health care services. DOD did not approve the benefits package until just 2 months before the Northwest Region began enrolling beneficiaries. Military facilities had already begun their marketing and education efforts with the proposed benefits; however, the approved benefits package changed the enrollment fees. Because of this, people became confused, and DOD and the contractor had to explain the changes. This confusion did not occur in other regions, because the TRICARE benefits package was in place before marketing and education began. Despite the benefits package delay, the Northwest Region had more people wanting to enroll than it anticipated. 
Although the contractor had projected that 28,000 beneficiaries would enroll during the first year, approximately 58,000 beneficiaries enrolled during the first 4 months. The contractor responsible for managing the enrollment process was understaffed and had to hire temporary employees. The temporary employees were not adequately trained and could not sufficiently address beneficiaries’ questions about TRICARE, which further confused beneficiaries. Later, DOD and the Northwest Region shared their experiences through an extensive lessons-learned effort with other regions. Thus, the Southwest Region contractor hired temporary employees and trained them with its regular employees before enrollment began. Although the Southwest Region also experienced higher enrollment than anticipated, DOD and the contractor avoided much of the beneficiary confusion that the Northwest Region experienced. During the enrollment process, DOD has also encountered problems stemming from the inability of its medical information system to interact with the contractors’ systems. Because of their configurations, the systems cannot communicate, meaning that data cannot be transferred from one system to another. As a result, according to lead agent officials, DOD does not have a complete database of all beneficiaries enrolled in TRICARE Prime, and regional officials must rely on the contractor to provide enrollment data. However, DOD is addressing the problem by having the Northwest contractor provide special reports from its system and, in the Southwest Region, having the contractor put beneficiary enrollment data in both the DOD and contractor systems. DOD plans to address this problem by amending the contracts to require contractors’ medical information systems to exchange information with DOD’s system. The degree to which cost savings can be achieved under TRICARE remains uncertain and depends on DOD’s ability to operate the system as it is designed to work. 
Issues have emerged during early implementation that may hinder DOD’s efforts to contain costs. TRICARE depends on managed care to achieve maximum efficiency of its military facilities and control rising health care costs by using techniques such as sharing resources with the support contractor and managing beneficiaries’ use of health care services. DOD has estimated that resource sharing could save $810 million over 5 years, but DOD and contractor officials responsible for entering into specific resource-sharing agreements have told us they do not fully understand the potential cost implications of such agreements. This lack of understanding continues to impede implementation of resource sharing under TRICARE, and the effectiveness of the program remains uncertain. Resource sharing is a feature of the TRICARE contracts that allows the contractor, through agreements with DOD, to provide personnel, equipment, and/or supplies to a military facility to improve its capability to provide care. DOD officials believe that providing health care to military beneficiaries in military facilities is less expensive than comparable care in the civilian sector, so maximizing the use of military facilities results in savings to both DOD and the contractor. For example, the contractor might provide an anesthesiologist to a military hospital so that more surgeries could be performed there rather than at a more costly private facility at DOD expense, thereby reducing overall costs. Similarly, contractor costs for the service provided are reduced by using the military facility and supporting resources. Evaluating the cost-effectiveness of resource-sharing agreements is very difficult and complex. Each agreement must be analyzed to determine whether the savings from providing care in the military facility offset increased facility costs under the agreement, such as the cost of supplies, staff, or support services that would not have been used if the agreement had not been established. 
Also, the extent of resource-sharing savings will be a factor in future regional contract price adjustments, which further adds to the complexity of these agreements. DOD has given regional officials, military facility commanders, and contractors a financial analysis worksheet to help determine the cost-effectiveness of the agreements. DOD has also provided some training sessions in the regions. Despite these efforts, DOD and contractor officials remain confused about making appropriate decisions regarding the financial implications of these agreements. According to lead agent officials, they are uncertain about how individual agreements may affect future contract price adjustments. Because of this, some regions have been slow to enter into agreements, and the anticipated savings may not be achieved. DOD officials told us that they recognize this deficiency and plan to address it. They said that DOD is currently developing a formal training program for resource sharing and that they also plan to provide military treatment facility commanders with a new computer-based analytical tool to enable them to determine the potential effects of resource-sharing agreements. There is, however, a more direct, less confusing means to accomplish contractor support of direct care in military facilities. Using a different program called task order resource support, military facility commanders can contract separately with the managed care support contractor for particular resources to augment their direct care capabilities. DOD officials told us that, in the past, very little resource support has taken place because hospital commanders did not have the level of control over CHAMPUS funds they needed to enter into these agreements. Now, however, DOD has proposed an alternative financing mechanism for the managed care support contracts. 
If adopted, this financing method would give facility commanders more control of CHAMPUS funds along with their direct care funds and, therefore, more flexibility to enter into resource support agreements. With this flexibility, DOD managers would be able to directly buy the services they need to avoid sending some patients out of their hospitals for needed care. This may have the effect of reducing the need to negotiate the more complex resource-sharing agreements while still making the most of contractor support of military facility capabilities. DOD’s alternative financing approach is still being developed, however, so its eventual impact on contractor support of military direct care capabilities is still unclear. DOD estimated that utilization management in its facilities could save over $480 million nationwide over 5 years. However, DOD and the contractor were not ready to perform this function at the start of health care delivery in the Northwest and Southwest Regions as planned. Therefore, the full extent of TRICARE savings from utilization management may not be realized. Utilization management is intended to ensure that beneficiaries receive necessary and appropriate care in the most cost-effective manner. For example, utilization management reviews would verify that hospital admissions are medically necessary before patients check in or that lengths of hospital stays are not excessive. Utilization management also includes case management, which involves assigning health care providers to manage care for patients with high-cost, chronic conditions (such as diabetes or asthma) to try to avoid costly and disruptive crises that lead to emergency room visits or unscheduled hospital admissions. Utilization management can be done internally by the military facilities, or the contract can be written so that the contractor is required to perform this function. 
In the Southwest Region, where the contractor is responsible for utilization management, regional officials have expressed dissatisfaction with the contractor’s performance of utilization management activities and have withheld partial contract payments until the contractor’s performance improves. Because the contractor has hired additional utilization management staff, both DOD and the contractor believe the situation will be resolved soon. The Northwest Region’s utilization management program, which is handled by the military, was not implemented for over 5 months, but it is now under way.

Because of TRICARE’s newness, size, and complexity, appropriate and effective information management has become increasingly important. During early TRICARE implementation, DOD did not define performance measures to evaluate how well it was meeting its goals, but DOD is now defining such measures at the national and regional levels. However, some data needed to evaluate the program are not being captured. Before TRICARE’s implementation, DOD had not defined performance measures needed to monitor and evaluate all major aspects of health care delivery at both the regional and national levels. During implementation, regional officials quickly recognized the importance of having such measures for evaluating achievement of regional and national TRICARE goals and for providing a good information base for management decisions. Thus, the regions have begun creating their own sets of measures to assess the efficiency and effectiveness of the delivery of health care services in the region. These measures will be used in an ongoing evaluation of customer services, including patient satisfaction, and clinical services, including inpatient and outpatient care, disease prevention and health screening, disease management, enrollee health, and population health management.
DOD is separately developing a set of performance measures to be used at the headquarters level to monitor various aspects of health care delivery across the regions, such as TRICARE Prime enrollment and preventable admissions. DOD officials said the identification of performance measures will be a continuing effort for all health care stakeholders as DOD’s needs change throughout TRICARE implementation. However, the appropriateness and effectiveness of these performance measures remain to be seen.

Currently, neither DOD nor the contractors are tracking access data to ensure that they are meeting DOD’s standards for access to primary care services. However, these data are needed to enable the Congress and DOD to measure TRICARE’s performance against this key system goal. Access to care relates to a patient’s ability to get the appropriate level of health care in a timely manner. Timely access to military health care has long been a major source of beneficiary dissatisfaction. To improve performance in this area, DOD established primary care access standards in its 1994 TRICARE Policy Guidelines. These standards apply to both military and civilian providers and address areas such as wait times for appointments and the availability of emergency services. The following are DOD’s current access standards for maximum appointment wait times:
- 4 weeks for a well visit, which is nonurgent care for health maintenance;
- 1 week for a routine visit, which is nonurgent care requiring a health care provider; and
- 1 day for acute illness care, which is urgent care requiring a health care provider.

DOD collects some access data through an annual beneficiary satisfaction survey. The DOD survey contains 25 questions that look at how easily beneficiaries entered the health care system and whether they received the care they believed was necessary.
Types of questions include where care was received, types of preventive services received, the number of calls made for an appointment, usual length of time between scheduling the appointment and seeing a provider, usual length of wait in the provider’s office, approximate travel time from residence to provider’s office, and beneficiaries’ general level of satisfaction with access to care. Although important, these survey data are based on beneficiaries’ perceptions generalized over a 12-month period and do not measure DOD’s actual performance against its newly established standards. DOD could collect the access data needed to measure its performance at the time beneficiaries schedule their primary care appointments. According to lead agent and Health Affairs officials, DOD is not currently doing so because its patient appointment and scheduling system, as configured, does not capture this information. DOD officials told us that the needed access data could likely be gathered by modifying this system to capture precise waiting time information while still complementing these empirical data with the annual survey data.

DOD also is not collecting the enrollment data needed to identify eligible beneficiaries who enroll in TRICARE but have not previously been users of the military health care system. Identifying beneficiaries attracted to military care by the TRICARE program is crucial to DOD’s ability to contain health care costs because, as the Congressional Budget Office estimates, this population accounts for about 25 percent of DOD’s 8.3 million beneficiaries. Each of these current nonusers who chooses TRICARE Prime adds to the overall cost of military health care.
Although DOD believes that the impact of such enrollment will be lessened because of the annual enrollment fee and through targeted marketing to current system users, DOD officials told us that TRICARE Prime’s generous benefits will entice some nonusers to enroll, and that data on such enrollment are needed. However, DOD has not yet developed a definition that will enable it to identify these enrollees. DOD officials at both the national and regional levels told us that defining the various types of former nonusers, though necessary, is difficult because beneficiaries rely on the military health care system in varying degrees. For example, some beneficiaries have other health insurance but continue to use the military pharmacies. Also, some beneficiaries may begin to use military health care for reasons other than the TRICARE reforms, such as the loss of other health insurance. Once DOD has a working definition of this population of former nonusers, it can seek to ensure that appropriate data are being captured to identify these beneficiaries. DOD officials told us that the collection of such data should be done through a set of questions consistently administered to enrollees across the regions. By gathering this information, DOD could better evaluate the impact of this enrollment on TRICARE’s costs. Ultimately, DOD needs these data to reassess TRICARE’s cost-sharing structure as it works to contain overall health care costs while maintaining fees for beneficiaries that are neither too high nor too low.

Despite initial beneficiary confusion caused by marketing and education problems, as well as problems with computer systems’ compatibility, early implementation of TRICARE is progressing consistent with congressional and DOD goals. However, the success of DOD’s current efforts to address the implementation of resource-sharing agreements and utilization management is critical to containing health care costs.
DOD also needs to gather certain enrollment and performance data so that it and the Congress can assess TRICARE’s success in the future.

We recommend that the Secretary of Defense direct the Assistant Secretary of Defense for Health Affairs to
- collect data on the timeliness of appointments in order to measure TRICARE’s performance in improving beneficiary access against DOD’s standards and
- assess the impact of new beneficiaries who would not be using military health care if not for TRICARE, by defining these new users, identifying them, and estimating the cost implications of their use of military health care.

In a letter dated May 15, 1996, commenting on a draft of this report, the Director of TRICARE Operations Policy wrote that DOD fully agreed with the report and with both of our recommendations. Regarding our recommendation concerning DOD’s need to collect data on the timeliness of appointments, the Director said that DOD already identifies the time between when an appointment is made and the actual appointment. However, in order to gather access data more precisely and completely, DOD plans to make computer system modifications during fiscal year 1997. The Director also wrote that DOD strongly believes that access data should continue to be collected through surveys of beneficiaries. As stated in the report, we agree that both types of access-to-care information are important. We believe that DOD’s plans for collecting access data, if implemented properly, should be sufficient to measure TRICARE’s success against DOD’s standards. Regarding our recommendation that DOD assess the cost implications of TRICARE enrollment by beneficiaries who would not otherwise be using military health care, the Director commented that DOD has taken several steps to minimize such enrollments, including designing TRICARE’s cost-sharing structure and targeting marketing to current military medical system users.
While we agree that cost sharing and enrollment targeting will deter some from enrolling in TRICARE, the program is still attractive to beneficiaries who would not otherwise be using military health care. The Director also said that DOD is enhancing a computer information system that will allow it to track the extent to which enrollees have other health insurance, which, in concert with the beneficiary survey data, should help DOD assess the impact of beneficiaries who would not be using military health care if not for TRICARE. DOD officials also suggested several technical changes to the report that we incorporated as appropriate.

We are sending copies of this report to the Secretary of Defense and will make copies available to others upon request. Please contact me at (202) 512-7111 or Dan Brier, Assistant Director, at (202) 512-6803 if you or your staff have any questions concerning this report. Other major contributors are Allan Richardson, Evaluator-in-Charge, Bonnie Anderson, Sylvia Jones, and David Lewis.

TRICARE is intended to ensure a high-quality, consistent health care benefit, preserve choice of health care providers for beneficiaries, improve access to care, and contain health care costs. TRICARE features a triple-option benefit. The first option, TRICARE Standard, mirrors the current fee-for-service CHAMPUS program. The second option is TRICARE Extra, a preferred provider option through which beneficiaries receive a 5-percent discount on the Standard option when they choose among a specified network of providers. The third option, TRICARE Prime, represents the greatest change to defense health care delivery. TRICARE Prime is an HMO alternative and is the only option that requires beneficiaries to enroll. To implement and administer the TRICARE program, DOD has reorganized the military health care system into 12 new, joint-service regions.
DOD created the position of lead agent for each region to coordinate among the three services and the contractor and to monitor the delivery of health care. The lead agent is a designated military medical facility commander supported by a joint-service staff. Table I.1 presents information on the 12 TRICARE regions, including the designated lead agents, the states included in the regional boundaries, and the number of military medical facilities in each region.

TRICARE uses contracted civilian health care providers to supplement the care provided by the defense health care system on a regional basis—a significant feature maintained from earlier demonstration programs. The managed care support contractors’ responsibilities include developing networks of civilian providers, locating providers for beneficiaries, performing utilization management functions, processing claims, and providing beneficiary support functions. Seven contracts will be awarded to civilian health care companies covering the 12 TRICARE health care regions. Table I.2 describes the status of contract awards and start dates for health care delivery. Between the contract award date and the health care delivery start date is a 6- to 8-month transition period for both DOD and the contractor. During this time, the contractor performs tasks such as the establishment of provider networks and beneficiary support functions. Both the contractor and DOD begin some early marketing and education of beneficiaries and providers. Enrollment of all eligible non-active duty beneficiaries begins either during the transition phase or at the start of health care delivery.
Pursuant to a congressional request, GAO reviewed the Department of Defense's (DOD) implementation of its TRICARE managed health care program, focusing on: (1) whether early implementation produced the expected results; (2) how early outcomes may affect costs; and (3) whether DOD is capturing data needed to manage and assess TRICARE performance. GAO found that: (1) early implementation of TRICARE has resulted in large numbers of beneficiaries enrolling in TRICARE Prime, which DOD believes is cost-effective; (2) DOD has encountered many start-up problems, such as a delay in the TRICARE benefits package, higher than expected early enrollment, and computer systems' incompatibility; (3) DOD and TRICARE contractors have diligently addressed their start-up problems and have disseminated lessons learned from those problems; (4) DOD efforts to contain TRICARE costs may be hindered by uncertainties regarding resource-sharing arrangements and utilization management problems; (5) DOD is exploring the use of task order resource support as an alternative to resource sharing arrangements and giving hospital commanders more control over dependent-care funds to give military hospitals more flexibility in obtaining support services from TRICARE contractors; (6) DOD delayed implementing utilization management because it was not ready to perform this function in the Northwest and Southwest Regions as planned; and (7) although DOD is defining TRICARE performance measures, it is not collecting key data on beneficiaries' access to care or the enrollment of former nonusers who are eligible to use the military health care system.
The plaintiffs in the Olmstead case were two women with developmental disabilities and mental illness who claimed that Georgia was violating title II of the ADA, which prohibits discrimination against people with disabilities in the provision of public services. Both women were being treated as inpatients in a state psychiatric hospital. The women and their treating physicians agreed that a community-based setting would be appropriate for their needs. The Supreme Court held that it was discriminatory for the plaintiffs to remain institutionalized when a qualified state professional had approved community placement, the women were not opposed to such a placement, and the state could reasonably accommodate the placement, taking into account its resources and the needs of other state residents with mental disabilities. The Olmstead decision is an interpretation of public entities’ obligations under title II of the ADA. As one of several federal civil rights statutes, the ADA provides broad nondiscrimination protection for individuals with disabilities in employment, public services, public accommodations, transportation, and telecommunications. Specifically, title II of the ADA applies to public services furnished by governmental agencies and provides in part that “no qualified individual with a disability shall, by reason of such disability, be excluded from participation in or be denied the benefits of the services, programs, or activities of a public entity, or be subjected to discrimination by any such entity.” Two ADA implementing regulations were key in the Supreme Court’s ruling in Olmstead. 
The first requires that public entities make “reasonable modifications” when necessary to avoid discrimination on the basis of disability, unless the entity can demonstrate that the modification would “fundamentally alter the nature of the service, program or activity.” The second requires public entities to provide services in “the most integrated setting appropriate to the needs of qualified individuals with disabilities.” That setting could be in the community, such as a person’s home, or in an institution, depending on the needs of the individual. For example, professionals might agree that a nursing home is the most integrated setting appropriate for an institutionalized person’s needs. In Olmstead, physicians at the state hospital had determined that services in a community-based setting were appropriate for the plaintiffs. The Supreme Court recognized, however, that the appropriate setting for services is determined on a case-by-case basis and that the state must continue to provide a range of services for people with different types of disabilities. The ADA has a broad scope in that it applies to individuals of all disabilities and ages. The definition of disability under the ADA is a physical or mental impairment that is serious enough to limit a major life activity, such as caring for oneself, walking, seeing, hearing, speaking, breathing, working, performing manual tasks, or learning. The breadth of this definition thus covers people with very diverse disabilities and needs for assistance. For some individuals with disabilities, assistance from another person is necessary—direct, “hands-on” assistance or supervision to ensure that everyday activities are performed in a safe, consistent, and appropriate manner. For others, special equipment or training may enable them to continue to function independently.
Disability may be present from an early age, as is the case for individuals with mental retardation or developmental disabilities; occur as the result of a disease or traumatic injury; or manifest itself as a part of a natural aging process. Moreover, the assistance needed depends on the type of disability. For example, individuals with physical disabilities often require significant help with daily activities of self-care. In contrast, individuals with Alzheimer’s disease or chronic mental illness may be able to perform everyday tasks and may need supervision rather than hands-on assistance. To be a “qualified” individual with a disability under title II of the ADA, the person must meet the eligibility requirements for receipt of services from a public entity or for participation in a public program, activity, or service—such as the income and asset limitations established for eligibility in the Medicaid program. The breadth of the disabled population to whom Olmstead may eventually apply is uncertain. Much is unknown about the widely varying population of people with disabilities, the settings in which they are receiving services, and the extent to which their conditions would put them at risk of institutionalization. Demographic data show, however, that the response to Olmstead will take place in the context of significant increases in the number of people with disabilities. As the baby boom generation grows older, they are more likely to be affected by disabling conditions. Of the many public programs that support people with disabilities, the federal-state Medicaid program plays the most dominant role for supporting long-term care needs. Services through this program have been provided primarily in institutional long-term care settings, but a growing proportion of Medicaid long-term care expenses in the past decade has been for home and community-based services. 
At present, however, there are wide differences between states in the degree to which home and community-based services are provided. States also face varying challenges in supporting community living beyond what can be provided through long-term care programs, such as ensuring adequate supports for housing and transportation, and maintaining adequate programs to ensure quality care is provided in community settings.

The Olmstead decision has been widely interpreted to apply to people with varying types of disabilities who are either in institutions or at risk of institutionalization. One reason for the uncertainty about how many may be affected is that, as the decision recognized, the appropriateness of a person’s being placed in an institution or receiving home or community-based services would depend in part on the person’s wishes and the recommendations of his or her treatment professionals. Another reason is that information on the number of people with disabilities who are at risk of institutionalization is difficult to establish.

Number of institutionalized individuals. On the basis of information from different sources, we estimate that the total number of people with disabilities who are being served in different types of institutional settings is at least 1.8 million. This figure includes about 1.6 million people in nursing facilities, 106,000 in institutions for the mentally retarded or developmentally disabled, and 57,000 in state and county facilities for the mentally ill.

Number at risk of institutionalization. The number of people who are living in the community but at risk of institutionalization is difficult to establish. In an earlier study we estimated that, nationwide, 2.3 million adults of all ages lived in home or community-based settings and required considerable help from another person to perform two or more self-care activities. More difficult to estimate is the number of disabled children at risk of institutionalization.
The demographics associated with the increasing number of aging baby boomers will likely drive the increased demand for services in a wide range of long-term care settings. Although a chronic physical or mental disability may occur at any age, the older an individual becomes, the more likely a person will develop disabling conditions. For example, less than 4 percent of children under 15 years old have a severe disability, compared with 58 percent of those 80 years and older. The baby boom generation— those born between 1946 and 1964—will contribute significantly to the growth in the number of elderly individuals with disabilities who need long-term care and to the amount of resources required to pay for it. The oldest baby boomers, now in their fifties, will turn 65 in 2011. In 2000, about 13 percent of our nation’s population was composed of individuals aged 65 or older. By 2020, that percentage will increase by nearly one-third to about 17 percent—one in six Americans—and will represent nearly 20 million more seniors than there are today. By 2040, the number of seniors aged 85 and older will more than triple to 14 million (see fig. 1). However, because older people are healthier now than in the past, no consensus exists on the extent to which the growing elderly population will increase the number of disabled elderly people needing long-term care. Projections of the number of disabled elderly individuals who will need care range between 2 and 4 times the current number. The changing demographics will also likely affect the demand for paid long-term care services. An estimated 60 percent of the disabled elderly living in communities now rely exclusively on their families and other unpaid sources for their care. Because of factors such as the greater geographic dispersion of families and the large and growing percentage of women who work outside the home, many baby boomers may have no option but to rely on paid long-term care providers. 
A smaller proportion of this generation in the future may have a spouse or adult children to provide unpaid care and therefore may have to rely on more formal or public services.

Medicaid is by far the largest public program supporting long-term care. States administer this joint federal-state health financing program for low-income people within broad federal requirements and with oversight from the Centers for Medicare and Medicaid Services (CMS), the agency that administers the program at the federal level. In 2000, Medicaid long-term care expenditures represented over one-third of the total $194 billion spent by Medicaid for all medical services. Although at least 70 different federal programs provide assistance to individuals with disabilities at substantial cost, Medicaid is the most significant source of federal funds for providing long-term care. Earlier this year, we reported that Medicaid paid nearly 44 percent of the $134 billion spent nationwide for long-term care in 1999, including postacute and chronic care in nursing homes and home and community-based care. Individuals needing care, and their families, paid for almost 25 percent of these expenditures out-of-pocket. Medicare and other public programs covered almost 17 percent, and private insurance and other private sources (including long-term care insurance as well as services paid by traditional health insurance) accounted for the remaining 15 percent. (See fig. 2.) These amounts, however, do not include the many hidden costs of long-term care. For example, they do not include wages lost when an unpaid family caregiver takes time off from work to provide assistance. Historically, Medicaid long-term care expenditures have financed services delivered in nursing homes or other institutions, but the proportion of spending directed to home and community-based care has increased steadily over the past decade, as shown in figure 3.
Federal and state Medicaid spending on home and community-based services was about $18 billion (27 percent) of the $68 billion spent on long-term care in fiscal year 2000. Much of the Medicaid coverage of home and community-based services is at each state’s discretion. One type of coverage, however, is not optional: states are required to cover home health services for medically necessary care (see table 1). A second type of services, called personal care, is optional. The primary means by which states provide home and community-based services is through another optional approach: home and community-based services (HCBS) waivers, which are set forth at section 1915(c) of the Social Security Act. States apply to the federal government for these waivers, which, if approved, allow states to limit the availability of services geographically, target specific populations or conditions, control the number of individuals served, and cap overall expenditures. To receive such a waiver, states must demonstrate that the cost of the services to be provided under a waiver (plus other state Medicaid services) is no more than what would have been spent on institutional care (plus any other Medicaid services provided to institutionalized individuals). States often operate several different waivers serving different population groups, and they have often limited the size and scope of the waivers to help target their Medicaid resources and control spending. While expenditures for these services have generally grown over time, states’ use of HCBS waivers to provide services in community settings has grown at the highest rate. Expenditures for services provided under HCBS waivers grew at an average annual rate of 28 percent between 1988 and 2000—twice as much as Medicaid’s expenditures for home health services and three times as much as for personal care services. Expenditures under the HCBS waivers vary widely with the type of disability covered. 
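The cost-neutrality test that states must satisfy to obtain a section 1915(c) HCBS waiver amounts to a simple per-capita comparison. The sketch below is illustrative only and is not drawn from any CMS tool; the function name and all dollar figures are hypothetical, chosen solely to show the shape of the comparison.

```python
# Illustrative sketch of the section 1915(c) cost-neutrality test described
# above: average per-capita Medicaid spending with the waiver (waiver services
# plus other Medicaid services) must not exceed average per-capita spending
# under the institutional alternative (institutional care plus other Medicaid
# services). All figures are hypothetical.

def waiver_is_cost_neutral(waiver_services, other_medicaid_with_waiver,
                           institutional_care, other_medicaid_institutional):
    """Return True if per-capita spending under the waiver does not exceed
    per-capita spending under the institutional alternative."""
    waiver_total = waiver_services + other_medicaid_with_waiver
    institutional_total = institutional_care + other_medicaid_institutional
    return waiver_total <= institutional_total

# Hypothetical annual per-capita figures, in dollars:
print(waiver_is_cost_neutral(15_331, 4_000, 40_000, 3_000))  # True
```

In practice the demonstration covers aggregate waiver-year estimates rather than a single recipient, but the comparison has this form.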
The average cost across all programs in 1999 was about $15,331 per recipient. For persons with developmental disabilities, the average cost was about twice that amount ($30,421); for programs serving the aged and aged disabled, the average cost was much lower ($5,849). This variation results from several factors, but primarily from differences in the type and amount of program services supplied versus services from other sources such as family members. The average costs for providing waiver and other home and community-based services are much lower than average costs for institutionalizing a person. However, the costs of these community-based services do not include significant other costs that must be covered when a person lives in his or her home or in a community-based setting, such as costs for housing, meals, and transportation, as well as the additional costs and burden for family and other informal caregivers.

The proportion of Medicaid long-term care spending devoted to home and community-based services varies widely among states. Some states have taken advantage of Medicaid HCBS waivers to develop extensive home and community-based services, while other states have traditionally relied more heavily on institutional and nursing facility services. This variation is reflected in differences in the extent of states’ total Medicaid long-term care spending devoted to home and community-based care (defined to include the waivers, home health, and personal care services). For example, in 1999, 9 states devoted 40 percent or more of Medicaid long-term care expenditures to community-based care, whereas 11 states and the District of Columbia devoted less than 20 percent. (See fig. 4.) States also vary in the amount of home and community-based services they offer specifically through HCBS waivers.
According to data compiled by researchers, an estimated 688,000 disabled persons were being served under 212 HCBS waivers in 49 states (excluding Arizona) and the District of Columbia in 1999. (See app. I.) These waivers covered several different types of disabled populations and settings. All but two states operated at least one waiver covering services for people with mental retardation or developmental disabilities, and all but the District of Columbia operated at least one waiver for the aged disabled. Overall, states had 73 waivers covering services for people with mental retardation or developmental disabilities serving nearly 260,000 participants, 65 waivers covering services for almost 382,000 aged or aged disabled participants, and 27 waivers serving about 25,000 physically disabled individuals. Nationwide, the number of people served by waivers varies substantially across states. Oregon, for example, served more than 8 times as many people per capita in its large waiver for the aged and disabled, compared with several other states that had waivers for the same target population. In most states, the demand for HCBS waiver services has exceeded what is available and has resulted in waiting lists. Waiting list data, however, are incomplete and inconsistent. States are not required to keep waiting lists, and not all do so. Among states that keep waiting lists, criteria for inclusion on the lists vary. In one 1998-99 telephone survey of 50 states and the District of Columbia, Medicaid officials in 42 states reported waiting lists for one or more of their waivers, although they often lacked exact numbers. Officials in only eight states reported that they considered their waiver capacity and funding to be adequate and that they did not have waiting lists for persons eligible for services under those waivers. 
The states face a number of challenges in providing services to support people with disabilities living in the community, and these challenges extend beyond what can be provided by the Medicaid program alone. The additional costs to the states of supporting people with disabilities in the community are a concern. For example, Medicaid does not pay for housing or meals for individuals who are receiving long-term care services in their own homes or in a community setting, such as an adult foster home. Consequently, a number of state agencies may need to coordinate the delivery and funding of such costly supports as housing and transportation. States may also find their efforts to move people out of institutions complicated by the scarcity of caregivers—both paid personal attendants and unpaid family members and friends—who are needed to provide the home and community services. Finally, there are concerns about the difficulty of establishing adequate programs to ensure that quality care is being provided in the different types of noninstitutional service settings throughout the community. We have reported on quality-of-care and consumer protection issues in assisted living facilities, an increasingly popular long-term care option in the community. States have the primary responsibility for the oversight of care furnished in assisted living facilities, and they generally approach this responsibility through state licensing requirements and routine compliance inspections. However, the licensing standards, as well as the frequency and content of the periodic inspections, are not uniform across the states. In our sample of more than 750 assisted living facilities in four states, the states cited more than 25 percent of the facilities for five or more quality-of-care or consumer protection problems during 1996 and 1997. 
Frequently identified problems included facilities providing inadequate or insufficient care to residents; having insufficient, unqualified, and untrained staff; and failing to provide residents appropriate medications or storing medications improperly. State officials attributed most of the common problems identified in assisted living facilities to insufficient staffing and inadequate training, exacerbated by high staff turnover and low pay for caregiver staff. The Supreme Court’s Olmstead decision left open questions about the extent to which states could be required to restructure their current long-term care programs for people with disabilities to ensure that care is provided in the most integrated setting appropriate for each person’s circumstances. Interpretation of the Olmstead decision is an ongoing process. While the Supreme Court held in Olmstead that institutionalization of people with disabilities is discrimination under the ADA under certain circumstances, it also recognized that there are limits to what states can do, given available resources and the obligation to provide a range of services for people with disabilities. Most states are responding to the decision by developing plans for how they will serve people with disabilities in less restrictive settings. These plans are works in progress, however, and it is too soon to tell how and when they may be implemented. State responses will also be shaped over time by the resolution of the many pending lawsuits and formal complaints that have been filed against them and others. The Supreme Court held that states may be required to serve people with disabilities in community settings when such placements can be reasonably accommodated. However, it recognized that states’ obligations to provide services are not boundless. 
Specifically, the Court emphasized that while the ADA’s implementing regulations require reasonable modifications by the state to avoid discrimination against the disabled, those regulations also allow a state to resist requested modifications if they would entail a “fundamental alteration” of the state’s existing services and programs. The Court provided some guidance for determining whether accommodations sought by plaintiffs constitute a reasonable modification or a fundamental alteration of an existing program, which would not be required under the ADA. The Court directed that such a determination should include consideration of the resources of the state, the cost of providing community-based care to the plaintiffs, the range of services the state provides to others with disabilities, and the state’s obligation to provide those services equitably. The Court suggested that if a state were to “demonstrate that it had a comprehensive, effectively working plan for placing qualified persons with mental disabilities in less restrictive settings, and a waiting list that moved at a reasonable pace not controlled by the state’s endeavors to keep its institutions fully populated, the reasonable modification standard would be met.” The single most concrete state response to the Olmstead decision has been to develop plans that demonstrate how the states propose to serve people with disabilities in less restrictive settings, as suggested by the Supreme Court. HCFA provided early guidance and technical assistance to states in these efforts. But most of these state plans are still works in progress, and it is too soon to tell how and when they will be implemented. To help states with their Olmstead planning activities, between January and July 2000, HCFA issued general guidance to the states on developing “comprehensive, effectively working plans” to ensure that individuals with disabilities receive services in the most integrated setting appropriate. 
To encourage states to design and implement improvements in their community-based long-term care services, HCFA also announced a set of competitive grant initiatives, funded at nearly $70 million, to be awarded by October 1, 2001. (See app. II for details about these competitive grants.) In addition, HCFA made $50,000 starter grants available to each of the states and territories, with no financial match required, to assist their initial planning efforts. As of July 2001, 49 states (every state except Arizona) had applied for and received these starter grants, which must be used to obtain consumer input and improve services. As of September 2001, an estimated 40 states and the District of Columbia had task forces or commissions that were addressing Olmstead issues. According to the National Conference of State Legislatures (NCSL), which is tracking the states’ efforts, the goal for most of these states was to complete initial plans by the end of this year or early 2002. Ten states were not developing Olmstead plans, for a variety of reasons. NCSL reported that some of the states that were not planning already have relatively extensive home and community care programs and may believe that such planning is not necessary. As the result of a 1999 lawsuit settlement, for example, Oregon had developed a 6-year plan to eliminate the waiting list of more than 5,000 people for its waiver program serving people with developmental disabilities. Moreover, Oregon was the only state to dedicate more than half of its 1999 Medicaid long-term care spending to home and community-based services. Vermont also is not working on an Olmstead plan because it has implemented a range of activities over the years that are related to downsizing institutions and moving toward home and community-based care. On the basis of a preliminary review of about 14 draft Olmstead plans, NCSL reported that the contents are quite variable. 
A few plans are relatively extensive and well documented, including determinations of need, inventories of available services, funding needs, and roadmaps for what needs to be done. According to NCSL, other plans consist primarily of lists of recommendations to the governor or state legislature, without specifying how the recommendations are to be implemented, by which agencies, or in what time frame. It is too early to tell how or when the states will implement the steps they propose in their Olmstead plans. On the basis of the information collected by NCSL, it appears that few states have passed legislation relating to Olmstead—for example, appropriating funding to expand community residential options or authorizing program changes. As of July 2001, NCSL was able to identify 15 Olmstead-related bills that were considered in eight states during 2001, of which 4 were enacted. One bill simply provided for development of the state plan, while others appropriated funding, required a new home and community-based attendant services program, or proposed long-term care reforms. Increased state legislative activity is expected in 2002, as more Olmstead plans are completed. State responses to Olmstead also will be influenced by the resolution of the numerous lawsuits and formal complaints that have been filed and are still pending. Olmstead-related lawsuits, now being considered in almost half the states, often seek specific Medicaid services to meet the needs of people with disabilities. Lawsuits on behalf of people with disabilities seeking Medicaid and other services in community-based settings often are initiated by advocacy organizations. According to the National Association of Protection and Advocacy Systems (NAPAS), Protection and Advocacy Organizations report that about 30 relevant cases concerning access to publicly funded health services whose resolution may relate to Olmstead are still active. 
Plaintiffs in the cases include residents of state psychiatric facilities, developmental disabilities centers, and nursing homes, as well as people living in the community who are at risk of institutionalization. Their complaints raise such issues as prompt access to community-based services, the limitations of Medicaid waiver programs, and the need for assessments to determine the most integrated setting appropriate to each individual. It is difficult to predict the overall outcome of these active cases since each involves highly individual circumstances, including the nature of the plaintiffs’ concerns and each state’s unique Medicaid program structure and funding. According to a NAPAS representative, two recent cases in Hawaii and Louisiana illustrate some of the issues raised by Olmstead-related lawsuits and how they were resolved through voluntary settlements. The Hawaii case shows how one federal court addressed the state’s obligation to move people off its waiting lists at a reasonable pace, applying the Olmstead decision to people with disabilities who were not institutionalized. The plaintiffs claimed that Hawaii was operating its waiver program for people with mental retardation and developmental disabilities in a manner that violated the ADA and Medicaid law. The plaintiffs were living at home while on a waiting list for community-based waiver services—the majority of the plaintiffs had been on the waiting list for over 90 days and some for over 2 years. They could have obtained services if they had been willing to live in institutions, but they wished to stay in the community. The court found that Olmstead applied to the case even though the plaintiffs were not institutionalized. Hawaii argued that the plaintiffs were on the waiting list because of a lack of funds and that providing services for more people would cause the state to exceed funding limits set up in its waiver program. 
The court rejected the state’s argument and held that funding shortages did not meet the definition of a “fundamental alteration.” The court also found that Hawaii did not provide evidence of a comprehensive plan to keep the waiting list moving at a reasonable pace, as suggested by the Olmstead opinion. In July 2000, the parties settled the case by agreeing that Hawaii would fund 700 additional community placements over 3 years and move people from the waiting list at a reasonable pace. The Louisiana case was filed in 2000 on behalf of people living in nursing homes, or at imminent risk of nursing home admission, who were waiting for services offered through three Medicaid HCBS waivers that provided personal attendant care, adult day health care, and other services to elderly and disabled adults. The plaintiffs claimed that the state was failing to provide services in the most integrated setting as required by the ADA. They also claimed that the state was not following Medicaid statutory requirements to provide services with reasonable promptness and to allow choice among available services. As part of a settlement of this case, Louisiana agreed to make all reasonable efforts to expand its capacity to provide home and community-based services and to reduce waiting lists in accordance with specific goals. For example, the state will increase the number of waiver slots by a minimum of 650 by 2002, with additional increases planned through 2005. The state also agreed to apply to CMS to add a personal care service option to its Medicaid plan, thereby making personal care services available to all eligible Medicaid recipients who are in nursing homes, at imminent risk of nursing home admission, or recently discharged. In addition, the state agreed to determine the status of persons currently on waiting lists for waiver services and to take steps to inform Medicaid beneficiaries and health professionals about the full range of available service options. 
Olmstead issues are also being addressed through a formal complaint resolution process operated by the Office for Civil Rights (OCR) within HHS. As part of its responsibility for enforcing the ADA, OCR receives and helps resolve formal complaints related to the ADA. When OCR receives Olmstead-related complaints from individuals and parties, it works through its regional offices to resolve them by involving the complainants and the affected state agencies. If a complaint cannot be resolved at the state and regional OCR level, OCR’s central office may get involved. Finally, if these steps are not successful, the complaint is referred to the Department of Justice. As of August 2001, no Olmstead-related cases had been referred to the Department of Justice. From 1999 through August 2001, OCR received 423 ADA-related complaints. These complaints generally involved a concern that people did not receive services in the most integrated setting. OCR reported that, as of August 2001, 154 complaints had been settled and 269 remained pending. These complaints had been filed in 36 states and the District of Columbia, with more than half filed in seven states. A recent analysis of 334 Olmstead-related complaints indicated that 228 complaints (68 percent) were related to people residing in institutions. The ongoing resolution of Olmstead-related lawsuits and complaints will help establish precedent for the types of Medicaid program modifications states may have to make to their long-term care programs. Meanwhile, it is difficult to generalize about the potential impact of the many ongoing cases because each case will be decided on its own facts. The extent of what federal courts will require states to do to comply with the ADA as interpreted in Olmstead will become more clear over time as additional cases are resolved. 
In the wake of the Olmstead decision, states may face growing pressures to expand services for the elderly and other people with disabilities in a variety of settings that allow for a range of choices. Despite the numerous activities under way at the state and federal levels to respond to this decision, the full implications of the Olmstead decision are far from settled. Ongoing complaints and legal challenges continue to prompt states to make incremental changes at the same time that they continue to frame states’ legal obligations for providing services to the disabled. States face challenges in determining who and how many people meet the criteria of needing and seeking services and also in balancing the resource and service needs of eligible individuals with the availability of state funds. This balancing of needs and resources will be an even greater issue in the coming years as the baby boom generation ages and adds to the demand for long-term care services. While Medicaid has a prominent role in supporting the long-term care services provided today, other financing sources also play an important role in our current system. These include private resources—including out-of-pocket spending, private insurance, and family support—as well as many other public programs. Finding ways to develop and finance additional service capacity that meets needs, allows choice, and ensures quality care will be a challenge for this generation, their families, and federal, state, and local governments. Mr. Chairman, this concludes my prepared statement. I will be happy to answer any questions you or the other Committee members may have. For more information regarding this testimony, please contact me at (202) 512-7114 or Katherine Iritani at (206) 287-4820. Bruce D. Greenstein, Behn Miller, Suzanne C. Rubins, Ellen M. Smith, and Stan Stenersen also made key contributions to this statement. 
In January 2001, HCFA announced a set of grant initiatives called “Systems Change for Community Living.” These grants are intended to encourage states to design and implement improvements in community long-term support services. Total funding for these grants is $70 million for fiscal year 2001. States will have 36 months to expend the funds. States and other organizations, in partnership with their disabled and elderly communities, were invited to submit proposals for one or more of these four distinct grant programs (see table 2). Agency officials reported receiving 161 separate applications for these grants for more than $240 million. The agency expects all grant awards to be made by October 1, 2001.
In the Olmstead case, the Supreme Court decided that states were violating title II of the Americans with Disabilities Act of 1990 (ADA) if they provided care to disabled people in institutional settings when they could be appropriately served in a home or community-based setting. Considerable attention has focused on the decision's implications for Medicaid, the dominant public program supporting long-term care institutional, home, and community-based services. Although Medicaid spending for home and community-based services is growing, these are largely optional benefits that states may or may not choose to offer, and states vary widely in the degree to which they cover them. The implications of the Olmstead decision--in terms of the scope and the nature of states' obligation to provide home and community-based long-term care services--are still unfolding. Although the Supreme Court ruled that providing care in institutional settings may violate the ADA, it also recognized that there are limits to what states can do, given the available resources and the obligation to provide a range of services for disabled people. The decision left many open questions for states and lower courts to resolve. State programs also may be influenced over time as dozens of lawsuits and hundreds of formal complaints seeking access to appropriate services are resolved.
Current domestic uses of UAS are limited and include law enforcement, monitoring or fighting forest fires, border security, weather research, and scientific data collection. UAS have a wide range of potential uses, including commercial uses such as pipeline, utility, and farm fence inspections; vehicular traffic monitoring; real estate and construction site photography; relaying telecommunication signals; and crop dusting. FAA’s long-range goal is to permit, to the greatest extent possible, routine UAS operations in the national airspace system while ensuring safety. Using UAS for commercial purposes is not currently allowed in the national airspace. As the list of potential uses for UAS grows, so do the concerns about how they will affect existing military and non-military aviation as well as concerns about how they might be used. Domestically, state and local law enforcement entities represent the greatest potential use of small UAS in the near term because small UAS can offer a simple and cost-effective solution for airborne law enforcement activities for agencies that cannot afford a helicopter or other larger aircraft. For example, federal officials and one airborne law enforcement official said that a small UAS costing between $30,000 and $50,000 is more likely to be purchased by state and local law enforcement entities because the cost is nearly equivalent to that of a patrol car. According to recent FAA data, 12 state and local law enforcement entities have a Certificate of Waiver or Authorization (COA) while an official at the Department of Justice said that approximately 100 law enforcement entities have expressed interest in using a UAS for some of their missions. According to law enforcement officials with whom we spoke, small UAS are ideal for certain types of law enforcement activities. Officials anticipate that small UAS could provide support for tactical teams, post-event crime scene analysis and critical infrastructure photography. 
Officials said that they do not anticipate using small UAS for routine patrols or missions that would require flights over extended distances or time periods. FAA has been working with the Department of Justice’s National Institute of Justice to develop a COA process through a memorandum of understanding to better meet the operational requirements of law enforcement entities. While the memorandum of understanding establishing this COA process has not been finalized, there are two law enforcement entities that are using small UAS on a consistent basis for their missions and operations. The proposed process would allow law enforcement entities to receive a COA for training and performance evaluation. When the entity has shown proficiency in operating its UAS, it would then receive an operational COA allowing it to operate small UAS for a range of missions. In May 2012, FAA stated that it met its first requirement to expedite the COA process for public safety entities. FAA’s reauthorization also required the agency to enter into agreements with appropriate government agencies to simplify the COA process and allow a government public safety agency to operate unmanned aircraft weighing 4.4 pounds or less if flown within the line of sight of the operator, less than 400 feet above the ground, and during daylight conditions, among other stipulations. In 2008, we reported that UAS could not meet the aviation safety requirements developed for manned aircraft and posed several obstacles to operating safely and routinely in the national airspace system. Sense and avoid technologies. To date, no suitable technology has been identified that would provide UAS with the capability to meet the detect, sense, and avoid requirements of the national airspace system. Our ongoing work indicates that research has been carried out to mitigate this, but the inability of UAS to sense and avoid other aircraft or objects remains an obstacle. 
With no pilot to scan the sky, UAS do not have an on-board capability to directly “see” other aircraft. Consequently, the UAS must possess the capability to sense and avoid an object using on-board equipment, or with the assistance of a human on the ground or in a chase aircraft, or by other means, such as radar. Many UAS, particularly smaller models, will likely operate at altitudes below 18,000 feet, sharing airspace with other vehicles or objects. Sensing and avoiding other vehicles or objects represents a particular challenge for UAS, because other vehicles or objects at this altitude often do not transmit an electronic signal to identify themselves and, even if they did, many small UAS do not have equipment to detect such signals and may be too small to carry such equipment. Command and control communications. Similar to what we previously reported, ensuring uninterrupted command and control for UAS remains a key obstacle for safe and routine integration into the national airspace. Without such control, the UAS could collide with another aircraft or crash, causing injury or property damage. The lack of dedicated radiofrequency spectrum for UAS operations heightens the possibility that an operator could lose command and control of the UAS. Unlike manned aircraft that use dedicated radio frequencies, non-military UAS currently use undedicated frequencies and remain vulnerable to unintentional or intentional interference. To address the potential interruption of command and control, UAS generally have pre-programmed maneuvers to follow if the command and control link becomes interrupted (called a “lost-link scenario”). However, these procedures are not standardized across all types of UAS and, therefore, remain unpredictable to air traffic controllers who have responsibility for ensuring safe separation of aircraft in their airspace. Standards. 
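The pre-programmed lost-link behavior described above can be sketched as a simple fallback policy. The mode names, timeout thresholds, and function below are illustrative assumptions only; they are not drawn from any actual autopilot or standard, and the absence of such a standard is precisely the gap the report identifies:

```python
# Hypothetical sketch of a "lost-link scenario" fallback policy.
# All mode names and thresholds are invented for illustration.

LOST_LINK_TIMEOUT_S = 5.0  # assumed: seconds of link silence before declaring lost link
LOITER_WINDOW_S = 60.0     # assumed: how long to hold position awaiting reconnection

def select_maneuver(seconds_since_last_command: float) -> str:
    """Pick a flight mode based on how long the command link has been silent."""
    if seconds_since_last_command < LOST_LINK_TIMEOUT_S:
        return "NORMAL"            # link healthy: follow operator commands
    if seconds_since_last_command < LOITER_WINDOW_S:
        return "LOITER"            # brief outage: hold position, wait for link
    return "RETURN_TO_LAUNCH"      # prolonged outage: fly pre-programmed recovery route

# Example: a 10-second outage triggers a loiter rather than an immediate return.
assert select_maneuver(10.0) == "LOITER"
```

Because each manufacturer chooses its own equivalent of these thresholds and modes, an air traffic controller cannot predict which branch a given aircraft will take, which is the unpredictability described above.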
A rigorous certification process with established performance thresholds is needed to ensure that UAS and pilots meet safety, reliability, and performance standards. Minimum aviation system standards are needed in three areas: performance; command and control communications; and sense and avoid. In 2004, RTCA, a standards-making body sponsored by FAA, established a federal advisory committee called the Special Committee 203 (or SC 203), to establish minimum performance standards for FAA to use in developing UAS regulations. Individuals from academia and the private sector serve on the committee, along with FAA, NASA, and DOD officials. ASTM International Committee F38 on UAS, an international voluntary consensus standards-making body, is working with FAA to develop standards to support the integration of small UAS into the national airspace. Regulations. FAA regulations, codified in Title 14 of the Code of Federal Regulations (14 CFR), govern the routine operation of most aircraft in the national airspace system but do not contain provisions to address issues relating to unmanned aircraft. As we highlighted in our previous report, existing regulations may need to be modified to address the unique characteristics of UAS. Today, UAS continue to operate as exceptions to the regulatory framework rather than being governed by it. This has limited the number of UAS operations in the national airspace, and that limitation has, in turn, contributed to the lack of operational data on UAS in domestic operations previously discussed. One industry forecast noted that growth in the non-military UAS market is unlikely until regulations allow for the routine operation of UAS. Without specific and permanent regulations for safe operation of UAS, federal stakeholders, including DOD, continue to face challenges. The lack of final regulations could hinder the acceleration of safe and routine integration of UAS into the national airspace. 
Given the remaining obstacles to UAS integration, we stated in 2008 that Congress should consider creating an overarching body within FAA to coordinate federal, academic, and private-sector efforts in meeting the safety challenges of allowing routine access to the national airspace system. While it has not created this overarching body, FAA’s Joint Planning and Development Office has taken on a similar role. In addition, Congress set forth requirements for FAA in its February 2012 reauthorization to facilitate UAS integration. Additionally, we made two recommendations to FAA related to its planning and data analysis efforts to facilitate the process of allowing UAS routine access to the national airspace, which FAA has implemented. DHS is one of several partner agencies of FAA’s Joint Planning and Development Office (JPDO) working to safely integrate UAS into the national airspace. TSA has the authority to regulate the security of all transportation modes, including non-military UAS, and according to TSA officials, its aviation security efforts include monitoring reports on potential security threats regarding the use of UAS. While UAS operations in the national airspace are limited and take place under closely controlled conditions, this could change if UAS have routine access to the national airspace system. Further, DHS owns and uses UAS. Security is a significant issue that could be exacerbated with an increase in the number of UAS, and could impede UAS use even after all other obstacles have been addressed. In 2004, TSA issued an advisory in which it stated that there was no credible evidence to suggest that terrorist organizations plan to use remote controlled aircraft or UAS in the United States. However, the TSA advisory also provided that the federal government remains concerned that UAS could be modified and used to attack key assets and infrastructure in the United States. 
TSA advised individuals to report any suspicious activities to local law enforcement and the TSA General Aviation Hotline. Security requirements have yet to be developed for UAS ground control stations—the UAS equivalent of the cockpit. Legislation introduced in the 112th Congress would prohibit the use of UAS as weapons while operating in the national airspace. In our 2008 report, we recommended that the Secretary of Homeland Security direct the Administrator of TSA to examine the security implications of future, non-military UAS operations in the national airspace and take any actions deemed appropriate. TSA agreed that consideration and examination of new aviation technologies and operations is critical to ensuring the continued security of the national airspace. According to TSA officials, TSA continues to work with the FAA and other federal agencies concerning airspace security by implementing security procedures in an attempt to protect the National Airspace System. Examples of this collaboration include coordinated efforts to allow access to temporary flight restricted airspace, such as the restrictions put in place for Presidential travel and DHS Security Events. However, to date, neither DHS nor TSA has taken any actions to implement our 2008 recommendation. According to TSA officials, TSA believes its current practices are sufficient and no additional actions have been needed since we issued our recommendation. DHS is also an owner and user of UAS. Since 2005, CBP has flown UAS for border security missions. FAA granted DHS authority to operate UAS to support its national security mission along the United States’ northern and southern land borders, among other areas. Recently, DHS officials told us that DHS has also flown UAS over the Caribbean to search for narcotics-carrying submarines and speedboats. According to DHS officials, CBP owns ten UAS that it operates in conjunction with other agencies for various missions. 
As of May 2012, CBP has flown missions to support six federal and state agencies along with several DHS agencies. These missions have included providing the National Oceanic and Atmospheric Administration with videos of damaged dams and bridges where flooding occurred or was threatened, and providing surveillance for DHS’s Immigration and Customs Enforcement over a suspected smuggler’s tunnel. DHS, DOD, and NASA are working with FAA to identify and evaluate options to increase UAS access in the national airspace. DHS officials reported that, if funding were available, they plan to expand the fleet to a total of 24 UAS that would be operational by fiscal year 2016, including 11 on the southwest border. The DHS Inspector General reviewed CBP’s actions to establish its UAS program, the purpose of which is to provide reconnaissance, surveillance, targeting, and acquisition capabilities across all CBP areas of responsibility. The Inspector General assessed whether CBP has established an adequate operation plan to define, prioritize, and execute its unmanned aircraft mission. The Inspector General’s May 2012 report found that CBP had not achieved its scheduled or desired level of flight hours for its UAS. It estimated that CBP used its UAS less than 40 percent of the time it would have expected. Our ongoing work has identified several UAS issues that, although not new, are emerging as areas of further consideration in light of the efforts towards safe and routine access to the national airspace. These include concerns about 1) privacy as it relates to the collection and use of surveillance data, 2) the use of model aircraft, which are aircraft flown for hobby or recreation, and 3) the jamming and spoofing of the Global Positioning System (GPS). Privacy concerns over collection and use of surveillance data. 
Following the enactment of the UAS provisions of the 2012 FAA reauthorization act, members of Congress, a civil liberties organization, and others have expressed concern that the increased use of UAS for surveillance and other purposes in the national airspace has potential privacy implications. Concerns include the potential for increased amounts of government surveillance using technologies placed on UAS as well as the collection and use of such data. Surveillance by federal agencies using UAS must take into account associated constitutional Fourth Amendment protections against unreasonable searches and seizures. In addition, at the individual agency level, there are multiple federal laws designed to provide protections for personal information used by federal agencies. While the 2012 FAA reauthorization act contains provisions designed to accelerate the safe integration of UAS into the national airspace, proposed legislation in the 112th Congress seeks to limit or serve as a check on uses of UAS by, for example, limiting the ability of the federal government to use UAS to gather information pertaining to criminal conduct without a warrant. Currently, no federal agency has specific statutory responsibility to regulate privacy matters relating to UAS. UAS stakeholders disagreed as to whether the regulation of UAS privacy-related issues should be centralized within one federal agency or, if centralized, which agency would be best positioned to handle such a responsibility. Some stakeholders have suggested that FAA or another federal agency should develop regulations for the types of allowable uses of UAS to specifically protect the privacy of individuals as well as rules for the conditions and types of data that small UAS can collect.
Furthermore, stakeholders with whom we spoke said that developing guidelines for technology use on UAS ahead of widespread adoption by law enforcement entities may preclude abuses of the technology and a negative public perception of UAS. Representatives from one civil liberties organization told us that since FAA has responsibility to regulate the national airspace, it could be positioned to handle responsibility for incorporating rules that govern UAS use and data collection. Some stakeholders have suggested that FAA has the opportunity and responsibility to incorporate such privacy issues into the small UAS rule that is currently underway and into future rulemaking procedures. However, FAA officials have said that regulating these sensors is outside FAA’s mission, which is primarily focused on aviation safety, and FAA has proposed language in its small UAS Notice of Proposed Rulemaking to clarify this. Model aircraft. According to an FAA official with whom we spoke and other stakeholders, another concern related to UAS is the oversight of the operation of model aircraft—aircraft flown for hobby or recreation that are capable of sustained flight in the atmosphere, among other characteristics. Owners of model aircraft do not require a COA to operate their aircraft. Furthermore, as part of its 2012 reauthorization act, FAA is prohibited from developing any rule or regulation for model aircraft under a specified set of conditions. However, the 2012 reauthorization act also specifies that nothing in the act’s model aircraft provisions shall be construed to limit FAA’s authority to take enforcement action against the operator of a model aircraft who endangers the safety of the national airspace system. The Federal Bureau of Investigation report of the arrest and criminal prosecution of a man plotting to use a large remote-controlled model aircraft filled with plastic explosives to attack the Pentagon and U.S.
Capitol in September 2011 has highlighted the potential for model aircraft to be used for non-approved or unintended purposes. The Academy of Model Aeronautics, which promotes the development of model aviation as a recognized sport and represents a membership of over 150,000, has published several documents to guide model aircraft users on safety, model aircraft size and speed, and use. For example, the Academy’s National Model Aircraft Safety Code specifies that model aircraft will not be flown in a careless or reckless manner and will not carry pyrotechnic devices that explode or burn, or any device that propels a projectile or drops any object that creates a hazard to persons or property, with some exceptions: members may fly devices that burn to produce smoke if they are securely attached to the model aircraft, and may use rocket motors if they remain attached to the model during flight; model rockets may be flown but not launched from a model aircraft. The Academy also provides guidance on “sense and avoid” to its members, such as a ceiling of 400 feet above ground for aircraft weighing 55 pounds or less. However, apart from FAA’s voluntary safety standards for model aircraft operators, FAA has no regulations relating to model aircraft. Currently, FAA does not require a license for any model aircraft operators, but according to FAA, the small UAS Notice of Proposed Rulemaking, under development and expected to be published in late 2012, may contain a provision that requires certain model aircraft to be registered. GPS jamming and spoofing. Low-cost devices that jam GPS signals are prevalent. GPS spoofing is when counterfeit GPS signals are generated for the purpose of manipulating a target receiver’s reported position and time (Todd E. Humphreys, Detection Strategy for Cryptographic GNSS Anti-Spoofing, IEEE Transactions on Aerospace and Electronic Systems (August 2011)).
According to one industry expert, GPS jamming would become a larger problem if GPS is the only method for navigating a UAS. This problem can be mitigated by having a second or redundant navigation system onboard the UAS that is not reliant on GPS. In addition, a number of federal UAS stakeholders we interviewed stated that GPS jamming is not an issue for the larger, military-type UAS, as they have an encrypted communications link on the aircraft. A stakeholder noted that GPS jamming can be mitigated for small UAS by encrypting their communications, but the costs associated with encryption may make it infeasible. Recently, researchers at the University of Texas demonstrated that the GPS signal controlling a small UAS could be spoofed using a portable software radio. The research team found that it was straightforward to mount an intermediate-level spoofing attack but difficult and expensive to mount a more sophisticated attack. The emerging issues we identified exist not only as part of efforts to safely and routinely integrate UAS into the national airspace but may also persist once integration has occurred. Thus, these issues may warrant further examination both now and in the future. Chairman McCaul, Ranking Member Keating, and Members of the Subcommittee, this concludes my prepared statement. We plan to report more fully this fall on these same issues, including the status of efforts to address obstacles to the safe and routine integration of UAS into the national airspace. I would be pleased to answer any questions at this time. For further information on this testimony, please contact Gerald L. Dillingham, Ph.D., at (202) 512-2834 or dillinghamg@gao.gov. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement.
Individuals making key contributions to this testimony include Maria Edelstein, Assistant Director; Amy Abramowitz; Erin Cohen; John de Ferrari; Colin Fallon; Rebecca Gambler; Geoffrey Hamilton; David Hooper; Daniel Hoy; Joe Kirschbaum; Brian Lepore; SaraAnn Moessbauer; Faye Morrison; Sharon Pickup; Tina Won Sherman; and Matthew Ullengren. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Unmanned aircraft systems (UAS) do not carry a human operator on board, but instead operate on pre-programmed routes or by following commands from pilot-operated ground stations. An aircraft is considered to be a small UAS if it is 55 pounds or less, while a large UAS is anything greater. Current domestic uses of UAS are limited and include law enforcement, monitoring or fighting forest fires, border security, weather research, and scientific data collection by the federal government. FAA authorizes military and non-military UAS operations on a limited basis after conducting a case-by-case safety review. Several other federal agencies also have a role or interest in UAS, including DHS. In 2008, GAO reported that UAS faced several obstacles to safe and routine access to the national airspace system. This testimony discusses 1) obstacles identified in GAO’s previous report on the safe and routine integration of UAS into the national airspace, 2) DHS’s role in the domestic use of these systems, and 3) preliminary observations on emerging issues from GAO’s ongoing work. This testimony is based on a 2008 GAO report and ongoing work, and is focused on issues related to non-military UAS. In ongoing work, GAO analyzed FAA’s efforts to integrate UAS into the national airspace, the role of other federal agencies in achieving safe and routine integration, and other emerging issues; reviewed FAA and other federal agency efforts and documents; and conducted selected interviews with officials from FAA and other federal, industry, and academic stakeholders. GAO earlier reported that UAS could not meet the aviation safety requirements developed for manned aircraft and faced several obstacles to operating safely and routinely in the national airspace system.
These include 1) the inability of UAS to detect, sense, and avoid other aircraft and airborne objects in a manner similar to “see and avoid” by a pilot in a manned aircraft; 2) vulnerabilities in the command and control of UAS operations; 3) the lack of technological and operational standards needed to guide the safe and consistent performance of UAS; and 4) the lack of final regulations to accelerate the safe integration of UAS into the national airspace. GAO stated in 2008 that Congress should consider creating an overarching body within the Federal Aviation Administration (FAA) to address obstacles to routine access. FAA’s Joint Planning and Development Office (JPDO) has taken on a similar role. FAA has implemented GAO’s two recommendations related to its planning and data analysis efforts to facilitate integration. The Department of Homeland Security (DHS) is one of several partner agencies of JPDO working to safely integrate UAS into the national airspace. Since 2005, FAA has granted DHS authority to operate UAS to support its national security mission in areas such as the U.S. northern and southern land borders. DHS’s Transportation Security Administration (TSA) has the authority to regulate security of all modes of transportation, including non-military UAS, and according to TSA officials, its aviation security efforts include monitoring reports on potential security threats regarding the use of UAS. Security considerations could be exacerbated with routine UAS access. TSA has not taken any actions to implement GAO’s 2008 recommendation that it examine the security implications of future, non-military UAS operations. GAO’s ongoing work has identified several UAS issues that, although not new, are emerging as areas of further consideration in light of greater access to the national airspace. These include concerns about privacy relating to the collection and use of surveillance data.
Currently, no federal agency has specific statutory responsibility to regulate privacy matters relating to UAS. Another emerging issue is the use of model aircraft (aircraft flown for hobby or recreation) in the national airspace. FAA is generally prohibited from developing any rule or regulation for model aircraft. The Federal Bureau of Investigation report of a plot to use a model aircraft filled with plastic explosives to attack the Pentagon and U.S. Capitol in September 2011 has highlighted the potential for model aircraft to be used for unintended purposes. An additional emerging issue is interruption of the command and control of UAS operations through the jamming and spoofing of the Global Positioning System between the UAS and ground control station. GAO plans to report more fully this fall on these issues, including the status of efforts to address obstacles to the safe and routine integration of UAS into the national airspace.
Older adults are being financially exploited by strangers who inundate them with mail, telephone, or Internet scams; unscrupulous financial services providers; and untrustworthy in-home caregivers (see table 1 for more details). For example: Mass marketing scams: Local law enforcement authorities in the four states we visited indicated that investigating and prosecuting the growing number of cases involving interstate and international mass marketing fraud, which often targets older adults, is particularly difficult for them. Interstate or international mass marketing scams include “grandparent scams,” which persuade victims to wire money to bail “grandchildren” out of jail or pay their expenses, and foreign lottery scams that require victims to pay sizeable sums before they can receive their winnings. In 2011, the Federal Bureau of Investigation’s (FBI) Internet Crime Complaint Center received over 300,000 complaints from victims of all ages about online fraud alone, with reported losses of about $485 million. Exploitation by financial services professionals: Older adults may consult with a variety of financial professionals, such as financial planners, broker-dealers, and insurance agents. However, older adults, similar to other consumers, may lack the information to make sound decisions about choosing a financial services provider and protecting their assets from exploitation. As a result, they may unknowingly put themselves at risk of financial exploitation. Older adults can be sold what they believe to be legitimate investments but are actually fraudulent products that hold little or no value, or may be fooled by financial professionals who use questionable tactics to market financial products, such as “free lunch seminars” at which financial professionals seek to sell financial products to older adults during a free meal. 
Exploitation by in-home caregivers: Local officials cited exploitation by in-home caregivers—who range from personal care aides who provide non-medical assistance to home health aides who may check an older adult’s vital signs—as a type of abuse that is difficult to prevent, in part because these older adults may rely on and trust their caregivers. For example, a caregiver may be given access to an older adult’s ATM or credit card to help with banking or grocery shopping, and later be found withdrawing money or purchasing items for themselves. We identified a number of ways the federal government was supporting or could further support state and local efforts to combat elder financial exploitation. Local law enforcement officials we met with indicated it is not clear how they should obtain the federal support they need to respond to interstate and international mass marketing fraud cases. Justice officials told us they believe that local officials know which federal employees to contact; however, state and local law enforcement officials told us it would be helpful to have more specific information. Cases that local officials do not refer to a federal agency due to a lack of correct contact information may not be investigated or prosecuted by either federal or local authorities. In our November 2012 report, we recommended that the Attorney General conduct outreach to state and local law enforcement agencies to clarify the process for contacting the federal government in these cases and the ways in which the federal government could provide support. Justice agreed with this recommendation, and in December 2012 held a meeting to begin identifying points of contact both within and outside the Department, such as FBI field offices, U.S. Attorneys’ offices, the Internet Crime Complaint Center, and FTC’s Consumer Sentinel database. Justice noted that it will develop an implementation plan and timeline to initiate outreach to the appropriate state and local agencies.
In addition to not knowing whom to contact, state and local law enforcement officials in the four states we visited told us that they are concerned that federal agencies do not take enough of the cases that are referred to them. For example, a law enforcement official from California described a case of widespread interstate check fraud, expressing frustration with federal agencies that would not provide any support when he requested it. Federal officials, on the other hand, told us that they cannot take all cases referred to them by state and local law enforcement and that they must prioritize their caseload to make the best use of their limited resources. Justice and FTC officials said they tend to focus on larger cases in which many victims were affected or a significant amount of money was lost, and Justice’s U.S. Attorneys also apply regional priorities, such as the vulnerability (including age) of the victim, when determining which cases to take. Even if federal agencies choose not to take a case a state or local agency refers to them, Justice officials told us that consistent referrals of cases by state and local authorities allow them to identify patterns or combine several complaints against the same individual into one case. Federal agencies have made some efforts to provide safeguards to prevent exploitation by financial services professionals, which was cited as a challenge by public officials in all four states we visited. When it comes to preventing the sale to older adults of unsuitable or fraudulent investments, SEC and CFPB have each taken steps to help older adults avoid being exploited. SEC and CFPB have conducted research related to investment fraud that targets older adults, and in August 2012, SEC released a study on financial literacy among investors and stated the agency’s desire to develop a strategy for increasing the financial literacy of certain groups, including older adults. 
Further, there is a link on SEC’s website to Financial Industry Regulatory Authority (FINRA) information consumers can use to check a financial services provider’s qualifications and to understand the many designations used by securities professionals. CFPB also issued a report in 2013 addressing how information about financial advisors and their credentials should be provided to older adults. To prevent exploitation by in-home caregivers—also identified as a challenge by officials in the four states we visited—the Patient Protection and Affordable Care Act of 2010 required the Centers for Medicare and Medicaid Services to implement the National Background Check Program, which encourages but does not require states to adopt safeguards to protect clients of in-home caregivers. This program provides grants to states to conduct background checks for employees of long-term care facilities and providers, such as home health agencies and personal care service providers. As of November 2012, 19 states were participating. According to the National Conference of State Legislatures, many states require agencies to conduct background checks before employing in-home caregivers who are paid by Medicaid or with other state funds. These laws, however, vary greatly in their breadth and scope and in the amount of flexibility afforded the agencies when they use the checks to make hiring decisions. Some localities have gone further: Napa County, California, for example, has initiated an innovative screening program for paid in-home caregivers. Before in-home caregivers can work in that county, they must submit to a background check and obtain a permit annually. Other federal efforts are broader in scope rather than focusing on a particular type of elder financial exploitation, such as those covering public awareness, banks, collaboration among agencies, and data collection.
State and local officials told us that older adults need more information about what constitutes elder financial exploitation in order to know how to avoid it. At the state level, the Pennsylvania Attorney General’s Office has published a guide on how seniors can avoid scams and fraud, and in Cook County, Illinois, the Senior Law Enforcement Academy within the Sheriff’s Department instructs older adults in how to prevent elder financial exploitation. At the federal level, each of the seven federal agencies we reviewed independently produces educational materials that could help prevent elder financial exploitation. However, these seven agencies do not conduct their activities as part of a broader coordinated approach. In previous work, we found that agencies can use limited funding more efficiently by coordinating their activities and can strengthen their collaboration by establishing joint strategies. The need to increase coordination of efforts to promote public awareness in this area was discussed in 2012 at a high-level multi-agency meeting on elder justice. One participant observed that federal efforts to promote awareness are unorganized and uncoordinated, and one expert noted that there is a clear need for a strategic, multifaceted public awareness campaign. In our November 2012 report, we recommended that the federal government take a more strategic approach to its efforts to increase public awareness of elder financial exploitation. HHS has begun to act on this recommendation, as described below. In our November 2012 report, we could identify no federal requirements for banks to train employees to recognize or report elder financial exploitation, even though they are well-positioned to identify and report it because they are able to observe it firsthand. 
For example, a bank teller who sees an older adult regularly is likely to notice if that individual is accompanied by someone new and seems pressured to withdraw money or if the older adult suddenly begins to wire large sums of money internationally. However, many social services and law enforcement officials we spoke with indicated banks do not always recognize or report exploitation or provide the evidence needed to investigate it. The Administration on Aging (AoA) is considering collaborating with one large national bank on a project to develop bank training on elder financial exploitation. In addition, financial institutions are required to file Suspicious Activity Reports (SARs) with the Financial Crimes Enforcement Network (FinCEN) for potentially illegal bank transactions that involve, individually or in the aggregate, at least $5,000; FinCEN has issued an advisory to banks that describes elder financial exploitation and its potential indicators. Our November 2012 report recommended that CFPB develop a plan to educate bank staff on elder financial exploitation. CFPB concurred with our recommendation and has begun to share information on currently available training programs with banks and industry associations. Federal agencies have taken some steps to promote and inform collaboration between the social services and criminal justice systems in states, which officials in three of the four states we contacted for our November 2012 report identified as a challenge. These two systems do not respond to exploitation or carry out their work in the same way: the social services system protects and supports victims, while the criminal justice system investigates and prosecutes crimes. As a result, there can be difficulties communicating across disciplines and different views regarding limits on information-sharing. Yet due to the nature of elder financial exploitation, collaboration can be an effective means to facilitate case investigation and prosecution.
We identified a number of local initiatives to help bridge the gap between social services and criminal justice agencies. For example, in some Pennsylvania and New York counties, multidisciplinary groups meet to discuss and help resolve all types of elder abuse cases. The Philadelphia Financial Exploitation Task Force and financial abuse specialist teams in some California counties, on the other hand, concentrate only on elder financial exploitation cases. At the federal level, a few grants from AoA and Justice to combat elder abuse or other crimes have required or encouraged collaboration in states, such as the use of multi-disciplinary teams. In our November 2012 report, we recommended that the federal government take steps to help state and local agencies collaborate. HHS has begun to act on this recommendation, as described below. FTC’s Consumer Sentinel Network is an online database that houses millions of consumer complaints available to law enforcement. Sentinel’s roster of 28 current data contributors includes 12 state attorneys general, the FBI’s Internet Crime Complaint Center, and the Council of Better Business Bureaus. More than 2,600 users from over 2,000 law enforcement agencies worldwide use the system to share information, prosecute cases, and pursue leads (FTC, Consumer Sentinel Network Data Book for January–December 2011 (2012)). In our November 2012 report, we recommended that FTC collect information through Consumer Sentinel that would help identify complaints involving elder financial exploitation. FTC expressed concern that requesting additional information could decrease the numbers of people who submit complaints. It additionally said that it may be possible to determine if a complaint involves elder fraud using other information in the complaint. We maintain the importance of our recommendation to FTC. Elder financial exploitation is a complex, nationwide problem, and combating it effectively requires a concerted, ongoing effort on the part of states and localities. It also requires support and leadership at the federal level. Each of the seven federal agencies we reviewed is working to address this problem in ways that are consistent with its mission.
However, preventing and responding to elder financial exploitation also calls for a more cohesive and deliberate national strategy. This is an opportune time for the federal government to be looking at elder financial exploitation, because the Elder Justice Act of 2009 has established the Elder Justice Coordinating Council (EJCC)—a group of federal agency heads charged with setting priorities, coordinating federal efforts, and recommending actions to ensure elder justice nationwide—which has recently begun to examine these issues. The EJCC can be the vehicle for defining and implementing such a national strategy. To this end, in our November 2012 report we recommended that the EJCC develop a written national strategy for combating elder financial exploitation. Among other things, this strategy should ensure coordination of public awareness activities across federal agencies; promote agency collaboration; and promote investigation and prosecution of elder financial exploitation. The EJCC held an official meeting on May 13, 2013. Its working group presented a number of recommendations, including ones that focused on enhancing interagency collaboration, strategically promoting public awareness, and combating financial exploitation. Next steps will include receiving public comments and drafting a federal agenda for elder justice activities for EJCC consideration. Chairman Terry, Ranking Member Schakowsky, and Members of the Subcommittee, this concludes my statement. I would be happy to answer any questions you might have. For questions about this testimony, please contact Kay Brown at (202) 512-7215 or brownke@gao.gov. Contacts from our Office of Congressional Relations and Office of Public Affairs are on the last page of this statement. Individuals who made key contributions to this testimony include Clarita Mrena, Eve Weisberg, Monika Gomez, Brittni Milam, and James Bennett. 
Contributing to our November 2012 report were Andrea Dawson, Gary Bianchi, Jessica Botsford, Jason Bromberg, Alicia Cackley, Paul Desaulniers, Holly Dye, Eileen Larence, Jean McSween, Chris Morehouse, Claudine Pauselli, Almeta Spencer, Kate Van Gelder, and Craig Winslow.
Elder financial exploitation is the illegal or improper use of an older adult's funds or property. It has been described as an epidemic with society-wide repercussions. While combating elder financial exploitation is largely the responsibility of state and local social service, criminal justice, and consumer protection agencies, the federal government has a role to play in this area. GAO was asked to testify on the different forms elder financial exploitation can take and the ways federal agencies can help combat it. This testimony is based on information in a report issued in November 2012 (see GAO-13-110). To obtain this information, GAO interviewed public officials in California, Illinois, New York, and Pennsylvania--states that had large elderly populations and initiatives to combat financial exploitation; officials from seven federal agencies; and experts in this field. GAO also reviewed federal strategic plans and other relevant documents, research, laws, and regulations. Older adults are being financially exploited by strangers who inundate them with mail, telephone, or Internet scams; unscrupulous financial services professionals; and untrustworthy in-home caregivers. Local law enforcement authorities in the four states GAO visited indicated that investigating and prosecuting the growing number of cases involving interstate and international mass marketing fraud--such as "grandparent scams," which persuade victims to wire money to bail "grandchildren" out of jail or pay their expenses--is particularly difficult. In addition, older adults, like other consumers, may lack the information needed to make sound decisions when choosing a financial services provider. As a result, they can unknowingly risk financial exploitation by those who use questionable tactics to market unsuitable or illegal financial products. 
Local officials also noted that it is difficult to prevent exploitation by in-home caregivers, such as home health or personal care aides, individuals older adults must rely on. GAO identified several ways the federal government is, or could be, supporting state and local efforts to combat elder financial exploitation. With regard to mass marketing scams, GAO has recommended that the Department of Justice reach out to law enforcement authorities in states to clarify how they can obtain the federal assistance needed to handle interstate or international mass marketing fraud. To help prevent exploitation by financial services professionals, the Securities and Exchange Commission links to a public website where the qualifications of individual financial services providers can be found, and the Consumer Financial Protection Bureau has issued guidance on how best to convey this information to older adults. To prevent exploitation by in-home caregivers, the Centers for Medicare and Medicaid Services provides grants that fund background checks for employees of agencies that provide these services. Other federal efforts are broader in scope and help combat all types of elder financial exploitation. For example, each of the seven federal agencies GAO reviewed has independently undertaken activities to increase public awareness of this exploitation; however, GAO has recommended that the federal government develop a more strategic approach to these efforts. Further, recognizing the importance of collaboration among those interacting with older adults, GAO has recommended measures to educate bank staff on how to identify potential exploitation and improve collaboration among social service and law enforcement agencies, among others, as they respond to reports of exploitation. GAO has also noted the need for more data on the extent and nature of elder financial exploitation, some of which can be collected from consumer complaints filed with federal agencies. 
Finally, preventing and responding to elder financial exploitation calls for a more cohesive and deliberate national strategy. To this end, GAO has recommended that the Elder Justice Coordinating Council--a group of federal agency heads charged with setting priorities and coordinating federal efforts to combat elder abuse nationwide--develop a written national strategy for combating elder financial exploitation. In its November 2012 report, GAO made multiple recommendations to federal agencies, and the agencies generally agreed with the recommendations.
On June 12, 2002, Congress passed the Public Health Security and Bioterrorism Preparedness and Response Act of 2002, which requires specific activities related to bioterrorism preparedness and response. For example, it calls for steps to improve the nation’s preparedness for bioterrorism and other public health emergencies by increasing coordination and planning for such events; developing priority countermeasures; and improving state, local, and hospital preparedness and response. The Secretary of HHS is required to provide for the establishment of an integrated system or systems of public health alert communications and surveillance networks among (1) federal, state, and local public health officials; (2) public and private health-related laboratories, hospitals, and other health care facilities; and (3) any other entities that the Secretary determines are appropriate. These networks are to allow for secure and timely sharing and discussion of essential information concerning bioterrorism and other public health emergencies, as well as recommended methods for responding to such an attack or emergency. In addition, no later than 1 year after the enactment of the law, the Secretary, in cooperation with health care providers and state and local public health officials, was to establish any additional technical and reporting standards, including those for network interoperability. Since fiscal year 2002, HHS has funded over $2.7 billion for public health preparedness efforts through grants administered by CDC and just over $1 billion for hospital preparedness grants administered by the Health Resources and Services Administration. To encourage the integration of health care system response plans with public health department plans, HHS has incorporated both public health preparedness and hospital performance goals into the agreements that the department uses to fund state and local public health preparedness improvements. 
The funding guidance provided by HHS to state and local governments calls for improvements in seven key areas: preparedness planning and readiness assessment, surveillance and epidemiology capacity, laboratory capacity for handling biological agents, laboratory capacity for handling chemical agents, health alert network/communication and IT, risk communication and health information dissemination, and education and training. Over the past year, federal actions to encourage the use of IT for health care delivery and public health have been accelerated. In April 2004, the President established the goal that health records for most Americans should be electronic within 10 years and issued an executive order to “provide leadership for the development and nationwide implementation of an interoperable health information technology infrastructure to improve the quality and efficiency of health care.” As part of this effort, the President tasked the Secretary of HHS to appoint a National Coordinator for Health Information Technology—which he subsequently did 1 week later. The President’s executive order called for the Coordinator to develop a strategic plan to guide the implementation of interoperable health IT in the public and private health care sectors. In July 2004, HHS issued a framework for strategic action that includes four broad goals; goal four of that framework is directed at improvements in public health. Further, DHS released the National Response Plan this past January, under which HHS is to continue to lead the federal government in providing public health and medical services during major disasters and emergencies. In this role, HHS is to coordinate all federal resources related to public health and medical services that are made available to assist state, local, and tribal officials during a major disaster or emergency. 
As we reported in May 2003, IT can play an essential role in supporting federal, state, local, and tribal governments in public health preparedness and response. Development of IT can build upon the existing systems capabilities of state and local public health agencies, not only to provide routine public health functions, but also to support public health emergencies, including bioterrorism. In addition, according to the Institute of Medicine, the rapid development of new IT offers the potential for greatly improved surveillance capacity. Finally, for public health emergencies in particular, the ability to quickly exchange data between providers and public health agencies—or among providers—is crucial in detecting and responding to naturally occurring or intentional disease outbreaks. Because of the dynamic and unpredictable nature of public health emergencies, various types of IT systems may be used during the course of an event. These include surveillance systems, which facilitate the performance of ongoing collection, analysis, and interpretation of disease-related and environmental data so that responders and decision makers can plan, implement, and evaluate public health actions (these systems include devices to collect and identify biological agents from environmental samples, and they make use of IT to record and transmit data); and communications systems, which facilitate the secure and timely exchange of information to the relevant responders and decision makers so that appropriate action can be taken. Other types of IT may also be used, such as diagnostic systems, which identify particular pathogens and those that include data from food, water, and animal testing, but such systems are not among the major federal public health IT initiatives. 
Although state health departments have primary responsibility for disease surveillance in the United States, total responsibility for surveillance is shared among health care providers; more than 3,000 local, county, city, and tribal health departments; 59 state and territorial health departments; more than 180,000 public and private laboratories; and public health officials from multiple federal agencies. In addition, the United States is a member of the World Health Organization, which is responsible for coordinating international disease surveillance and response actions. While health care providers are responsible for the medical diagnosis and treatment of their individual patients, they also have a responsibility to protect public health—a responsibility that includes helping to identify and prevent the spread of infectious diseases. Because health care providers are typically the first health officials to encounter cases of infectious diseases—and have the opportunity to diagnose them—these professionals play an important role in disease surveillance. Generally, state laws or regulations require health care providers to report confirmed or suspected cases of notifiable diseases to their state or local health department. States publish lists of the diseases they consider notifiable and therefore subject to reporting requirements. According to the Institute of Medicine, most states also require health care providers to report any unusual illnesses or deaths, especially those for which a cause cannot be readily established. However, according to CDC, despite state laws requiring the reporting of notifiable diseases, a significant proportion of these cases are not reported, which is a major challenge in public health surveillance. Health care providers rely on a variety of public and private laboratories to help them diagnose cases of notifiable diseases. In some cases, only laboratory results can definitively identify pathogens. 
Every state has at least one public health laboratory to support its infectious diseases surveillance activities and other public health programs. State laboratories conduct testing for routine surveillance or as part of clinical or epidemiologic studies. For rare or unusual pathogens, these laboratories provide diagnostic tests that are not always available in commercial laboratories. State public health laboratories also provide specialized testing for low-incidence but high-risk diseases such as tuberculosis and botulism. Results from state public health laboratories are used by epidemiologists to document trends and identify events that may indicate an emerging problem. Upon diagnosing a case involving a notifiable disease, local health care providers are required to send the reports to state health departments through state and local disease-reporting systems, which range from paper-based reporting to secure, Internet-based systems. States, through their state and local health departments, have principal responsibility for protecting the public’s health and therefore take the lead in conducting disease surveillance and supporting response efforts. Generally, local health departments are responsible for conducting initial investigations into reports of infectious diseases, employing epidemiologists, physicians, nurses, and other professionals. Local health departments are also responsible for sharing information that they obtain from providers or other sources with the state department of health. State health departments are responsible for collecting surveillance information statewide, coordinating investigations and response activities, and voluntarily sharing surveillance data with CDC and others. States vary in their requirements governing who should report notifiable diseases; in addition, the deadlines for reporting these diseases after they have been diagnosed vary by disease. 
State health officials conduct their own analyses of disease data to verify cases, monitor the incidence of diseases, and identify possible outbreaks. In reporting their notifiable disease data to CDC, states use multiple and sometimes duplicative systems. States are not legally required to report information on notifiable diseases to CDC, but CDC officials explained that the agency makes such reporting from the states a prerequisite for receiving certain types of CDC funding. Generally, the federal government’s role in disease surveillance is to collect and analyze national disease surveillance data and maintain disease surveillance systems. Federal agencies investigate the causes of infectious diseases and maintain their own laboratory facilities. They also use communications systems to share disease surveillance information. In addition, federal agencies provide funding and technical expertise to support disease surveillance at the state, local, and international levels. Federal agencies such as CDC, the Food and Drug Administration, and DOD conduct disease surveillance using systems that gather data from various locations throughout the country to monitor the incidence of infectious diseases. In addition to using surveillance systems to collect and analyze notifiable disease data reported by states, federal agencies use other surveillance systems to collect data on different diseases or from other sources (e.g., international sources). These systems supplement the state data on notifiable diseases by monitoring surveillance information that states do not collect. In general, surveillance systems are distinguished from one another by the types of infectious diseases or syndromes they monitor and the sources from which they collect data. Some disease surveillance systems rely on groups of selected health care providers who have agreed to routinely supply information from clinical settings on targeted diseases. 
A relatively new type of surveillance system, known as a syndromic surveillance system, monitors the frequency and distribution of health-related symptoms—or syndromes—among people within a specific geographic area. These syndromic surveillance systems are designed to detect anomalous increases in certain syndromes, such as skin rashes, that may indicate the beginning of an infectious disease outbreak. Some monitor data from hospital and emergency room admissions or data from over-the-counter drug sales. Other data sources may include poison control centers, health plan medical records, first-aid stations, emergency medical service data, insurer claims, and discharge diagnosis information. For syndromic data to be analyzed effectively, information must be timely, and the analysis must take into account the context of the locality from which the data were generated. Because syndromic surveillance systems monitor symptoms and other signs of disease outbreaks instead of waiting for clinically confirmed reports or diagnoses of a disease, some experts believe that syndromic surveillance systems could help public health officials increase the speed with which they may identify outbreaks. However, as we reported last September, syndromic surveillance systems are relatively costly to maintain compared with other types of disease surveillance and are still largely untested. Two federal agencies are involved in major public health IT initiatives that focus on disease surveillance and communications. CDC, one of HHS’s divisions, has primary responsibility for conducting national disease surveillance and developing epidemiological and laboratory tools to enhance surveillance of disease, including public health emergencies. It also provides an array of technical and financial support for state infectious disease surveillance. DHS’s mission involves, among other things, protecting the United States against terrorist attacks, including bioterrorism. 
Its Science and Technology (S&T) Directorate serves as the department’s primary research and development arm. Its focus is on catastrophic terrorism—threats to the security of the United States that could result in large-scale loss of life and major economic impact. S&T’s work is designed to counter those threats, both by improving current technological capabilities and by developing new ones. (Other federal agencies’ roles in public health are described in app. II.) CDC’s major IT initiative, known as PHIN, is a national initiative to implement a multiorganizational business and technical architecture for public health information systems. After the 2001 anthrax incidents, CDC was mandated to increase national preparedness and capabilities to respond to naturally occurring diseases and conditions as well as to the deliberate use of biological, chemical, and radiological agents. CDC sees PHIN as an essential part of its strategy to achieve this mandate. According to CDC, the PHIN architecture defines and documents the systems needed to support public health; identifies the industry standards that are necessary to make these systems interoperable; develops the specifications necessary to make these standards do the work of public health; defines integration points for systems to work together to meet the needs of public health; establishes tools and components that support standards-based data exchange; and supports the certification process necessary to establish interoperability. To help achieve its goals, PHIN is also intended to integrate and coordinate existing systems, and CDC makes PHIN software available for optional use by state and local public health agencies. PHIN is substantial in size and scope, because it is intended to serve as a comprehensive architecture, information exchange network, and set of services that will integrate existing capabilities and advance the ways in which IT can support public health. 
It is intended to improve public health systems and networks and to provide a means for exchanging data with other federal agencies, state and local government agencies, the private health care sector, and others. As part of PHIN, CDC has established the PHIN Preparedness initiative, which it describes as striving to accelerate the pace at which jurisdictions acquire or acquire access to public health preparedness systems. This initiative focuses on the near-term aspects of PHIN. According to CDC, the agency and its public health partners have identified a set of functional requirements defining the core capabilities for preparedness systems; these are categorized into six broad functional areas: Early event detection: The early identification of bioterrorism and naturally occurring health events in communities. Outbreak management: The capture and management of information associated with the investigation and containment of a disease outbreak or public health emergency. Connection of laboratory systems: The development and adoption of common specifications and processes to enable public health laboratories to electronically exchange information with public health agencies. Countermeasure and response administration: The management and tracking of measures taken to contain an outbreak or event and to provide protection against a possible outbreak or event. Partner communications and alerting: The development of a nationwide network of integrated communications systems capable of rapid distribution of health alerts and secure communications among public health professionals involved in an outbreak or event. Cross-functional components: Technical capabilities, or components, common across functional areas that are necessary to fully support PHIN Preparedness requirements. 
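The "partner communications and alerting" capability above amounts to routing each health alert to the officials whose jurisdiction and role it concerns. A minimal sketch of profile-based alert routing follows; the Subscriber and Alert structures and their fields are illustrative assumptions, not part of any PHIN specification:

```python
from dataclasses import dataclass

@dataclass
class Subscriber:
    """A public health official registered to receive alerts (hypothetical structure)."""
    email: str
    jurisdictions: set  # e.g., {"State A"}
    roles: set          # e.g., {"epidemiologist"}

@dataclass
class Alert:
    """A health alert targeted by jurisdiction and role (hypothetical structure)."""
    subject: str
    jurisdictions: set
    roles: set

def route_alert(alert, subscribers):
    """Return the subscribers whose profile overlaps the alert's targeting."""
    return [s for s in subscribers
            if s.jurisdictions & alert.jurisdictions and s.roles & alert.roles]

subscribers = [
    Subscriber("epi@state-a.example", {"State A"}, {"epidemiologist"}),
    Subscriber("lab@state-b.example", {"State B"}, {"lab director"}),
]
alert = Alert("Suspected outbreak", {"State A"}, {"epidemiologist"})
recipients = route_alert(alert, subscribers)
```

A production alerting network would add secure delivery, acknowledgment tracking, and escalation, but the matching step reduces to this kind of profile intersection.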
CDC officials stated that by September 2005, the agency will expect states to meet PHIN Preparedness requirements in these areas as a condition for receiving public health preparedness funding; CDC expects that this condition on funding will promote a wider adoption of PHIN standards. Table 1 presents communications and surveillance applications that are part of the PHIN initiative (some of which are significant system development efforts in themselves), along with the PHIN Preparedness functional areas that they support. Many of these applications are associated with larger initiatives that predated PHIN (see table 2), which are now incorporated under the PHIN umbrella. For example, the origins of NEDSS date to 1995, when CDC co-authored a report that documented the problems of fragmentation and incompatibility in the nation’s disease surveillance systems. The recommendations in this report led CDC to develop the NEDSS initiative, which was begun in October 1999 and incorporated into PHIN in 2002. As part of its mission to protect the nation against terrorist attacks (including possible bioterrorism), DHS is also pursuing major public health IT initiatives. These initiatives and associated programs, which are primarily focused on signal interpretation and biosurveillance, are described in table 3. Figure 1 illustrates a simplified flow of existing surveillance information and health alerts among local, state, and federal agencies. This diagram does not show all flows of information that would occur in the case of an outbreak. For example, local health agencies may send alerts to health care providers. According to CDC, costs for its PHIN initiatives and applications for fiscal years 2002 through 2005, totaling almost $362 million, are summarized in table 4. Most of these costs support local, state, and federal public health activities. 
According to DHS, IT costs for its biosurveillance initiatives for fiscal years 2003 through 2005 total about $45 million; these are summarized in table 5. This table does not reflect the total costs for the programs supporting these IT initiatives. CDC and DHS have made progress on federal public health IT initiatives, including CDC’s PHIN initiative, which is intended to provide the nation with integrated public health information systems to counter national civilian public health threats, and two major initiatives at DHS—primarily focused on signal interpretation and biosurveillance—one of which is associated with three other programs. However, while progress has been made, more work remains, particularly in surveillance and data exchange. PHIN communications systems are being used, and improvements to surveillance systems (disease, syndromic, and environmental monitoring) are still being developed. Other PHIN applications are available for optional use by state and local public health officials, but they are not widely used because of system limitations. DHS’s two major biosurveillance IT initiatives are still in the development stage, and one of the associated programs—BioWatch—is operational. However, as initially deployed, BioWatch required modification, because its three IT components did not communicate with each other, requiring redundant data entry. According to DHS, it has developed a solution to this interoperability problem and implemented it at two locations; DHS plans to install that solution in the remaining BioWatch locations. Table 6 briefly describes the status of CDC’s PHIN applications, including operational status, number of installations or users, and future plans. Of the various PHIN applications, one is still in the planning process, two are partially operational, and five are operational. 
Figure 2 shows the time frames for the planning, development, and implementation of the PHIN applications; these applications vary considerably both in complexity and in time needed to complete implementation. Health Alerting. The Health Alerting application, which is used to broadcast e-mail alerts to state and local public health officials about disease outbreaks, became operational in October 2000. This application provides full-time (24 hours a day, 7 days a week) Internet access and broadcast e-mail and fax capabilities. The Health Alerting application is part of the Health Alert Network initiative, which provides grant funding to states and local public health agencies for enhancement of their IT infrastructures. Using these funds, states and localities have either built their own Health Alert Networks or acquired commercial systems for alerting state and local officials. Some state Health Alert Networks use more sophisticated applications than the CDC Health Alerting application, providing various kinds of alerts based on user profiles and allowing document sharing. Epi-X. Epi-X, which is designed to be a secure, Web-based communications system through which public health professionals share information on public health emergencies, was implemented in December 2000 and is being used by state and local public health officials. Epi-X includes multiple mechanisms for alerting; secure, moderated communications and discussion about disease outbreaks and other acute health events as they evolve; and a searchable report database. Most of the state and local health officials with whom we spoke were satisfied with the system. However, some officials questioned the need for both Health Alerting and Epi-X, since both applications have similar functionality and are used by some of the same public health officials. According to CDC, it is planning to create a common platform for use by both applications. The National Electronic Disease Surveillance System (NEDSS). 
The NEDSS initiative promotes the use of data and information systems standards for the development of interoperable surveillance systems at federal, state, and local levels. It is intended to minimize the problems of fragmented, disease-specific surveillance systems; however, this goal is still years away from being achieved. A primary goal of NEDSS is the ongoing, automatic capture and analysis of data that are already available electronically. Its system architecture is designed to integrate and replace several current CDC surveillance systems, including the National Electronic Telecommunications System for Surveillance, the HIV/AIDS reporting system, and the systems for vaccine preventable diseases, tuberculosis, and other infectious diseases. In previous fiscal years, CDC funded 50 states and 7 localities. These states and localities can use CDC’s NEDSS Base System or build systems compatible with NEDSS/PHIN standards. The initiative includes an architecture to guide states and CDC as they build NEDSS-compatible systems, which can be either commercial or custom developed. The initiative is also intended to promote the use of data standards to advance the development of interoperable disease surveillance systems at federal, state, and local levels. Besides providing a secure, accurate, and efficient way to collect, process, and transmit data to CDC, the NEDSS Base System is intended to provide a platform upon which program area modules can be built to meet state and program area data needs. (Programs may be focused on specific diseases, populations, or other areas—such as smoking or obesity.) Program area modules are critical to eventually reducing the many program-specific surveillance systems that CDC currently maintains by consolidating the data collection of the various programmatic disease surveillance activities that are currently in place. Although CDC has been developing the NEDSS Base System since 2000, it is still only partially deployed. 
There are no clear milestones and plans for when the Base System will become fully deployed, although multiple versions of the Base System have been developed and deployed in several states. According to CDC, the NEDSS Base System has been deployed in 5 states since December 2004, and it expects implementation to continue with the 11 remaining states that are planning to use the Base System, but the implementation time frames will depend on when these states are ready to accept the system. Table 7 summarizes the status of NEDSS system implementation across the nation, which shows that about half of the states and localities have operational NEDSS systems. In addition, four NEDSS program area modules are being used, and six are in the process of being developed. Additional program area modules will be developed for other disease-specific areas in the coming years. BioSense. CDC’s BioSense, which the agency describes as an early event detection system, is designed to provide near real-time event detection by using data (without patient names or medical numbers) from existing health-related databases. Although CDC began using BioSense data in late 2003, the BioSense application was implemented for state and local use in May 2004. BioSense is continuously being updated, and current plans for phase two of BioSense development call for enhancements to begin in May 2005. BioSense is a Web-based application that currently provides CDC and state and local users with the ability to view syndromic and prediagnostic data: specifically, Defense and Veterans Affairs ambulatory care data, BioWatch laboratory results, and national clinical labs data. Initially, CDC also provided data on sales of over-the-counter medication, but these were later discontinued. BioSense data are provided in the form of data reports displayed in various ways, rather than as raw data that can be input to analytical systems. 
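The early event detection that BioSense is designed to support generally rests on aberration-detection statistics applied to daily counts of a syndrome. A minimal sketch of one common approach, comparing the current day's count against a recent baseline window (in the spirit of CDC's published EARS methods; the 7-day baseline, 2-day lag, and 3-standard-deviation threshold used here are illustrative choices, not BioSense's actual parameters):

```python
from statistics import mean, stdev

def detect_aberration(counts, baseline_days=7, lag=2, threshold=3.0):
    """Flag the most recent daily count if it exceeds the baseline mean
    by more than `threshold` standard deviations.

    counts: chronological list of daily syndrome counts, most recent last.
    lag: days skipped between the baseline window and today, so that the
         early phase of an outbreak does not inflate its own baseline.
    """
    today = counts[-1]
    baseline = counts[-(baseline_days + lag + 1):-(lag + 1)]
    mu = mean(baseline)
    sigma = stdev(baseline) or 1.0  # guard against a perfectly flat baseline
    return (today - mu) / sigma > threshold

# Stable counts for nine days, then a spike on the last day:
history = [10, 12, 9, 11, 10, 12, 11, 10, 11, 40]
flagged = detect_aberration(history)
```

In practice such a detector would run per syndrome and per geographic area, and flagged days would be investigated by epidemiologists rather than treated as confirmed outbreaks.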
Although CDC uses BioSense for a number of federal bioterrorism preparedness activities, BioSense is not extensively used by the state and local public health officials with whom we spoke, primarily because of limitations in the data and its presentation. These officials stated that the DOD and VA data were not useful to them, either because they were in locations without large military or veteran populations, or because they could get similar data elsewhere. For instance, many of these officials have access to local syndromic surveillance systems, which better fit their needs because the systems have better capabilities or because they provide data that are more timely than BioSense data. Some of these officials stated that they would prefer CDC to provide data for them to conduct their own analyses, especially data from national sources such as clinical laboratories, rather than displaying the data on the BioSense Web site. According to CDC officials, they will provide raw data to public health agencies upon request, have increased the number of data sets available, and have expanded the scope of user support by (1) increasing communications with state and local public health departments in the use of and response to daily surveillance data patterns, (2) monitoring data during special events (e.g., a presidential inauguration and sporting events) at state and local request, and (3) contracting with Johns Hopkins University for development of a standard operating procedure for monitoring and using early event detection. National Environmental Public Health Tracking Network (NEPHTN). Initiated in 2001, NEPHTN is still in the planning stage. CDC is planning to begin development of the network in 2006 and implementation of phase one in 2008. This initiative involves intra- and interagency collaboration among CDC and other federal agencies. 
CDC established a memorandum of understanding in 2003 with the Environmental Protection Agency (EPA) to coordinate activities relating to EPA’s National Environmental Information Exchange Network and CDC’s National Environmental Public Health Tracking Network. To date, three collaborative projects have been initiated: (1) a demonstration project in the Atlanta metropolitan area to test data linkage methods and the utility of linked data; (2) a project to evaluate how different types of air quality characterization data can be used to link environmental and public health data; and (3) a project in New York to examine specific technical interoperability issues that would affect data exchange between EPA’s and CDC’s networks. As envisioned, NEPHTN will be a distributed, secure, Web-based network that will provide access to environmental and health data that are collected by a wide variety of agencies, such as individual state networks. Once established, it should also provide access to environmental, health, and linked environmental-health data from both centralized and decentralized data stores and repositories, implementing a common data vocabulary to support electronic data exchanges within and across states, regions, and the nation. Outbreak Management System. The Outbreak Management System is an application designed for case tracking during the investigation of disease outbreaks. Initially developed for use by CDC, the system is now available for use by state and local public health agencies. The project began as the Bioterrorism Field Response Application and was scoped to include only requirements related to bioterrorism response by CDC-deployed field teams. Since its inception in 2002, the scope has been broadened to include any epidemiologic investigation where standard data collection and data sharing would be advantageous. 
However, although the system is in use at CDC, none of the state and local public health officials with whom we spoke use the system—either because it cannot exchange data with other software applications, or because these agencies have their own capability for tracing cases of infectious diseases. According to CDC officials, the use of the Outbreak Management System is provided as an option for state and local public health agencies. Although only CDC and one state agency have used the application in support of outbreaks, four state agencies and one federal entity have evaluated the software for potential use and may implement it in the future. LRN Results Messenger. CDC’s LRN Results Messenger utility is used by DHS’s BioWatch initiative for transmitting data to CDC; however, it is burdensome to use, according to the BioWatch cities included in our review (BioWatch is discussed in more detail in the next section of this report). According to CDC, it anticipates releasing the next version of the LRN Results Messenger in September 2005, which should address the usability issues. PHIN Messaging System. The PHIN Messaging System is available for use, but only CDC and a few states and local public health agencies use it. As of March 1, 2005, 51 organizations used it, according to CDC. As yet, only BioWatch, the NEDSS Base System, and the Laboratory Response Network use PHIN Messaging; according to CDC, these are the major systems that support preparedness needs, and it is focusing on these systems first. DHS is also pursuing two major biosurveillance IT initiatives—the National Biosurveillance Integration System and the Biological Warning and Incident Characterization System (BWICS). The BWICS initiative, in addition, is associated with three other biosurveillance programs. Of these five, one is operational, but it has interoperability and other limitations, one is a demonstration project, and three are in development. 
All five were initially under the oversight of DHS’s S&T Directorate; one is now the responsibility of the directorate for Information Analysis and Infrastructure Protection. Table 8 briefly describes the status and plans of DHS’s biosurveillance IT initiatives for the current fiscal year. Most of DHS’s biosurveillance IT initiatives are still being planned or developed. Figure 3 shows time lines for the five DHS IT initiatives. The one DHS surveillance initiative that is operational—BioWatch—is an environmental monitoring system that was developed and implemented within a 3-month period, according to DHS officials. DHS originally intended for local public health agencies to process and analyze all BioWatch data; however, at CDC’s request, DHS agreed to share data with CDC for inclusion in BioSense. BioWatch consists of three IT components: One component of BioWatch tracks the environmental samples as they are collected; it was developed by the Department of Energy’s Los Alamos National Laboratory. A second component performs sample testing and reports the results; this is a commercial product. The third component, CDC’s LRN Results Messenger, transmits the test results from the laboratory that processes the samples to CDC for analysis. As deployed, none of these three components could exchange data electronically, so that redundant, manual data entry has been required to transfer data among the three systems. State and local public health officials in BioWatch locations told us that they were dissatisfied with the deployment of BioWatch because of this need for repetitive data entry and because they were not involved in the system’s planning and implementation. DHS hired a contractor to resolve BioWatch’s interoperability problem, and DHS officials now report that they have begun implementing the resulting technical improvements in BioWatch laboratories. 
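To illustrate the kind of fix such interoperability work involves, the sketch below bridges two incompatible record formats so that data captured by one component need not be re-keyed into another. This is a minimal illustration only: the field names, record layouts, and JSON target format are hypothetical and are not drawn from the actual BioWatch components.

```python
# Hypothetical sketch of a format-bridging adapter between two systems that
# cannot exchange data directly. All field names and layouts are invented.

import csv
import io
import json

def sample_record_to_lab_format(tracking_row: dict) -> str:
    """Map a (hypothetical) sample-tracking export row to a JSON payload a
    (hypothetical) lab-results component could ingest, replacing manual
    re-entry of the same data."""
    return json.dumps({
        "specimen_id": tracking_row["sample_id"],
        "collected_at": tracking_row["collection_time"],
        "site": tracking_row["collector_site"],
    })

# One exported row from the (hypothetical) sample-tracking component
export = "sample_id,collection_time,collector_site\nBW-0042,2005-03-01T08:30,Station-7\n"
row = next(csv.DictReader(io.StringIO(export)))
payload = sample_record_to_lab_format(row)
```

Once each component can emit or accept a shared format, data flow from collection through laboratory testing to CDC analysis can proceed without duplicate keying, which was the complaint raised by state and local officials.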
Additionally, EPA’s Office of Inspector General recently reported that the agency did not provide adequate oversight of sampling operations for BioWatch to ensure that quality assurance guidance was adhered to, potentially affecting the quality of the samples taken; DHS officials state that this oversight issue has now been resolved. In the broader context of environmental monitoring, questions exist about detection capabilities for environmental surveillance. As we reported in May 2003, real-time detection and measurement of biological agents in the environment is challenging because of the number of potential agents to be identified, the complex nature of the agents themselves, the countless number of similar micro-organisms that are a constant presence in the environment, and the minute quantities of pathogen that can initiate infection. In May 2004, the Department of Defense reported that the capability for real-time detection of biological agents is currently unavailable and is unlikely to be achieved in the near to medium term. A second initiative, the BioWatch Signal Interpretation and Integration Program (BWSIIP), was established to respond to user needs regarding BioWatch. According to DHS, the initiative is intended to develop a system that will help BioWatch jurisdictions to better understand the public health or national security implications of a confirmed positive result for a biological agent from BioWatch, as well as to respond appropriately. BWSIIP is to be implemented by a consortium, initiated in 2004, that includes Carnegie Mellon University, the University of Pittsburgh, and the Johns Hopkins University Applied Physics Laboratory. The current BWSIIP pilot is scheduled for completion in fiscal year 2006. After DHS transitions BWSIIP to the BWICS initiative, local public health agencies will use locally available applications or tools provided by DHS for that function. 
For the two remaining major biosurveillance IT initiatives, DHS is still developing requirements (lessons learned from its one demonstration project, BioNet, are being incorporated into BWICS). BWICS is to integrate data from environmental monitoring and health surveillance systems, and the pilot is expected to be completed in fiscal year 2006, according to DHS officials. DHS did not complete requirements development in the two pilot cities as scheduled, and it recently changed one of the original pilot cities, requiring a new start in requirements development in the new location. After the pilot, DHS is planning to expand BWICS beyond the two pilot cities to other BioWatch locations. The National Biosurveillance Integration System is intended to connect the various federal surveillance systems to DHS’s Homeland Security Operations Center. DHS S&T developed the system requirements and design and transferred the initiative to the Directorate for Information Analysis and Infrastructure Protection in December 2004 for implementation. Despite federal, state, and local government efforts to strengthen the public health infrastructure and improve the nation’s ability to detect, prevent, and respond to public health emergencies, important challenges continue to constrain progress. First, the national health care IT strategy and federal health architecture are still being developed; CDC and DHS will face challenges in integrating their public health IT initiatives into these ongoing efforts. Second, although federal efforts continue to promote the adoption of data standards, developing such standards and then implementing them are challenges for the health care community. Third, these initiatives involve the need to coordinate among federal, state, and local public health agencies, but establishing effective coordination among the large number of disparate agencies is a major undertaking. 
Finally, CDC and DHS face challenges in addressing specific weaknesses in IT planning and management that may hinder progress in developing and deploying public health IT initiatives. In May 2003, we recommended that the Secretary of HHS, in coordination with other key stakeholders, establish a national IT strategy for public health preparedness and response that should identify steps toward improving the nation’s ability to use IT in support of the public health infrastructure. Among other things, we stated that HHS should set priorities for information systems, supporting technologies, and other IT initiatives. Since then, HHS appointed a National Coordinator for Health IT in May 2004 and issued a framework for strategic action in July 2004. This framework is a first step in the development of a national health IT strategy. Goal four of the framework is directed at improvements in public health and states that these improvements require the collection of timely, accurate, and detailed clinical information to allow for the evaluation of health care delivery and the reporting of critical findings to public health officials. Two of the strategies outlined by HHS are aimed at achieving this goal: (1) unifying public health surveillance architectures to allow for the exchange of information among health care organizations, organizations they contract with, and state and federal agencies and (2) streamlining quality and health status monitoring to allow for a more complete look at quality and other issues in real time and at the point of care. The framework for strategic action states that the key challenge in harmonizing surveillance architectures is to identify solutions that meet the reporting needs of each surveillance function, yet work in a single integrated, cost-effective architecture. Like the national health care IT strategy, the federal health architecture is still evolving, according to HHS officials in the Office of the National Coordinator for Health IT. 
Initially targeting standards for enabling interoperability, the federal health architecture is intended to provide a structure for bringing HHS’s divisions and other federal agencies together. As part of achieving HHS’s public health goal of unifying public health surveillance architectures, the federal health architecture program established a work group on public health surveillance that is responsible for recommending a target architecture related to disease surveillance to serve as the framework within the federal sector for developing and implementing public health surveillance systems. The newly formed work group, chaired by CDC and the Department of Veterans Affairs, met for the first time in December 2004. Because the new work group is so recently formed, plans are still being developed to address how CDC’s PHIN initiative and DHS’s IT initiatives will integrate with the national health IT strategy, such as plans to establish regional health information organizations. In the absence of a completed strategy for public health surveillance efforts, state and local public health officials have raised concerns about duplication of effort across federal agencies. Some of the surveillance initiatives in our review address similar functionality and may duplicate ongoing efforts at other federal, state, and local agencies: for example, the use and development of syndromic surveillance systems. CDC is implementing BioSense at the national level, DHS is assisting local public health agencies in implementing local syndromic surveillance systems such as ESSENCE or RODS as part of its biosurveillance initiatives, and many state and local public health agencies have their own ongoing syndromic surveillance systems. As we have reported, syndromic surveillance systems are relatively costly to maintain compared with other types of disease surveillance and are still largely untested. 
With regard to BioSense, HHS states that the agency is taking steps to mitigate costs and risks. State and local public health officials also expressed concern about the federal government’s ability to conduct syndromic surveillance, because they see this type of surveillance as an inherently local function. Furthermore, last year the Council of State and Territorial Epidemiologists reported that while state health departments are given some guidance and leeway to use federal funding to enhance and develop their own disease surveillance activities, no focused mechanism has been established for states to share ideas and experiences with each other and with CDC to determine what has or has not worked, and what efforts are feasible and worth expanding. The Council recommended that to enhance bioterrorism-related surveillance objectives, HHS and CDC form a bioterrorism surveillance initiative steering committee to review current federal surveillance initiatives affecting state and local health departments; to review state-developed surveillance systems; and to recommend surveillance priorities for continuation of funding, further development, or implementation. HHS and CDC have taken steps to respond to these recommendations, but according to the Council, it is not yet satisfied that HHS and CDC have fully addressed its concerns. While HHS and other key federal agencies are organizing themselves to develop a strategy for public health surveillance and interoperability, decisions regarding development and implementation are being made now without the benefit of an accepted national health IT strategy that integrates public health surveillance-related initiatives. In the case of BioSense, these decisions affect the spending of about $50 million this fiscal year and an unknown amount in future years. 
Until a strategy and accompanying architecture are developed, major public health IT initiatives will continue to be developed without an overall, coordinated plan and are at risk of being duplicative, lacking interoperability, and exceeding cost and schedule estimates. In May 2003, we recommended that the Secretary of HHS, as part of his efforts to develop a national strategy, (1) define activities for ensuring that the various standards-setting organizations coordinate their work and reach further consensus on the definition and use of standards, (2) establish milestones for defining and implementing all standards, and (3) create a mechanism to monitor the implementation of standards throughout the health care industry. To support the compatibility, interoperability, and security of federal agencies’ many planned and operational IT systems, the identification and implementation of data, communications, and security standards for health care delivery and public health are essential. As we testified in July 2004, HHS has made progress in identifying standards. While federal action to promote the adoption of these standards continues, the identification and implementation of these standards are an ongoing process. Despite progress in defining health care IT standards, several implementation challenges remain to be worked out, including the establishment of milestones. Currently, no formal mechanisms are in place to ensure coordination and consensus among these initiatives at the national level. HHS officials agree that leadership and direction are still needed to coordinate the various standards-setting initiatives and to ensure consistent implementation of standards for health care delivery and public health. Within the federal health architecture structure, the Consolidated Health Informatics initiative is focused on the adoption of data and communication standards to be used by federal agencies to achieve interoperability of IT within health IT initiatives. 
In March 2003, the Consolidated Health Informatics initiative announced the adoption of 5 standards, and in May 2004, it announced the adoption of another 15 standards. Some of these standards are included as PHIN standards. As of March 1, 2005, CDC has adopted several industry standards and published specifications for PHIN; these standards are grouped by type in table 9. CDC has also initiated a PHIN certification process for its partners (e.g., state and local public health agencies), which is intended to establish whether state and local systems can meet standards for the PHIN preparedness functional areas. In the future, CDC plans to require system owners to first perform self-assessment reviews to ensure that systems meet PHIN standards, followed by reviews by CDC certification teams to confirm PHIN compatibility. To be functionally compatible, systems must be capable of supporting the standards outlined for each PHIN functional area; accordingly, partners must demonstrate that their systems have this capability. In general, state and local public health officials consider the PHIN initiative to be a good framework for organizing the necessary standards for public health interoperability. Most of the state and local officials we spoke with agreed that CDC has done a commendable job of adopting and promoting standards for IT in selected programs. In addition, they agreed that CDC should continue to take a leadership role in pressing for industry standards and providing guidance to states and local entities. However, several officials stated that CDC should focus more of its attention on setting standards and less on developing software applications, which generally do not meet their needs and are not compatible with their specific IT environments. CDC officials say that it is important both to promote the use of industry standards and to develop software applications, especially for state and local public health agencies that have limited IT resources. 
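The practical value of the messaging standards adopted for PHIN, such as HL7, is that any conforming system can parse a message the same way, without custom per-system translation. As a rough illustration, the sketch below splits an HL7 version 2-style pipe-delimited message into segments and fields; the message content is invented and greatly simplified, not an actual PHIN or surveillance message.

```python
# Minimal, illustrative HL7 v2-style parser. Real HL7 processing also handles
# field repetitions, components (^), escape sequences, and encoding characters;
# this sketch shows only the segment/field structure that a shared standard
# makes uniform across systems.

def parse_hl7_v2(message: str) -> dict:
    """Return {segment id: list of fields} for the first occurrence of each segment."""
    segments = {}
    for line in message.strip().split("\r"):  # HL7 v2 separates segments with carriage returns
        fields = line.split("|")
        segments.setdefault(fields[0], fields[1:])
    return segments

# Invented example message: a header segment (MSH) and a patient segment (PID)
msg = ("MSH|^~\\&|SENDING_APP|SENDING_LAB|RECEIVING_APP|CDC|200503011200||ORU^R01|0001|P|2.3.1\r"
       "PID|1||PAT-123")
parsed = parse_hl7_v2(msg)
message_type = parsed["MSH"][7]  # "ORU^R01" (MSH-9 in HL7 numbering, where MSH-1 is the "|" separator)
```

Because the delimiters and segment layout are fixed by the standard rather than by any one vendor or agency, a state system and a federal system that both implement it can exchange such messages without bilateral format negotiation.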
Although federal efforts to promote the adoption of these standards continue, their identification and implementation are an ongoing process. Several implementation challenges remain, including coordination of the various efforts to ensure consensus on standards and establishment of milestones. Until these challenges are addressed, federal agencies will not be able to ensure that their systems can exchange data with other systems when needed. In defining system requirements, federal agencies are challenged by the need to involve such key stakeholders as state and local public health agencies, which are expected to use these systems for reporting data to the federal government. For example, most participating local government agencies and state public health laboratories were told to implement the BioWatch initiative in their metropolitan areas and were given the procedures and software to use for sample management and data collection. According to some public health officials, BioWatch was implemented without a plan for how states and localities would respond to a positive test result, and they were left to develop a response plan after BioWatch had been deployed. One metropolitan area did not implement BioWatch for a year after it became operational, because officials did not have a response plan in place and did not want to be responsible for responding to a potential incident without a plan for handling positive test results. According to DHS officials, since local officials had received funds for emergency preparedness, it was their understanding that BioWatch locations had response plans in place; DHS officials have since developed a methodology to target funds for specific purposes, such as response plans. CDC has been challenged by the need to coordinate with a diverse range of state and local public health agencies. 
For example, CDC has found that it is difficult to implement “standard” systems that would address the full range of different needs and levels of IT resources available at the state level. HHS officials told us that the agency strives to address this challenge by developing applications that are based on industry standards. It also provides the standards and specifications to state and local agencies so that they can build or purchase their own systems that can conform to PHIN standards. Nonetheless, there was consensus among many of the state and local officials in our review that federal agencies did not obtain adequate input from state and local officials. A few state officials with whom we spoke said that CDC does not appropriately consider their need to comply with existing state IT architectures. In addition, in an informal e-mail survey, a small group of state chief information officers agreed that federal agencies do not take into consideration state IT architectures. According to the Council of State and Territorial Epidemiologists, no mechanism has yet been established for state and federal partners to collaboratively review initiatives developed over the past 3 years and plan for the future. Instead, the approach to system design and implementation remains top-down, mainly focused on expanding federally designed syndromic surveillance for early outbreak detection without critical review of its usefulness and cost and without systematic review of state-originated systems and needs. The result is that public health responders may not buy in to and use the federally designed systems, potentially constructive state-originated ideas may not get recognition and wider application, and national bioterrorism-related surveillance will be suboptimal. 
According to CDC, as part of its efforts to obtain state and local input, it hosts an annual PHIN conference and holds meetings with business partner organizations, such as a recent series of meetings on PHIN preparedness requirements with selected state and local officials. In addition, under CDC’s new organizational structure, the new National Center for Public Health Informatics has a division for communications and collaboration with its partners. Further, CDC and DHS have coordinated with each other on specific projects, but that coordination has not been optimal, according to officials from both agencies. According to DHS officials, federal agencies are planning to meet within the next few months to discuss this issue. When asked about their experiences with coordination between CDC and DHS on public health IT initiatives, some of the state and local public health officials included in our review expressed concerns about coordination between the two agencies; one expressed confusion about their roles. Until CDC and DHS establish close coordination on federal public health IT, and state and local public health agencies are more actively involved in the definition and coordination of federal efforts, the effectiveness of the information systems intended to improve disease surveillance and communications may be inadequate. A challenge that both HHS and DHS face in implementing public health IT initiatives is ensuring their effective planning and management. This requires mature, repeatable systems development and acquisition processes to increase the likelihood that projects will be delivered on time and within budget. Key elements of information and technology management include (1) IT investment management and (2) systems development and acquisition management. 
To help federal agencies address these key elements, we and the Office of Management and Budget have developed guidance that provides a framework for the use of rigorous and disciplined processes for planning, managing, and controlling IT resources. We have previously reported on specific weaknesses at both HHS and DHS, including the lack of robust processes for IT investment management and immature systems development and acquisition practices. We made recommendations to HHS and DHS aimed at improving these practices. HHS and CDC have recently taken steps to improve their control over IT projects, which is an important aspect of IT investment management. Because PHIN and some of its initiatives (i.e., BioSense, NEDSS, the Health Alert Network, and NEPHTN) are considered major investments for fiscal year 2006, they required review by HHS. The HHS IT Investment Review Board conducted budgetary reviews for these applications in June 2004 and recommended that the projects move forward as major IT investments; however, there is no documentation that additional HHS reviews were conducted on PHIN and its major applications until this past February, when HHS began implementing procedures for better monitoring of system development projects. In January 2004, CDC announced its intention to provide greater executive-level oversight of IT investments, but it had been reorganizing and did not begin conducting control reviews for major PHIN investments until recently. In May 2004, CDC announced its new National Center for Public Health Informatics to better coordinate IT projects; this center was formally recognized as operational as of mid-April 2005, when Congress approved CDC’s reorganization. Until CDC and HHS management provide a systematic method for IT investment reviews, they will have difficulty minimizing risks while maximizing returns on these critical public health investments. 
Regarding CDC’s systems development and acquisition practices, we observed weaknesses in project management that may hinder progress toward achieving PHIN objectives. For some of the projects in this review, we received limited documentation of project managers’ tracking actual dates against baseline schedules, and it appeared that a number of projects had missed internal schedule dates. In November 2004, CDC started requiring project managers to provide status reports to its program management activity office on a biweekly basis. These reports are now required for five of the systems in our review. CDC officials acknowledged that project dates had to be rebaselined; after the rebaselining, CDC officials stated that their projects met official release dates. Early last year, CDC recognized the need for more direct executive involvement in IT governance and management. This fiscal year, CDC began implementing a project management office to oversee public health informatics projects. Establishing this office and institutionalizing its processes while managing new and ongoing IT projects will be a challenge. The new office has initiated new processes to manage project interdependencies, document and track milestones for projects, and formalize project change requests. For example, the office is beginning to track projects biweekly—asking project managers to report on upcoming milestones, their confidence that those milestones will be met, issues for executive attention, staffing problems, and other potential problems. CDC is also implementing a process to standardize project management across the agency. This process is designed to incorporate, among other things, program and project management, capital planning, security certification and accreditation, and system development life-cycle processes. DHS has been operational for just over 2 years, and the department has made progress in establishing key information and technology disciplines. 
However, as we have reported, these disciplines are not yet fully established and operational. For example, DHS has established an IT investment management process, but this process is still maturing. DHS has also had problems consistently employing rigorous systems development and acquisition practices. DHS did not provide documentation of its oversight of its public health IT investments. According to DHS officials, they plan to submit a capital asset plan and business case for the BWICS initiative this year for review and approval by the DHS IT review board. However, until DHS follows through on its initial actions to address its management, programmatic, and partnering challenges, its IT investments remain at risk. The federal government has made progress on major public health IT initiatives, but significant work remains to be done. CDC’s PHIN initiative includes applications at various stages of implementation; as a whole, however, it remains years away from fully achieving its planned improvements to the public health IT infrastructure. In addition, DHS’s initiatives are still in such early stages that it is uncertain how they will improve public health preparedness. Federal agencies face many challenges in improving the public health infrastructure. CDC and DHS are pursuing related initiatives, but there is little integration among them, and until the national health IT strategy is completed, it is unknown how their integration will be addressed. Implementing health data standards across the health care community is still a work in progress, and until these standards are implemented, information sharing challenges will remain. In addition, state and local public health agencies report that their coordination with federal initiatives is often limited. 
Until state and local public health agencies are more actively involved in coordination with their federal counterparts, disease surveillance systems will remain fragmented and their effectiveness will be impeded. Finally, the development of robust practices for IT investment management and for systems development and acquisition is a continuing challenge for HHS and DHS, about which we have previously made recommendations. Until agencies address all these challenges, progress toward building a stronger public health infrastructure will be limited, as will the ability to share essential information concerning public health emergencies and bioterrorism. In order to improve the development and implementation of major public health IT initiatives, we recommend that the Secretary of Health and Human Services take the following two actions: (1) ensure that federal initiatives are aligned with the national health IT strategy, the federal health architecture, and ongoing public health IT initiatives and are coordinated with state and local public health initiatives, and (2) ensure federal actions to encourage the development, adoption, and implementation of health care data and communication standards across the health care industry to address interoperability challenges associated with the exchange of public health information. We also recommend that the Secretary of Homeland Security align existing and planned DHS IT initiatives with other ongoing public health IT initiatives at HHS, including adoption of data and communications standards. We received written comments on a draft of this report from the Acting Inspector General at HHS and the Director of the Departmental GAO/OIG Liaison at DHS (these comments are reproduced in app. III and IV). HHS generally concurred with our recommendations, while DHS did not comment specifically on the recommendations. 
Both agencies provided additional contextual information and technical comments, which we have incorporated in this report as appropriate. We provided DOD officials with the opportunity to comment on a draft of this report, which they declined. In its comments, HHS stated that this report does not adequately represent the department’s accomplishments in implementing standards and specifications for health IT or the benefits of pursuing a standards-based approach. We concur with HHS on the importance of standards for health information technology and have been calling for federal leadership in expediting standards since 1993. Page 61 lists GAO reports on health IT, several of which address the benefits of standards and the need for a national health IT strategy. In response to HHS’s comment that we suggest that early event detection is duplicative or irrelevant at the federal level, neither we nor the state and local public health officials suggest that early event detection at the federal level is irrelevant. Rather, we are reporting the concerns of state and local public health officials regarding the federal government’s role, which merits further discussion and more involvement of state and local health officials. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies of the report to other congressional committees. We will also send copies to the Secretaries of Health and Human Services, Homeland Security, Defense, and Energy. In addition, copies will be sent to the state and local public health agencies that were included in our review. Copies will also be made available at no charge on our Web site at www.gao.gov. If you have any questions on matters discussed in this report, please contact me at 202-512-9286 or by e-mail at pownerd@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. The objectives of our review were to assess the progress of major federal information technology (IT) initiatives designed to strengthen the effectiveness of the public health infrastructure and to describe the key IT challenges facing federal agencies responsible for improving the public health infrastructure. To address these objectives, we conducted our work at the offices of the Department of Health and Human Services (HHS), the Department of Homeland Security (DHS), and the Department of Defense (DOD) in Washington, D.C., and at the Centers for Disease Control and Prevention (CDC) in Atlanta. We selected specific IT initiatives to review from systems we identified in previous work, focusing on major public health IT initiatives in surveillance and communication systems. We excluded food safety systems and DOD disease surveillance systems that did not include civilian populations. We discussed our selection with federal officials to help ensure that we were addressing the most relevant major initiatives. To assess the progress of major federal IT initiatives designed to strengthen the effectiveness of the public health infrastructure, we analyzed agency documents such as the Office of Management and Budget’s Exhibit 300s, minutes of executive council meetings, and system development documents, including project plans, functional requirements, and cost-benefit analyses. We supplemented our evaluation of agency documents with interviews of federal officials. Through interviews with these officials and with state and local public health officials, we also assessed CDC’s and DHS’s interaction and coordination with each other on their IT initiatives. 
Because these federal initiatives affect state and local public health agencies, we supplemented our analysis of agency documentation by interviewing officials from six state and six local public health agencies on progress being achieved by CDC and DHS. We conducted our work at the San Diego County Health and Human Services Agency; the California Department of Health Services in Sacramento; the Thurston County Public Health and Social Services and the Washington State Department of Health in Olympia; the Austin/Travis County Health and Human Services Department and the Texas Department of State Health Services in Austin; the Milwaukee City Health Department; the Wisconsin Department of Health and Family Services in Madison, Wisconsin; the Boston Public Health Commission and the Commonwealth of Massachusetts Department of Public Health in Boston; the New York State Department of Health in Albany; and the New York City Department of Health and Mental Hygiene. The states and local public health agencies were selected because they were actively involved in implementing at least one of CDC's Public Health Information Network IT applications. We interviewed them on the impact of federal IT initiatives on state and local public health operations and lessons they learned from integrating federal IT initiatives into their local public health infrastructure. If they had systems similar to the federal systems in our review, we discussed how their systems compared with the federal initiatives. We also interviewed representatives of several public health professional organizations, which CDC considers its partners, such as the National Association of County and City Health Officials, the Association of State and Territorial Health Officials, the Council of State and Territorial Epidemiologists, and the Association of Public Health Laboratories. We also had a discussion with the National Association of State Chief Information Officers. 
To identify key IT challenges facing federal agencies responsible for improving the public health infrastructure, we analyzed published GAO reports, agency documents, and other information obtained during interviews and site visits. We summarized the results of our evaluation and identified the key challenges that CDC and DHS have consistently encountered as they implement the IT initiatives included in our review. Our work was performed from July 2004 through April 2005 in accordance with generally accepted government auditing standards. The Department of Health and Human Services (HHS) has primary responsibility for coordinating the nation’s response to public health emergencies, including bioterrorism. HHS divisions responsible for bioterrorism preparedness and response, and their primary responsibilities, include the following: The Office of the Assistant Secretary for Public Health Emergency Preparedness coordinates the department’s work to oversee and protect public health, including cooperative agreements with states and local governments. States and local governments can apply for funding to upgrade public health infrastructure and health care systems to better prepare for and respond to bioterrorism and other public health emergencies. The office maintains a command center where it can coordinate the response to public health emergencies from one centralized location. This center is equipped with satellite teleconferencing capacity, broadband Internet hookups, and analysis and tracking software. The Centers for Disease Control and Prevention (CDC) has primary responsibility for nationwide disease surveillance for specific biological agents, developing epidemiological and laboratory tools to enhance disease surveillance, and providing an array of scientific and financial support for state infectious disease surveillance, prevention, and control. 
CDC has an emergency operations center to organize and manage all of its emergency operations, allowing for immediate communication with HHS, the Department of Homeland Security, federal intelligence and emergency response officials, and state and local public health officials. CDC also provides testing services and consultation that are not available at the state level; training on infectious diseases and laboratory topics, such as testing methods and outbreak investigations; and grants to help states conduct disease surveillance. In addition, CDC provides state and local health departments with a wide range of technical, financial, and staff resources to help maintain or improve their ability to detect and respond to disease threats. The Food and Drug Administration is responsible for safeguarding the food supply, ensuring that new vaccines and drugs are safe and effective, and conducting research on diagnostic tools and treatment of disease outbreaks. It is increasing its food safety responsibilities by improving its laboratory preparedness and food monitoring inspections. The Agency for Healthcare Research and Quality is responsible for supporting research designed to improve the outcomes and quality of health care, reduce its costs, address safety and medical errors, and broaden access to effective services, including antibioterrorism research. It has initiated several major projects and activities designed to assess and enhance linkages between the clinical care delivery system and the public health infrastructure. Research focuses on emergency preparedness of hospitals and health care systems for bioterrorism and other public health events; technologies and methods to improve the linkages among the personal health care system, emergency response networks, and public health agencies; and training and information needed to prepare clinicians to recognize the symptoms of bioterrorist agents and manage patients appropriately. 
The National Institutes of Health is responsible, among other things, for conducting medical research in its own laboratories and for supporting the research of nonfederal scientists in universities, medical schools, hospitals, and research institutions throughout the United States and abroad. Its National Institute of Allergy and Infectious Diseases has a program to support research related to organisms that are likely to be used as biological weapons. The Health Resources and Services Administration is responsible for improving the nation's health by ensuring equal access to comprehensive, culturally competent, quality health care. Its Bioterrorism Hospital Preparedness program administers cooperative agreements to state and local governments to support hospitals' efforts toward bioterrorism preparedness and response. The Department of Homeland Security (DHS) is responsible for, among other things, protecting the United States against terrorist attacks. One activity undertaken by DHS is coordination of surveillance activities of federal agencies related to national security. The Science and Technology Directorate serves as the primary research and development arm of DHS, using our nation's scientific and technological resources to provide federal, state, and local officials with the technology and capabilities to protect the nation. The focus is on catastrophic terrorism—threats to the security of our homeland that could result in large-scale loss of life and major economic impact. The directorate's work is designed to counter those threats, both by improvements to current technological capabilities and development of new, revolutionary technological capabilities. The Information Analysis and Infrastructure Protection Directorate is responsible for helping to deter, prevent, and mitigate acts of terrorism by assessing vulnerabilities in the context of continuously changing threats. 
It strengthens the nation’s protective posture and disseminates timely and accurate information to federal, state, local, private, and international partners. The Emergency Preparedness and Response Directorate is responsible for the National Incident Management System, which establishes standardized incident management processes, protocols, and procedures that all responders—federal, state, local and tribal—will use to coordinate and conduct response actions. The Department of Defense, while primarily responsible for the health and protection of its service members, contributes to global disease surveillance, training, research, and response to emerging infectious disease threats. The Defense Threat Reduction Agency provides technical expertise and capabilities in combat support, technology development, threat control and threat reduction, including chemical and biological defense. The United States Army Medical Research Institute of Infectious Diseases conducts biological research dealing with militarily relevant infectious diseases and biological agents. It also provides professional expertise on issues related to technologies and other tools to support readiness for a bioterrorist incident. The Department of Energy is developing new capabilities to counter chemical and biological threats. It expects the results of its research to be public and possibly lead to the development of commercial products in the domestic market. The Chemical and Biological National Security Program has conducted research on biological detection, modeling and prediction, and biological foundations to support efforts in advanced detection, attribution, and medical countermeasures. The national research laboratories (e.g., Lawrence Livermore, Los Alamos, and Sandia) are developing new capabilities for countering chemical and biological threats, including biological detection, modeling, and prediction. 
The Department of Agriculture (USDA) is responsible for protecting and improving the health and marketability of animals and animal products in the United States by preventing, controlling, and eliminating animal diseases. USDA’s disease surveillance and response activities are intended to protect U.S. livestock and ensure the safety of international trade. In addition, USDA is responsible for ensuring that meat, poultry, and certain processed egg products are safe and properly labeled and packaged. USDA establishes quality standards and conducts inspections of processing facilities in order to safeguard certain animal food products against infectious diseases that pose a risk to humans. The Agricultural Research Service conducts research to improve onsite rapid detection of biological agents in animals, plants, and food and has improved its detection capability for diseases and toxins that could affect animals and humans. The Food Safety Inspection Service provides emergency preparedness for foodborne incidents, including bioterrorism. The Animal and Plant Health Inspection Service has a role in responding to biological agents that cause zoonotic diseases (i.e., diseases transmitted from animals to humans). It also has veterinary epidemiologists to trace the source of animal exposures to diseases. The Environmental Protection Agency (EPA) has responsibilities to prepare for and respond to emergencies, including those related to biological materials. EPA can be involved in detection of agents by environmental monitoring and sampling. It is also responsible for protecting the nation’s water supply from terrorist attack and for prevention and control of indoor air pollution. The Department of Veterans Affairs (VA) manages one of the nation’s largest health care systems and is the nation’s largest drug purchaser. The department purchases pharmaceuticals and medical supplies for the Strategic National Stockpile and the National Medical Response Team stockpile. 
The VA Emergency Preparedness Act of 2002 directed VA to establish at least four medical emergency preparedness centers to (1) carry out research and develop methods of detection, diagnosis, prevention, and treatment for biological and other public health and safety threats; (2) provide education, training, and advice to health care professionals inside and outside VA; and (3) provide laboratory and other assistance to local health care authorities in the event of a national emergency. The following are GAO’s comments on the Department of Health and Human Services letter dated June 3, 2005. 1. We agree with HHS that the cost benefits of a standards-based approach to public health systems are potentially considerable. However, as we have reported before, the Center for Information Technology Leadership acknowledges that their cost estimates are based on a number of assumptions and inhibited by limited data that are neither complete nor precise. 2. We agree with HHS that standards-based systems provide important benefits. In our May 2003 report, we made several recommendations regarding the establishment and use of standards that are highlighted in this report. We also state that to support the compatibility, interoperability, and security of federal agencies’ many planned and operational IT systems, the identification and implementation of data, communications, and security standards for health care delivery and public health are essential. 3. HHS states that our report does not mention a number of activities related to the Federal Health Architecture and the Consolidated Health Informatics initiative. We described the status of workgroup efforts specific to public health surveillance. In terms of the standards adopted by the Consolidated Health Informatics initiative, we presented the relevant standards in our table of industry standards used by the Public Health Information Network. We disagree with HHS that the paragraph needs to be revised. 
While the development of standards and policies is a key component of progress toward the implementation of a national health IT strategy, the development of a national strategy and corresponding federal architecture is equally important. 4. We disagree with HHS that we should delete our discussion of the concerns of state and local public health officials regarding duplication of effort across federal agencies. Neither we nor the state and local public health officials suggest that early event detection at the federal level is irrelevant. Rather, we are reporting the concerns of state and local public health officials regarding the federal government's role, which merits further discussion and more involvement of state and local health officials. 5. We have adjusted our report to indicate that fiscal year 2006 costs for BioSense are unknown. 6. HHS comments that not moving forward with its technology initiatives presents greater risk than waiting for a completed national health IT strategy. We are not suggesting that HHS stop its ongoing activities; we only point out the risks associated with developing and implementing major IT initiatives without a coordinated strategy in place. The following is GAO's comment on the Department of Homeland Security's letter dated June 3, 2005. 1. We disagree with DHS's statement that we erroneously categorize its initiatives as still in the early stages. The initiatives that we are referring to as being in the early stages are the Biological Warning and Incident Characterization System and the National Biosurveillance Integration System, which according to DHS officials are considered their two major IT initiatives. DHS categorized them as being in development. In addition to those named above, Barbara S. Collier, Neil J. Doherty, Amanda C. Gill, M. Saad Khan, Gay Hee Lee, Mary Beth McClanahan, M. Yvonne Sanchez, and Morgan Walts made key contributions to this report. 
Health Information Technology: HHS Is Taking Steps to Develop a National Strategy. GAO-05-628. Washington, D.C.: May 27, 2005. Health and Human Services’ Estimate of Health Care Cost Savings Resulting from the Use of Information Technology. GAO-05-309R. Washington, D.C.: February 17, 2005. HHS’s Efforts to Promote Health Information Technology and Legal Barriers to its Adoption. GAO-04-991R. Washington, D.C.: August 13, 2004. Health Care: National Strategy Needed to Accelerate the Implementation of Information Technology. GAO-04-947T. Washington, D.C.: July 14, 2004. Information Technology: Benefits Realized for Selected Health Care Functions. GAO-04-224. Washington, D.C.: October 31, 2003. Bioterrorism: Information Technology Strategy Could Strengthen Federal Agencies’ Abilities to Respond to Public Health Emergencies. GAO-03-139. Washington, D.C.: May 30, 2003. Automated Medical Records: Leadership Needed to Expedite Standards Development. GAO/IMTEC-93-17. Washington, D.C.: April 30, 1993.
It has been almost 4 years since the anthrax events of October 2001 highlighted the weaknesses in our nation's public health infrastructure. Since that time, emerging infectious diseases have appeared--such as Severe Acute Respiratory Syndrome and human monkeypox--that have made our readiness for public health emergencies even more critical. Information technology (IT) is central to strengthening the public health infrastructure through the implementation of systems to aid in the detection of, preparation for, and response to bioterrorism and other public health emergencies. Congress asked us to review the current status of major federal IT initiatives aimed at strengthening the ability of government at all levels to respond to public health emergencies. Specifically, our objectives were to assess the progress of major federal IT initiatives designed to strengthen the effectiveness of the public health infrastructure and describe the key IT challenges facing federal agencies responsible for improving the public health infrastructure. Federal agencies have made progress on major public health IT initiatives, although significant work remains to be done. These initiatives include one broad initiative at CDC--the Public Health Information Network (PHIN) initiative--which is intended to provide the nation with integrated public health information systems to counter national civilian public health threats, and two major initiatives at the Department of Homeland Security (DHS), which are primarily focused on biosurveillance. CDC's broad PHIN initiative encompasses a number of applications and initiatives, which show varied progress. Currently, PHIN's basic communications systems are in place, but it is unclear when its surveillance systems and data exchange applications will become fully deployed. 
Further, the overall implementation of PHIN does not yet provide the desired functionality, and so some applications are not widely used by state and local public health officials. For example, CDC's BioSense application, which is aimed at detecting early signs of disease outbreaks, is available to state and local public health agencies, but according to the state and local officials with whom we spoke, it is not widely used, primarily because of limitations in the data it currently collects. DHS is also pursuing two major public health IT initiatives--the National Biosurveillance Integration System and the Biological Warning and Incident Characterization System (BWICS). Both of these initiatives are still in development. The BWICS initiative, in addition, is associated with three other programs, one of which--BioWatch--is operational. This early-warning environmental monitoring system was developed for detecting trace amounts of biological materials and has been deployed in over 30 locations across the United States. Until recently, its three IT components were not interoperable and required redundant data entry in order to communicate with each other. As federal agencies work with state and local public health agencies to improve the public health infrastructure, they face several challenges. First, the national health IT strategy and federal health architecture are still being developed; CDC and DHS will face challenges in integrating their public health IT initiatives into these ongoing efforts. Second, although federal efforts continue to promote the adoption of data standards, developing such standards and then implementing them are challenges for the health care community. Third, these initiatives involve the need to coordinate among federal, state, and local public health agencies, but establishing effective coordination among the large number of disparate agencies is a major undertaking. 
Finally, CDC and DHS face challenges in addressing specific weaknesses in IT planning and management that may hinder progress in developing and deploying public health IT initiatives. Until all these challenges are addressed, progress toward building a stronger public health infrastructure will be impeded, as will the ability to share essential information concerning public health emergencies and bioterrorism.
Farmers are exposed to financial losses because of production risks—droughts, floods, and other natural disasters—as well as variations in the market price of their crops. The federal government has played an active role in helping to mitigate the effects of these risks on farm income by promoting the use of crop insurance. RMA has overall responsibility for administering the federal crop insurance program, including controlling costs and protecting against fraud, waste, and abuse. As of May 2014, RMA partnered with 19 private insurance companies that sell and service the program's insurance policies and share a percentage of the risk of loss and opportunity for gain associated with the policies (known as "underwriting"). RMA administers the crop insurance program through a Standard Reinsurance Agreement that establishes the terms and conditions under which participating insurance companies sell and service federal crop insurance policies. Through the federal crop insurance program, farmers insure against losses on more than 100 crops. These crops include major crops—such as corn, cotton, soybeans, and wheat, which accounted for more than three-quarters of the acres enrolled in the program in 2012—as well as nursery crops and certain fruits and vegetables. According to RMA, federal crop insurance penetration based on planted acres is high for the principal crops of corn, soybeans, wheat, and cotton. For example, in 2012, about 84 percent of the planted principal crops were insured under the federal crop insurance program. Specifically, corn acreage was 84 percent insured, soybean acreage was 84 percent insured, wheat acreage was 83 percent insured, and cotton acreage was 94 percent insured. Most crop insurance policies are either production-based or revenue policies. For production-based policies, a farmer can receive a payment if there is a production loss relative to the farmer's historical production per acre. 
Revenue policies protect against crop revenue loss resulting from declines in production, price, or both. The federal government encourages farmers’ participation in the federal crop insurance program by subsidizing the insurance premiums and acting as the primary reinsurer for the private insurance companies that take on the risk of covering, or underwriting, losses of participating farmers. The federal government’s premium subsidies for crop insurance policies are not payments to farmers, but they can be considered a financial benefit to farmers. Without a premium subsidy, a participating farmer would have to pay the full amount of the policy premium. Congress sets premium subsidy rates, meaning the percentage of the premium paid by the government. Premium subsidy rates vary by the level of insurance coverage that the farmer chooses and the geographic diversity of crops insured. For most policies, the statutory premium subsidy rates range from 38 percent to 80 percent. Premium subsidy rates increased, as a percentage of total premiums, from an average of 37 percent in 2000 to an average of 63 percent in 2012. In addition, premium subsidies rose as crop prices increased because higher prices meant the insured value of the crop increased, and premiums are based on the value of what is insured. In addition, the federal government pays administrative and operating expense subsidies to insurance companies as an allowance that is intended to cover their expenses for selling and servicing crop insurance policies. In turn, insurance companies use these subsidies to cover their overhead expenses, such as payroll and rent, and to pay commissions to insurance agencies and their agents. Insurance companies also incur expenses associated with verifying—also called adjusting—the amount of loss claimed. These expenses include, for example, loss adjusters’ compensation and travel expenses of adjusters to farmers’ fields. 
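The split between the government's premium subsidy and the farmer's out-of-pocket premium can be sketched as follows. This is an illustrative sketch only: the insured value and premium rate below are hypothetical numbers, while the 38 to 80 percent range reflects the statutory subsidy rates cited above, and the calculation shows why premiums (and thus subsidies) rise with the insured value of the crop.

```python
# Sketch of the premium subsidy arithmetic described above.
# The insured value and premium rate are hypothetical illustrations;
# the subsidy rate must fall in the statutory 38-80 percent range.

def split_premium(insured_value, premium_rate, subsidy_rate):
    """Return (government subsidy, farmer-paid share) for one policy."""
    if not 0.38 <= subsidy_rate <= 0.80:
        raise ValueError("statutory subsidy rates range from 38 to 80 percent")
    total_premium = insured_value * premium_rate  # premium scales with insured value
    subsidy = total_premium * subsidy_rate        # portion paid by the government
    farmer_share = total_premium - subsidy        # portion paid by the farmer
    return subsidy, farmer_share

# Example: a crop insured for $200,000 at a 9 percent premium rate,
# with a 62 percent subsidy rate (near the 2012 average noted above).
subsidy, farmer_share = split_premium(200_000, 0.09, 0.62)
print(round(subsidy, 2), round(farmer_share, 2))  # 11160.0 6840.0
```

Because the premium scales with the insured value, a rise in crop prices raises the total premium and, at a fixed subsidy rate, the government's subsidy cost as well, which is the dynamic described in the text.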
The administrative expense subsidies also can be considered a subsidy to farmers; with these subsidies, crop insurance premiums are lower than they would otherwise be if the program followed commercial insurance practices. In private insurance, such as automobile insurance, these administrative expenses typically are captured through the premiums paid by all policyholders. The federal government provides crop insurance premium subsidies in part to achieve high crop insurance participation and coverage levels. Higher participation and coverage levels may reduce or eliminate the need for disaster assistance payments from congressionally authorized ad hoc disaster programs to help farmers recover from natural disasters, which can be costly. For example, under three separate congressionally authorized ad hoc disaster programs, USDA provided $7 billion in payments to farmers whose crops were damaged or destroyed by natural disasters from 2001 to 2007. Farmers' participation in the federal crop insurance program and spending on ad hoc disaster assistance have been policy issues for more than 30 years. A 2005 USDA publication asserts that Congress passed the Federal Crop Insurance Act of 1980 and subsequent related legislation to strengthen participation in the crop insurance program with the goal of replacing costly disaster assistance programs. According to this publication, the government has historically attempted to increase participation in the federal crop insurance program by subsidizing premiums, including increasing the level of these subsidies over time. The 2014 farm bill introduced several changes to the crop insurance program. Regarding revenue policies specifically, the legislation added peanuts to the list of crops eligible for this policy type. The legislation also made "enterprise units" a permanent option for revenue and other policy types. 
An enterprise unit consists of all insurable acreage of the same insured crop in the county in which the farmer has a share on the date coverage begins for the crop year. In addition, separate insurable enterprise units for both irrigated and nonirrigated crops will be available. Separating the acreage can increase risk protection for farmers because losses on dryland crops would no longer be offset by higher yields on irrigated acreage when the two are combined. The 2014 farm bill also added two new policy options to the crop insurance program—the Supplemental Coverage Option and the Stacked Income Protection Plan for upland cotton. The Supplemental Coverage Option is based on expected county yields or revenue, to cover part of the deductible under the farmer's underlying policy (referred to as a farmer's out-of-pocket loss or "shallow loss"). The federal subsidy as a share of the policy premium is set at 65 percent. The Stacked Income Protection Plan insures against losses in county revenue of 10 to 30 percent of expected county revenue based on the deductible level selected by the farmer for the underlying individual policy. The federal subsidy as a share of the policy premium is set at 80 percent. As of June 2014, USDA was developing implementing guidance for these new policies that it expects to issue before the start of the 2015 crop year. For now, it is uncertain how farmers will use these new policies and how their use will affect federal crop insurance premium costs, including for revenue policy premium subsidies. Federal crop insurance program costs and farm sector income and wealth grew significantly during the period 2003 through 2012. Costs of federal crop insurance are growing due to an increase in premium subsidies, particularly for revenue policies. Farmers are increasingly purchasing revenue policies and are choosing higher coverage levels for these policies. 
Meanwhile, indicators of farm business economic well-being—such as farm income and real estate and asset values—all increased from 2003 through 2012. The cost of the federal crop insurance program grew significantly from 2003 through 2012, according to our analysis of RMA data. For fiscal years 2003 through 2007, federal crop insurance costs averaged $3.4 billion a year, but for fiscal years 2008 through 2012, the crop insurance program cost an average of $8.4 billion a year. Significant drought and crop losses in crop year 2012 contributed to the spike in government costs to $14.1 billion. These trends are shown in figure 1. According to an April 2014 CBO estimate, for fiscal years 2014 through 2023, program costs are expected to average $8.9 billion annually. In fiscal years 2003 through 2012, according to our analysis of RMA data, premium subsidies comprised approximately $42.1 billion of $58.7 billion in total government costs for federal crop insurance, or almost 72 percent of total program costs. Revenue policy premium subsidies specifically accounted for $30.9 billion of the premium subsidy costs over that period. RMA offered 17 different crop insurance policies in crop year 2012, but revenue policies were the most frequently purchased and accounted for the majority of all premium subsidies. For example, revenue policy premium subsidies cost $5.5 billion in crop year 2012, which accounted for 82 percent of the $6.7 billion in total premium subsidy costs to the government. Figure 2 shows the breakdown of costs for the overall crop insurance program into premium subsidies; administrative and operating expense subsidies; and other costs, such as the salaries of RMA staff, research and development initiatives for new crop insurance products, and the net underwriting loss for the period, for fiscal years 2003 through 2012. 
As shown in figure 3, overall crop insurance premium subsidies more than tripled from $1.8 billion to $6.7 billion from crop years 2003 through 2012. The revenue policy premium subsidies increased from $1.1 billion in crop year 2003 to $5.5 billion in crop year 2012, a nearly 5-fold increase. The total acreage covered by federal crop insurance also continued to increase from crop year 2003 through crop year 2012, from around 183.7 million acres in 2003 to 265.2 million acres in 2012. As shown in figure 4, the amount of that acreage covered by revenue policies also increased, from about 112.2 million acres in 2003 to 180.9 million acres in 2012. In 2012, revenue policies were purchased for about 68 percent of the acres covered by federal crop insurance. Farmers have also increased their purchases of higher coverage levels of crop insurance—that is, the percentage of their normal annual revenue that they want to insure—for their revenue policies. These higher coverage levels equate to greater potential liability for the government and insurers in the case of loss and higher premium levels for the policies, both of which contribute to higher program costs. According to our analysis of RMA data, the percentage of acres insured at higher coverage levels has increased in recent years, as shown in figure 5. For example, in crop year 2003, 14.7 percent of all acres were insured under revenue policies at a coverage level of 80 percent or greater. By crop year 2012, that figure had nearly doubled, to 27.6 percent. Our analysis of RMA data showed that farmers in 10 states accounted for the majority of revenue policies purchased and, as a result, a majority of the premium subsidies in crop year 2012. As shown in figure 6, these 10 states in descending order of subsidy amounts received were Texas, North Dakota, Iowa, Minnesota, Kansas, South Dakota, Illinois, Nebraska, Missouri, and Indiana. 
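The cost shares and growth multiples cited in the preceding paragraphs follow directly from the dollar figures in the text; this minimal sketch simply restates that arithmetic using the report's rounded figures (in billions of dollars), so small rounding differences against the unrounded RMA data are possible.

```python
# Recomputing the subsidy shares and growth multiples from the
# figures cited in the text (billions of dollars, rounded).

# Premium subsidies as a share of total program costs, FY2003-2012.
total_costs, premium_subsidies = 58.7, 42.1
print(round(premium_subsidies / total_costs * 100, 1))  # 71.7 ("almost 72 percent")

# Revenue policy share of total premium subsidies, crop year 2012.
revenue_subsidy_2012, total_subsidy_2012 = 5.5, 6.7
print(round(revenue_subsidy_2012 / total_subsidy_2012 * 100))  # 82 (percent)

# Growth in premium subsidies, crop years 2003 to 2012.
print(round(6.7 / 1.8, 1))  # 3.7 ("more than tripled")
print(round(5.5 / 1.1, 1))  # 5.0 (the roughly 5-fold increase in revenue subsidies)
```

The same pattern holds for acreage: revenue policies covered about 180.9 of 265.2 million insured acres in 2012, which is the roughly 68 percent cited above.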
Combined, they received almost $4.1 billion in revenue premium subsidies in crop year 2012, which was approximately 73.5 percent of the total amount of federal premium subsidies for revenue policies for that year. In crop year 2012, Texas led all states in premium subsidies, with farmers receiving more than $523.8 million in revenue premium subsidies for the approximately 11 million acres covered by revenue policies; over 60 percent of these premium subsidies and almost half of the acres covered were for cotton. The list of crops eligible for revenue policy insurance coverage has continued to grow. Table 1 shows which crops were eligible to receive revenue policy premium subsidies from crop year 2003 through crop year 2012. In crop year 2013, dry beans and dry peas also became eligible for revenue policy insurance. According to RMA documents, the estimated cost of these two additional crops was $28.3 million in revenue premium subsidies for crop year 2013. Further, as discussed, peanuts will be eligible for revenue policy coverage starting in crop year 2015. The farm economy improved from 2003 through 2012, and 2012 was a record year for farm income, due in part to high crop prices. For example, median farm household income rose from 2003 to 2012 and was higher than the median income for all U.S. households every year during this period, according to ERS data. More specifically, on average, median farm household income was $7,205, or 13.8 percent, more than median U.S. household income annually during this time period (in constant 2012 dollars that reflect adjustments for inflation). Median farm household income was 33.9 percent higher than median income for all U.S. households in 2012 ($68,298 compared with $51,017). 
Households associated with farms specializing in cash grains such as corn or soybeans had a median household income of about $82,300 in 2012, and median household income was even higher for farms specializing in rice, tobacco, cotton, or peanuts, at about $101,400 in 2012. Figure 7 shows the median income for farm households and for U.S. households from 2003 through 2012, in constant 2012 dollars that reflect adjustments for inflation. Farm sector income also grew from $73.8 billion in 2003 to $113.8 billion in 2012. Net farm and net cash income for U.S. farms from 2003 through 2012 (in constant 2012 dollars that reflect adjustments for inflation) are shown in figure 8. Net farm income is the value of the agricultural goods produced by farm operators less the costs of inputs and services. Net cash income is the cash earned from the sale of these agricultural goods and the conversion of farm assets into cash. According to ERS data, however, net farm and net cash income are forecast to decrease in 2014, due principally to falling crop prices as compared with prior years. Net farm income is forecast to rise to $130.5 billion in 2013 and then decline by about 26.6 percent, to $95.8 billion, in 2014. The 2014 forecast would be the lowest since 2010, but it would still be $8 billion above the average for 2004 through 2013. After adjusting for inflation, 2013’s net farm income would be the highest since 1973, and the 2014 net farm income forecast would be the seventh highest. Net cash income is forecast at $101.9 billion for 2014, down almost 22 percent from the 2013 forecast of $130.1 billion. Farm real estate—a measurement of the value of all land and buildings on farms—accounted for 82 percent of the total value of U.S. farm assets in 2012. Because farm real estate comprises such a significant portion of the farm sector's balance sheet, a change in the value of farm real estate is a strong indicator of the farm sector’s financial performance. U.S. 
farm real estate values increased by 72 percent from 2003 through 2012 due to high farm income and low interest rates, according to USDA data. Farm real estate value averaged $2,650 per acre for 2012, and the highest farm real estate values were in the Corn Belt region, at $5,560 per acre. According to USDA data, this increase in national farm real estate values is forecast to continue, with an estimated average value of $2,900 per acre in 2013, up 9.4 percent from 2012 values. National farm real estate values for 2003 through 2012 (in constant 2012 dollars that reflect adjustments for inflation) are shown in figure 9. According to ERS documents, a farm’s debt-to-equity ratio and debt-to-asset ratio are also major indicators of the financial well-being of the farm sector. The debt-to-equity ratio measures the relative proportion of funds invested by creditors (debt) and owners (equity). The debt-to-asset ratio measures the proportion of farm business assets that are financed through debt. Lower ratios signify that farmers are relying less on borrowed funds to finance their asset holdings. Farmers’ debt-to-equity ratio fell from 15.7 percent in 2003 to 12.0 percent in 2012, and their debt-to-asset ratio fell from 13.6 percent in 2003 to 10.7 percent in 2012. The farm sector’s debt-to-equity and debt-to-asset ratios are forecast to continue a pattern of decline, falling to an estimated 11.8 and 10.5 percent in 2014, respectively. According to ERS documents, these decreases would result in the lowest ratios for both measurements since 1954. The historically low levels of farm debt, relative to equity and assets, attest to the sector’s strong financial position. ERS documents state that this also means the sector is better insulated from risks such as adverse weather, changing macroeconomic conditions in the United States and abroad, or fluctuations in farm asset values that may occur due to changing demand for agricultural assets. 
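The two ratios described here are algebraically linked: because equity equals assets minus debt, either ratio determines the other. A minimal sketch (not from the report; it simply restates the definitions in code) reproduces the reported 2003 and 2012 figures:

```python
def debt_to_equity_from_debt_to_asset(dta: float) -> float:
    """Given a debt-to-asset ratio, return the debt-to-equity ratio.

    Equity = assets - debt, so D/E = (D/A) / (1 - D/A).
    """
    return dta / (1.0 - dta)

# Reported debt-to-asset ratios: 13.6% (2003) and 10.7% (2012).
for year, dta in ((2003, 0.136), (2012, 0.107)):
    print(year, round(debt_to_equity_from_debt_to_asset(dta) * 100, 1))
```

The computed values, 15.7 and 12.0 percent, match the debt-to-equity ratios reported above, confirming the two measures are internally consistent.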
The steady decline in both ratios since the mid-1980s is due to relatively large growth in the value of farm assets, driven principally, according to ERS documents, by the increases in farm real estate values. Figure 10 shows these farm sector debt ratios from 2003 through 2012. According to our analysis of RMA data, the federal government would have potentially saved more than $400 million in 2012 by reducing premium subsidies on federal crop insurance revenue policies by 5 percentage points, and the savings would have been nearly $2 billion with a 20 percentage point premium subsidy reduction. Premium subsidy reductions of 5 to 20 percentage points would have in turn raised farmers’ average production costs per acre from about $1.90 to about $16.90 for crops such as corn, soybeans, and cotton. As a percentage of the total production cost per acre, these increases would usually have been less than 2 percent and often less than 1 percent. Because farmers would be required to pay more for their crop insurance, reduced federal premium subsidies for revenue policies could affect the participation rate in the crop insurance program. However, the magnitude of the impact on farmers’ participation as a result of lower federal premium subsidies for revenue policies may be minimal. Reducing premium subsidies for revenue policies would potentially result in significant savings to the federal government, according to our analysis of RMA data. For example, if the premium subsidies paid in 2012 had been reduced by 5, 10, 15, or 20 percentage points that year, the potential savings for corn would have been about $197 million, $394 million, $592 million, or $789 million, respectively. 
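Because premium subsidies are a fixed percentage of total premiums, savings estimates of this kind scale linearly with the size of the reduction. A minimal sketch (illustrative only; the corn premium base below is back-calculated from the roughly $197 million savings per 5-percentage-point reduction cited above, not taken from RMA data directly):

```python
def potential_savings(total_premium: float, reduction_pct_points: float) -> float:
    """Federal savings from cutting the subsidy rate by the given number of
    percentage points, assuming farmers keep their policies and coverage."""
    return total_premium * reduction_pct_points / 100.0

# Hypothetical 2012 corn revenue-policy premium base, back-calculated:
corn_premiums = 197e6 / 0.05  # about $3.9 billion
for pts in (5, 10, 15, 20):
    print(f"{pts} points: ~${potential_savings(corn_premiums, pts) / 1e6:.0f} million")
```

Rounding aside, this tracks the corn figures of roughly $197 million, $394 million, $592 million, and $789 million given above.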
Moreover, for the 10 crops—barley, canola, corn, cotton, grain sorghum, popcorn, rice, soybeans, sunflowers, and wheat—that accounted for virtually 100 percent of the premium subsidies paid for revenue policies in 2012, the potential savings with those levels of premium subsidy reductions would have been about $439 million, $878 million, $1.3 billion, and $1.8 billion, respectively. In 2000, when Congress enacted legislation to increase crop insurance premium subsidy rates, the new rates immediately became effective (i.e., upon enactment of the legislation). In contrast, according to RMA officials, when the agency increases the premiums charged for crop insurance policies based on new actuarial data, as it did in 2012, it generally phases in the increases over several years so the impact on farmers is less dramatic. Table 2 provides more information on the amount of potential savings that corresponds to the various levels of reduction in revenue policy premium subsidies, by crop. These levels of potential savings are based on the assumption that farmers would not make any changes to their policies. For example, according to this assumption, farmers would not change from a revenue policy to a less expensive yield policy or leave the crop insurance program altogether. In addition, they are based on the assumption that farmers would keep their existing coverage levels. To the extent that farmers purchased less expensive policies, left the program, or purchased lower coverage levels, the potential savings would be greater because the total amount of federal premium subsidies required would decrease. In addition, the potential savings would decline if crop prices declined. This would occur because premiums are affected by crop prices—as crop prices decrease so does the value of the crops being insured, which results in lower crop insurance premiums. 
Since premium subsidies are a set percentage of the premiums, these subsidy amounts would decrease as premium amounts decreased. We and other federal agencies have previously analyzed the potential savings to the federal government from reductions in premium subsidies for all or selected crop insurance policies. In our March 2012 report, based on an analysis of RMA data, we found that if the premium subsidy rates of all participating farmers in 2010 and 2011 had been reduced by 10 percentage points—from 62 percent to 52 percent—the annual cost savings for those years would have been about $759 million and $1.2 billion, respectively. The president’s 2013 budget, which included a proposal to reduce premium subsidies, asserted that deep premium subsidies are no longer needed given the current high farmer participation rates in the crop insurance program. Further, in his 2014 budget, the president included two legislative proposals to reduce the premium subsidies to farmers. One proposal was to reduce the premium subsidies by 3 percentage points for all yield and revenue policies that had premium subsidy rates above 50 percent. According to RMA’s analysis of this proposal, the premium subsidy reduction would save the federal government about $4.2 billion over 10 years. The second proposal was to reduce premium subsidies by 2 percentage points for revenue policies, specifically those that include the harvest price provision. RMA estimated that this reduction in premium subsidies would save the government about $3.2 billion over a 10-year time frame. The president made a similar proposal in his 2015 budget but increased the subsidy rate reduction for revenue policies to 4 percentage points; RMA estimated the total expected savings over 10 years from that proposal would be $6.3 billion. However, any change in the premium subsidies would require action by Congress. 
Policies with the harvest price provision accounted for about 98 percent of the premium subsidies for all revenue policies in 2012. In addition, CBO estimated that, if an option it presented to reduce crop insurance premium subsidies were implemented, the federal government would save $22.1 billion over a 10-year period from 2014 through 2023 (CBO, Options for Reducing the Deficit: 2014 to 2023 (Washington, D.C.: November 2013)). Reductions to revenue policy premium subsidies of 5, 10, 15, and 20 percentage points would result in increases in farmers’ production costs, as the share of the premium that farmers pay would increase. However, our analysis of 2012 RMA crop insurance data indicates that changes in average production costs would be limited. For example, individual corn farmers would have experienced average premium cost increases per acre for their crop insurance policies of $2.81, $5.62, $8.43, or $11.24 with premium subsidy reductions of 5, 10, 15, or 20 percentage points, respectively, in 2012. Those premium cost increases represent a limited increase in the average production costs per acre for corn farmers, usually less than 2 percent and often less than 1 percent. For example, the average production costs for corn farmers were about $656 per acre that year; with the premium cost increases, their production costs would have increased an average of 0.4 percent, 0.9 percent, 1.3 percent, and 1.7 percent with premium subsidy reductions of 5, 10, 15, or 20 percentage points, respectively. Table 3 provides information on the additional average per-acre premium costs per farmer and as a percentage of the average per-acre costs of production with premium subsidy reductions of 5 and 10 percentage points, and table 4 reflects those calculations with premium subsidy reductions of 15 and 20 percentage points. Both tables are for 2012. We note that the ultimate impact of such limited production cost increases on farmers’ income would depend on their individual profit margins. However, for the industry as a whole, the impact on farmers’ income appears to be minimal. 
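The per-acre percentages for corn can be reproduced directly from the figures above ($2.81 of added premium per acre for each 5-percentage-point reduction, against average production costs of about $656 per acre); this sketch simply restates that arithmetic:

```python
CORN_COST_PER_ACRE = 656.0         # 2012 average production cost, per the report
ADDED_PREMIUM_PER_5_POINTS = 2.81  # added premium cost per acre per 5-point cut

for points in (5, 10, 15, 20):
    added = ADDED_PREMIUM_PER_5_POINTS * points / 5
    share = added / CORN_COST_PER_ACRE * 100
    print(f"{points}-point cut: +${added:.2f}/acre ({share:.1f}% of production cost)")
```

This yields the 0.4, 0.9, 1.3, and 1.7 percent increases cited above for corn.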
For example, as noted in table 2, for a 5 to 20 percentage point reduction in subsidies, total farm costs in 2012 would have increased from about $0.4 billion to $1.8 billion. Further, as discussed, farm sector income in 2012 was about $114 billion. Thus, these increased costs, as a percentage of farm sector income, would have been about 0.4 to 1.6 percent. Information on the impact on farmer participation from reductions in federal crop insurance premium subsidies is limited, but the economic literature and government information that is available suggest the impact may be minimal. Farm industry groups and some researchers have stated that changes to crop insurance premium subsidies could result in reductions in farmer participation and insurance coverage levels. However, available economic literature on the impact on farmer participation of premium subsidy reductions indicates that farmers’ response to changes in premium subsidies may be small due to factors such as their heavy reliance on crop insurance, the attractiveness of revenue policies, and the increasing importance of crop insurance as other farm programs are reduced or eliminated. Government studies of this issue have reached similar conclusions. A limited RMA analysis in support of the president’s 2014 budget proposal determined that a 5 percentage point premium subsidy reduction for yield and revenue policies would result in a limited number of farmers leaving the crop insurance program; that analysis determined that it was more likely that some farmers would purchase lower levels of policy coverage. According to RMA’s Chief Actuary, it is difficult to determine the effect of a premium subsidy change, in part because of the lack of data. The task of determining the effect of a change is easier if there has been a major change in premium subsidy rates whose impact can be assessed, this official said. 
A major change in premium subsidy rates creates a “natural experiment” in which to better analyze the impact on farmer participation of a change in subsidy rates. According to the Chief Actuary, this “natural experiment” last occurred with the passage of the Agricultural Risk Protection Act in 2000, which significantly raised premium subsidy rates; this in turn led to an increase in farmer participation in the crop insurance program. However, there has been no “natural experiment” for analyzing how reduced premium subsidy rates affect farmer participation because, since 2000, premium subsidy rates generally have not been reduced. In the event that premium subsidy rates were reduced, actual information on the impact on farmer participation would become available, and according to an RMA official, it would be a good idea to monitor that impact if Congress reduced premium subsidy rates. In its deficit reduction options report, CBO estimated that, in response to a premium subsidy reduction, some farmers would leave the program or stop insuring about 4 percent of their acres. However, that analysis further noted that, because farmers rely heavily on crop insurance, these results could overestimate the potential impact on farmer participation, and the overall number of farmers leaving the crop insurance program could be smaller. Stakeholder and government officials we interviewed, as well as documents and data we reviewed, identified several incentives that could lessen the likelihood of significant changes in farmer participation in the crop insurance program even if premium subsidies were reduced. First, even with a premium subsidy reduction, farmers would continue to receive substantial premium subsidies for revenue policies. For example, with a premium subsidy reduction of 20 percentage points for revenue policies, farmers would receive an average premium subsidy rate of about 40 percent of their premium cost, based on our analysis of 2012 RMA data. 
Second, crop insurance is important to lenders that provide loans to farmers to help finance their operations. According to lending associations that represent agriculture credit providers, crop insurance provides lenders with greater certainty that loans made to farmers will be repaid. In addition, according to an economic paper published by two ERS economists and a professor from the University of Illinois, participation in crop insurance lowers revenue risk and might allow lenders to accept loan applications with lower collateral or applications for farm operations that are more leveraged. Third, farmers may not be inclined to exit the crop insurance program, since it has emerged as the main safety net for farmers. According to some farm industry stakeholders, many farmers have made crop insurance their primary risk management tool. Finally, another incentive for farmer participation may be growing concerns among farmers about the frequency and severity of adverse weather events, such as floods, droughts, heat waves, and strong storms. According to the Secretary of Agriculture, other USDA officials, and some state extension officials and academic researchers, farmers are increasingly concerned about such weather events and their impact on agricultural production, including crop losses. Federal crop insurance plays an important role in protecting farmers from losses caused by natural disasters and price declines, and it has become one of the most important programs in the farm safety net, according to USDA officials and some farm industry stakeholders. However, with increasing budgetary pressures, it is critical that federal resources be targeted as effectively as possible. With record farm income in recent years, the subsidies, including premium subsidies, provided for federal crop insurance have come under increasing scrutiny. 
Reductions in premium subsidies for farmers who purchase revenue policies, the most common and most expensive crop insurance policy type, present an opportunity to potentially save taxpayers hundreds of millions of dollars per year with limited increases in individual farmers’ production costs. The president included proposals for premium subsidy reductions in his fiscal year 2013, 2014, and 2015 budgets. Such a change would require congressional action and could either be implemented immediately, as in 2000, when Congress enacted legislation to increase premium subsidy rates, or phased in, as when RMA increases the premiums charged for crop insurance policies based on new actuarial data. One point of discussion in the debate over premium subsidy reductions is the possible impact on farmer participation in the program. The crop insurance industry and some researchers suggest that even a modest premium subsidy reduction would result in some farmers lowering their coverage levels or dropping coverage altogether. However, the administration, CBO, and other researchers say that a modest reduction in premium subsidies would have little impact on program participation and that incentives, such as the continued high level of premium subsidies, would likely keep farmers in the program. Although the impact of such a reduction is unknown, in the event that Congress reduced the crop insurance premium subsidy rates, actual information on the impact on farmer participation would be available if participation were monitored. To reduce the cost of the crop insurance program and achieve budgetary savings for deficit reduction or other purposes, Congress should consider reducing the level of federal premium subsidies for revenue crop insurance policies. In doing so, Congress should consider whether to make the full amount of this reduction in an initial year or to phase it in over several years. 
In addition, Congress should consider directing the Secretary of Agriculture to monitor and report on the impact, if any, of the reduction on farmer participation in the crop insurance program. We provided the Secretary of Agriculture with a draft of this report for review and comment. In its written comments, which are reproduced in appendix II, USDA said it had no comments on the report’s findings. In addition, USDA provided technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees; the Secretary of Agriculture; the Director, Office of Management and Budget; and other interested parties. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or fennella@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. Our objectives were to examine (1) trends in federal crop insurance costs and farm sector income and wealth from 2003 through 2012 and (2) the potential savings to the government and impacts on farmers, if any, of reducing federal premium subsidies for revenue insurance policies. To address these objectives, we interviewed officials of the U.S. Department of Agriculture (USDA), including officials from the Economic Research Service (ERS) and Risk Management Agency (RMA), and reviewed documents they provided, such as crop insurance program cost and outlay documents. We also spoke with officials at the Congressional Budget Office (CBO). 
To address our first objective, we reviewed and analyzed RMA data on the government’s cost for the federal crop insurance program for the period 2003 through 2012. We are reporting federal crop insurance program costs, not outlays, because the cost numbers more accurately reflect the true costs for a given year. For example, much of the actual costs for 2012 were not determined until the following year after the claims adjustments were completed and the underwriting gains and losses determined. In contrast, outlays for 2012 do not include many of the costs actually incurred that year, but they do include many costs incurred the prior year (i.e., 2011) because of the lag time in completing the claims adjustments for that year. Finally, in reporting costs, not outlays, we are being consistent with how the program reports its costs, including in the audited financial statements of the Federal Crop Insurance Corporation. We analyzed RMA crop insurance program data including data on the level of premium subsidies for revenue policies, the top 10 states that received revenue policy premium subsidies, the insurance coverage levels chosen by farmers with revenue policies, and the crops that received the most revenue policy premium subsidies. For overall program costs, we analyzed fiscal year data presented in RMA’s cost and outlay tables. RMA data contain more detailed crop insurance information by crop year, which is what we used for our revenue policy analyses. For these analyses, we only included information on “buy-up” policies—that is, the portion of crop insurance for which a farmer pays a premium. Any coverage that is purchased above the “catastrophic” level is considered “buy-up” coverage; this type of coverage represented 99.9 percent of the revenue policy premium subsidies for the 2003 through 2012 period. 
In addition, we only included information for individual revenue policies and excluded group revenue policies because these latter policies made up only a small portion (less than 2 percent) of the total premium subsidies associated with revenue policies. We selected the time period of 2003 through 2012 to get a representation of the trend in program costs, usage of revenue policies, and financial condition of the farm sector. At the time of our analysis, USDA officials said that 2012 would be the most recent year with complete and stable crop insurance program data. To get an understanding of trends in farm sector income and wealth, we reviewed and analyzed ERS data and reports on the overall financial condition of the farm sector, including information on net farm and cash income, production costs, and farm debt ratios from 2003 through 2012, as well as information from USDA’s National Agricultural Statistics Service on farmland values for these years. We also reviewed and analyzed ERS information on forecasts for these elements of the farm economy, including net cash income and net farm income for 2013 and 2014. For the purposes of this report, crop insurance costs and premium subsidies, which are budget-related data, are reported in nominal dollars, while data on median farm household and U.S. household income, net farm and net cash income, and farmland values are reported in inflation adjusted dollars, using 2012 as the reference year. In addition, as appropriate, we report these data in calendar, fiscal, or crop years, depending on how the data were reported in the source documents. Unless otherwise indicated, these data are in calendar years. To address our second objective, we analyzed RMA revenue policy crop insurance program data for 2012 to estimate the savings to the federal government from reductions in premium subsidies of 5, 10, 15, and 20 percentage points. 
We selected these percentages because they were in line with previous reductions proposed in the president’s 2014 budget, a 2013 CBO report (CBO, Options for Reducing the Deficit: 2014 to 2023 (Washington, D.C.: November 2013)), and a 2012 GAO report. We then estimated the additional production cost per acre, on average and by crop type, to individual farmers as a result of these premium subsidy reductions. Furthermore, we compared these additional production costs with the total cost of production, on average and by crop, to determine the percentage increase represented by these additional costs. We used ERS Agricultural Resource Management Survey data for 2012, where available, to determine the average production costs, per acre, for barley, canola, corn, cotton, grain sorghum, popcorn, rice, soybeans, sunflowers, and wheat. These were the crops eligible to receive revenue policy premium subsidies during the period covered by our review. We also reviewed the available agricultural economic literature and studies by CBO, ERS, and RMA, and we spoke with officials from those agencies to determine any potential savings from reductions in crop insurance premium subsidies and the impact, if any, on farmers’ participation in the crop insurance program as a result of premium subsidy reductions. Finally, we reviewed documents from farm industry stakeholders on the crop insurance program. For the various data used in our analyses, as discussed, we generally reviewed related documentation, interviewed knowledgeable officials, and reviewed related internal controls information to evaluate the reliability of these data. In each case, we concluded that the data were sufficiently reliable for the purposes of this report. We conducted this performance audit from May 2013 to August 2014 in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our objectives. In addition to the individual named above, James R. Jones, Jr., Assistant Director; Kevin S. Bray; Michael Kendix; David Moreno; Sophia Payind; Kelly Rubin; and Jerry Sandau made key contributions to this report. In addition, Cheryl Arvidson, Gary T. Brown, and Thomas M. Cook made important contributions to this report. Extreme Weather Events: Limiting Federal Fiscal Exposure and Increasing the Nation’s Resilience. GAO-14-364T. Washington, D.C.: February 12, 2014. Fiscal Exposures: Improving Cost Recognition in the Federal Budget. GAO-14-28. Washington, D.C.: October 29, 2013. 2013 Annual Report: Actions Needed to Reduce Fragmentation, Overlap, and Duplication and Achieve Other Financial Benefits. GAO-13-279SP. Washington, D.C.: April 9, 2013. High-Risk Series: An Update. GAO-13-283. Washington, D.C.: February 14, 2013. Crop Insurance: Savings Would Result from Program Changes and Greater Use of Data Mining. GAO-12-256. Washington, D.C.: March 13, 2012. Crop Insurance: Opportunities Exist to Reduce the Costs of Administering the Program. GAO-09-445. Washington, D.C.: April 29, 2009. Crop Insurance: Continuing Efforts Are Needed to Improve Program Integrity and Ensure Program Costs Are Reasonable. GAO-07-944T. Washington, D.C.: June 7, 2007. Crop Insurance: Continuing Efforts Are Needed to Improve Program Integrity and Ensure Program Costs Are Reasonable. GAO-07-819T. Washington, D.C.: May 3, 2007. Climate Change: Financial Risks to Federal and Private Insurers in Coming Decades Are Potentially Significant. GAO-07-760T. Washington, D.C.: April 19, 2007. 
Climate Change: Financial Risks to Federal and Private Insurers in Coming Decades Are Potentially Significant. GAO-07-285. Washington, D.C.: March 16, 2007. Suggested Areas for Oversight for the 110th Congress. GAO-07-235R. Washington, D.C.: November 17, 2006.
Federally subsidized crop insurance, which farmers can buy to help manage the risk inherent in farming, has become one of the most important programs in the farm safety net. Revenue policies, which protect farmers against crop revenue loss from declines in production or price, are the most popular policy type and account for nearly 80 percent of all premium subsidies. The crop insurance program's cost has come under scrutiny while the nation's budgetary pressures have been increasing. GAO was asked to look at the cost of the crop insurance program. This report examines (1) trends in federal crop insurance costs and farm sector income and wealth from 2003 through 2012 and (2) the potential savings to the government and impacts on farmers, if any, of reducing federal premium subsidies for revenue policies. GAO analyzed USDA crop insurance program data and farm sector income and wealth data from 2003 through 2012 (most recent year with complete crop insurance data); reviewed economic literature and documents from stakeholders including farm industry groups and researchers; and interviewed USDA officials. The cost of the federal crop insurance program and farm sector income and wealth grew significantly from 2003 through 2012. The cost of crop insurance averaged $3.4 billion a year from fiscal years 2003 through 2007, but it increased to $8.4 billion a year for fiscal years 2008 through 2012. According to the U.S. Department of Agriculture's (USDA) Risk Management Agency (RMA), the agency that administers the crop insurance program, subsidies for crop insurance premiums accounted for $42.1 billion─or about 72 percent─of the $58.7 billion total program costs from 2003 through 2012. Revenue policies, the most frequently purchased crop insurance option, accounted for $30.9 billion of the total premium subsidy costs for 2003 through 2012. 
Crop insurance premium subsidy rates—the percentage of premiums paid by the government—are set by Congress and would require congressional action to be changed. For most policies, the rates range from 38 to 80 percent, depending on the policy type, coverage level chosen, and geographic diversity of crops insured. As premium subsidy costs increased, farm sector income and wealth indicators also increased. For example, for each year from 2003 through 2012, median farm household income exceeded median U.S. household income. Specifically, on average, median farm household income was $7,205, or 13.8 percent, greater each year than median U.S. household income, in constant 2012 dollars. Farm sector income also grew from $73.8 billion in 2003 to $113.8 billion in 2012, in constant 2012 dollars. Farm real estate values, another measure of farm prosperity, increased by 72 percent from 2003 through 2012, in constant 2012 dollars, and farmers relied less on borrowed funds to finance their holdings.

Reducing premium subsidies for revenue policies could potentially result in hundreds of millions of dollars in annual budgetary savings with limited costs to individual farmers. For example, the federal government would have potentially saved more than $400 million in 2012 by reducing premium subsidies by 5 percentage points, and the savings would have been nearly $2 billion by reducing these subsidies by 20 percentage points. Although such reductions would have required farmers to pay more of their premiums, the impact on their average production costs per acre would have been limited, usually less than 2 percent, and often less than 1 percent. For example, for corn, premium subsidy reductions of 5 and 20 percentage points in 2012 would have raised average production costs per acre by about $2.80 and $11.20, respectively. These increases would have been about 0.4 percent and 1.7 percent, respectively, of the total average production cost per acre of $656 that year for corn. 
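The corn figures above follow from simple per-acre arithmetic, sketched below in Python. The $56-per-acre average total premium is an assumption implied by the reported numbers (a 5-percentage-point subsidy cut adding $2.80 per acre), not a figure stated in the report.

```python
# Reproduce the per-acre cost arithmetic for corn in 2012.
# avg_premium_per_acre is an assumed figure implied by the reported numbers;
# production_cost_per_acre is the stated average production cost for corn.
avg_premium_per_acre = 56.00       # assumed average total premium, $/acre
production_cost_per_acre = 656.00  # average production cost for corn, $/acre

for cut in (0.05, 0.20):  # subsidy reductions of 5 and 20 percentage points
    added_cost = avg_premium_per_acre * cut        # extra premium paid by the farmer
    share = added_cost / production_cost_per_acre  # as a share of production cost
    print(f"{cut:.0%} reduction: +${added_cost:.2f}/acre "
          f"({share:.1%} of production cost)")
```

Run as-is, this reproduces the $2.80 (0.4 percent) and $11.20 (1.7 percent) figures cited above.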
The ultimate impact of such limited production cost increases on farmers' income would depend on their individual profit margins. However, for the industry as a whole, the impact appears to be minimal. In 2000, when Congress enacted new premium subsidy rates, the new rates immediately became effective. In contrast, when RMA increases the premiums charged for policies, it generally phases in the increases over several years to lessen the impact on farmers. Documents from farm industry groups and some researchers note that reductions in premium subsidies could result in lower farmer participation in the program and lower insurance coverage levels. However, available economic literature indicates that farmers' response to such reductions may be small due to factors such as the attractiveness of revenue policies and increasing importance of crop insurance as other farm programs are reduced or eliminated. In addition, other stakeholders identified incentives that would help keep farmers in the program, including pressure from lenders to maintain crop insurance coverage and the importance of crop insurance to many farmers as their primary risk management tool. In the event that subsidy rates were reduced, actual information on the impact on farmer participation would be available if participation were monitored. To reduce the cost of the crop insurance program, Congress should consider reducing the level of federal premium subsidies for revenue crop insurance policies, including a phased reduction, if appropriate, and directing USDA to monitor and report on the impact, if any, of this reduction on crop insurance program participation. In written comments, USDA said it had no comments on the report's findings.
The Department of Agriculture’s Forest Service manages about 192 million acres of land—nearly 9 percent of the nation’s total surface area and 30 percent of all federal lands. Laws guiding the management of the 155 forests, 20 national grasslands, and 17 national recreation areas within the National Forest System require the agency to manage its lands to provide high levels of six renewable surface uses—outdoor recreation, rangeland, timber, watersheds and waterflows, wilderness, and wildlife and fish—to current users while sustaining undiminished the lands’ ability to produce these uses for future generations. In addition, the Forest Service’s guidance and regulations require the agency to consider the production of nonrenewable subsurface resources, such as oil, gas, and hardrock minerals, in its planning. To carry out the Forest Service’s mission, each year the President’s budget proposes and the Congress appropriates funds to, among other things, (1) manage the National Forest System, (2) conduct or sponsor forest and rangeland research, and (3) enhance the health and sustainable management of the nation’s state and private forests. In committee reports, the House and Senate Committees on Appropriations allocate funds to one or more line items within each of these appropriations. The agency then allocates these funds to its headquarters (Washington Office) and field offices. In the mid-1990s, the Forest Service asked the House and Senate Appropriations Committees to restructure its budget to increase the agency’s flexibility to carry out its mission and to improve its ability to use funds where they are most needed. The Committees incorporated many of these requested budget reforms in approving the Forest Service’s fiscal year 1995 appropriations. 
The Forest Service, created in 1905, is a hierarchical organization whose management is highly decentralized and whose managers have considerable autonomy and discretion for interpreting and applying the agency’s policies and directions. The Chief of the Forest Service heads the agency and, through Agriculture’s Under Secretary for Natural Resources and Environment, reports to the Secretary of Agriculture. In April 1998, the Chief of the Forest Service restructured the agency’s management team to facilitate needed efficiencies regarding the Forest Service’s accountability and business practices. As a result of the restructuring, a Chief Operating Officer is responsible for fiscal and business management and an Associate Chief for Natural Resources has direct oversight for natural resources programs. Both report directly to the Chief of the Forest Service. Among other things, the Forest Service’s Washington Office establishes policy and provides technical direction to the National Forest System’s three levels of field management: 9 regional offices, 123 forest offices, and about 600 district offices. At the Washington Office, the National Forest System has separate program directors for nine programs: Engineering; Lands; Recreation, Heritage, and Wilderness Resources; Minerals and Geology; Range Management; Forest Management; Watershed and Air Management; Wildlife, Fish, and Rare Plants; and Ecosystem Management. Similar lines of program management exist at the regional, forest, and district office levels. However, because of budgetary constraints, the management of some of these programs may be combined. The Forest Service starts to develop a budget for a given fiscal year about 2 years before the fiscal year begins. The agency constructs a budget for the National Forest System and other appropriations—including forest and rangeland research and state and private forestry—that indicates how funds will be allocated among line items. 
The agency submits its proposed budget to the Department of Agriculture for review and any changes about 15 months before the fiscal year begins. Agriculture, in turn, submits the Forest Service’s budget to the Office of Management and Budget for review and any changes about a year before the beginning of the fiscal year in which the funds will be spent. The President’s budget is submitted to the Congress no later than the first Monday in February for the fiscal year beginning the coming October 1st. Shortly afterwards, the Forest Service submits its explanatory notes to the House and Senate Committees on Appropriations. Once the Committees review, amend, and approve the agency’s budget, the Congress appropriates funds for the National Forest System and for other Forest Service appropriations as part of the appropriations act for the Department of the Interior and related agencies. The Committees’ reports or the appropriations act may also specify restrictions on certain types of spending and may earmark funds for special activities or projects. Once funds are received by the Forest Service, the agency removes funds needed to operate the Washington Office and specifies funding that will be used for national commitments and for special projects. The Washington Office then allocates the remaining funds by line item to its regional offices. The appropriation for the National Forest System includes 21 budget and extended budget line items that are generally used to fund the system’s nine programs. A National Forest System program may be funded from one or more line items under the appropriation for the National Forest System. When a program, such as Minerals and Geology, is funded from only one line item—in this instance, Minerals and Geology Management—the line item is referred to as a “budget line item.” Other programs are funded from two or more line items. 
For example, the Forest Management program is funded from the Timber Sales Management and the Forestland Vegetation Management line items. These line items are referred to as “extended budget line items” and are aggregated into a budget line item for Forestland Management. (See table 1.1.) Funds are usually allocated to the agency’s nine regional offices on the basis of budget allocation criteria developed by the Forest Service. For example, the criteria for allocating funds from the Wildlife Habitat Management extended budget line item to each region include, among other things, the number of acres, the opportunities for habitat restoration and enhancement, and the number of big game species. Regions then distribute the funds by line item to the 155 national forests on the basis of regional budget allocation criteria or on a program-by-program assessment of needs. Finally, each national forest office allocates funds to its districts by line item and by the type of activity that will be performed. For example, a national forest office might allocate some funds within the Grazing Management extended budget line item to a district to be used specifically to construct improvements for livestock grazing. District personnel not only receive funding slated for specific activities within each line item, they also track and charge their work accordingly. In fiscal year 1992, the Forest Service was cited by the National Performance Review—a White House-led study of ways to improve the efficiency of federal programs—as an example of an agency whose budget structure impeded the productive management and the efficient use of taxpayer dollars. 
As the Forest Service moved from managing individual resources, such as wildlife, recreation, timber, rangeland, and water, to a more broad-scale, more comprehensive approach to land management (ecosystem management), the agency proposed significant changes in its budget structure for fiscal year 1995 to help implement this management approach and improve efficiency. In acting on the Forest Service appropriations for fiscal year 1995, the House and Senate Appropriations Committees (1) consolidated line items in the agency’s budget for which specific amounts of funds are allocated, (2) expanded the agency’s authority to reprogram funds without requesting the Committees’ approval, and (3) restructured the agency’s budget so that all funding to carry out a project—including the funding for services provided by others—is consolidated in the program that will benefit most from the project. In return for this increased budget flexibility, the Appropriations Committees expected the Forest Service to improve its performance measures and accountability for expenditures and results. The Chairman and Ranking Minority Member of the Subcommittee on Interior and Related Agencies, House Committee on Appropriations, asked us to review the Forest Service’s implementation of the fiscal year 1995 budget reforms. As agreed with their offices, this report discusses (1) the Forest Service’s implementation of the fiscal year 1995 reforms and (2) the progress that the agency has made toward becoming more accountable for its results. Our review was limited primarily to funds appropriated to manage the National Forest System. The appropriation for the National Forest System represented about $1.3 billion or about 47 percent of the Forest Service’s discretionary appropriations and about 37 percent of its total appropriations for fiscal year 1997. 
We conducted our work at the Forest Service’s Washington Office; three of the agency’s nine regional offices—the Pacific Northwest (Region 6), the Southern (Region 8), and the Eastern (Region 9); and five national forests—the Deschutes and Willamette (in Oregon and Region 6), the Daniel Boone (in Kentucky and Region 8), and the Superior and Chippewa (in Minnesota and Region 9). To obtain information on both objectives, we interviewed Forest Service budget officials, field managers, and officials responsible for managing various programs within the National Forest System at all of the agency’s locations that we visited. We also obtained and reviewed relevant reports, records, correspondence, budget data, budget allocation criteria, and data on performance indicators at these offices. In addition, to obtain information on the Forest Service’s implementation of the budget reforms, we reviewed applicable laws and legislative histories relating to the reforms, the agency’s directives and guidance in implementing the reforms, and the agency’s budget justification explanatory notes. To obtain information on the Forest Service’s progress toward becoming more accountable for its results, we reviewed relevant reports by the Department of Agriculture’s Office of Inspector General, relevant studies by a consulting firm and an environmental group, and prior GAO reports and testimonies. We performed our work from September 1997 through September 1998 in accordance with generally accepted government auditing standards. In conducting our work, we did not independently verify the reliability of the financial data provided by the Forest Service nor did we trace the data to the systems from which they came. These systems were, in some cases, subject to audit procedures by the Department of Agriculture’s Office of Inspector General in connection with the agency’s financial statement audits. 
For fiscal years 1995, 1996, and 1997 and previous years, Agriculture’s Office of Inspector General reported that because of significant internal control weaknesses in various accounting subsystems, the Forest Service’s accounting data were not reliable. Despite these weaknesses, we used the data because they were the only data available and are the data that the agency uses to manage its programs. We obtained comments on a draft of this report from the Forest Service. The agency’s comments and our evaluation are presented in appendix I. The Forest Service’s management of the National Forest System has not appreciably changed as a result of the increased flexibility offered by the fiscal year 1995 budget reforms. Specifically, although consolidating the budget line items and extended budget line items was intended to provide field managers with larger pools of funds and, thus, greater discretion in deciding where to spend the funds, some forest and district offices have continued to distribute and track funds as if the consolidation had not occurred. In addition, the budget is still structured primarily by individual resource-specific programs, such as timber sales and wildlife habitat management, although the agency’s strategic goals and objectives increasingly require that these and other programs be integrated to achieve broader stewardship objectives, such as restoring or protecting forested ecosystems. The fiscal year 1995 budget reforms also expanded the Forest Service’s ability to move funds between line items without the Appropriations Committees’ approval. However, the agency has seldom requested such approval either before or after the reforms. On the basis of information provided by the Forest Service, the agency submitted one or two requests a year for the Appropriations Committees’ approval to reprogram funds among line items for the National Forest System in fiscal years 1994 through 1997. 
Thus, the reforms have not had a noticeable impact on the number of reprogrammings requested from the Appropriations Committees. Finally, the fiscal year 1995 budget reforms restructured the Forest Service’s budget so that all the funding for a project is consolidated in the program that will benefit most from that project. However, for a variety of reasons, including underestimating a project’s costs, a benefiting program may not have the funds needed to implement a project. In these instances, the benefiting program may require other programs that are providing support services to absorb the costs of the services instead of seeking to meet its needs by moving funds between line items or requesting that funds be reprogrammed. This practice circumvents the requirements established by the Appropriations Committees and the agency to move funds between line items and understates a project’s costs. It also precludes the Forest Service from providing the Congress and other interested parties with meaningful, useful, and reliable information on the costs and the accomplishments of the National Forest System’s programs.

The fiscal year 1995 budget reforms reduced the number of budget line items and extended budget line items in the Forest Service’s budget from 72 to 48, primarily by combining many of them. Within the appropriation for the National Forest System, the budget line items and extended budget line items were reduced from 37 to 28. Line items for which specific amounts of funds are allocated to support the National Forest System were reduced from 31 to 21, or by almost one-third. (See table 2.1.) The intent of this consolidation was to simplify the management of funds. By combining funds into larger pools, field and program managers would have increased flexibility and greater discretion in deciding where to spend the funds. The Forest Service officials we interviewed generally support consolidating line items in the budget and, in many instances, favor additional consolidation. 
However, some forest and district offices continue to distribute and track funds by line items that were combined under the fiscal year 1995 reforms, thus counteracting the increased flexibility and discretion provided by the consolidation. In addition, there is no clear link between the Forest Service’s integrated-resource approach to natural resources management, which emphasizes maintaining and restoring the health of forested, rangeland, and aquatic ecosystems, and the resource-specific line items in the agency’s budget. Since the fiscal year 1995 budget reforms, funds have generally been budgeted and allocated at the Forest Service’s Washington and regional offices consistent with the consolidated budget line items and extended budget line items. However, the implementation of the reforms at the forest and district offices has been left to the discretion of field and program managers. As a result, some forest and district offices continue to distribute and track funds as if the consolidation had not occurred, thus undermining the full potential of the budget reforms to simplify the management of funds. For example, in acting on the fiscal year 1995 appropriations for the Forest Service, the House and Senate Appropriations Committees reiterated to the agency the importance of clearly presenting in its budget justification the same level of detailed information that had been provided under the old budget structure. According to the Forest Service, for most programs, this meant keeping the same, or possibly expanding, the number of activities used to track expenditures. For example, to account for funds allocated to the National Forest System’s 21 line items, the Pacific Northwest office (Region 6) tracks funds for as many as 217 different activities, including many for line items that were eliminated through consolidation. 
To illustrate, although the fiscal year 1995 budget reforms reduced the number of timber-related extended budget line items in the National Forest System’s appropriation from seven to two, that office still tracks funds for as many as 14 different activities under the two line items, including several activities for the line items that had been eliminated. (See fig. 2.1.)

[Figure 2.1 lists the timber-related activities tracked under the two remaining line items, including: Timber Sale Planning for Current and Future Sales; Timber Sale Preparation for Current and Future Sales; Timber Sale Planning for Future Sales Only; Timber Sale Preparation for Future Sales Only; Timber Harvest Administration for Forest Stewardship; Timber Harvest Administration for Personal Use; Timber Harvest Administration for Timber Program; Appeals and Litigation - Program (costs of processing appeals, litigation, and contract claims); and Appeals and Litigation - Sales (costs of reworking sale plans that are changed as a result of appeals and litigation).]

According to the official at the Pacific Northwest office who is responsible for natural resources budget and finance, in addition to tracking expenditures by activity, some forest and district offices have also chosen to distribute funds by activity rather than by line item. Although field and program managers have some flexibility to move funds among activities, district office staff tend to see the distributions as rigid limits to planning and accounting for work. For example, instead of moving funds among activities during a fiscal year, field and program staff may redistribute work charged to activities after the fact to achieve or maintain specific levels of funding within activities (called “retroactive redistribution”). In 1998, this practice was noted in a review of the Forest Service’s financial systems by a private consulting firm as well as a report by the Department of Agriculture’s Office of Inspector General. The Forest Service is an agency in transition. 
Over the past decade, the agency has shifted its emphasis from consumption (primarily producing timber) to conservation (primarily sustaining wildlife and fish) and has moved from managing specific resources to a broader, more comprehensive ecosystem-based approach to land management. However, notwithstanding the fiscal year 1995 budget reforms, the Forest Service’s budget structure has not kept pace with the agency’s transformation and, as a result, there is no clear link between the agency’s ecosystem-based strategic goals and objectives and the National Forest System’s resource-specific line items. As the Forest Service has made clear in several documents during the past year, its overriding mission and funding priority, consistent with its existing legislative framework, is to maintain or restore the health of the lands entrusted to its care. These documents include the agency’s September 30, 1997, strategic plan prepared under the Government Performance and Results Act of 1993 (the Results Act), its fiscal year 1999 budget explanatory notes, its first annual performance plan developed under the Results Act, and the Chief’s March 1998 natural resource agenda for the twenty-first century. The agency intends to limit goods and services on national forests—including recreational experiences, commercial sawtimber and other forest products, and livestock and wildlife forage—to the types, levels, and mixes that the lands are capable of sustaining. The documents also make clear that the agency intends to fulfill this responsibility primarily by using an ecosystem-based approach to land management that emphasizes integrating resource-specific programs and activities to maintain and restore the health of forested, aquatic, and rangeland ecosystems. The fiscal year 1995 budget reforms created a new line item—ecosystem planning, inventory, and monitoring—to allow the Forest Service to plan more along the boundaries of natural systems. 
However, the Forest Service’s budget structure remains highly fragmented along the lines of individual resource-specific programs and activities, such as managing timber sales, livestock grazing, wildlife habitat, and wildfires. This fragmentation works against an integrated approach to land management. For example, the Forest Service’s fiscal year 1997 annual report cites the goal of restoring and protecting forested ecosystems as the agency’s highest priority. However, rather than having one large pool of funds available to achieve this goal and greater discretion to spend funds, a forest or district office may have to use up to 24 different funding sources to implement a plan to restore or protect a forested ecosystem. These funding sources include four National Forest System line items over which the forest and district offices have the most control. Of the remaining 20 funding sources, 7 are from the state and private forestry appropriation, 2 are from the wildland fire management appropriation, 1 is from the land acquisition appropriation, and the other 10 are from various permanent appropriations and trust funds for such activities as brush removal, timber salvage sales, and reforestation. According to some Forest Service officials we talked to, this fragmented approach can result in inefficiently implementing an ecosystem-based management plan. Since fiscal year 1995, the House and Senate Committees on Appropriations have expanded the Forest Service’s ability to move funds between line items without the Committees’ approval and then taken some of this increased flexibility away. Similarly, the Forest Service has loosened, then tightened, its requirements relating to obtaining the approval of the Chief before a region can move funds among extended budget line items. 
However, increasing or reducing the funding threshold for obtaining the Committees’ approval to reprogram funds and loosening or tightening the agency’s requirements seem to have little effect on the number of reprogrammings that the Forest Service requests from the Appropriations Committees. Before fiscal year 1995, the Forest Service was required to obtain the Appropriations Committees’ approval to reprogram more than $250,000, or 10 percent of the funds, whichever was less on an annual basis, between budget line items and extended budget line items. As part of the fiscal year 1995 budget reforms, the House and Senate Appropriations Committees expanded the agency’s reprogramming authority by allowing it to move, without requesting the Committees’ approval, (1) up to $3 million between budget line items, or 10 percent of the funds for a budget line item, whichever was less on an annual basis, and (2) funds among the extended budget line items within a budget line item. The agency, in turn, delegated the authority to move funds among the extended budget line items within a budget line item to its regional offices. However, concerned that the reforms had provided the Forest Service with too much latitude to make changes without sufficiently involving the Congress, the Appropriations Committees reduced the agency’s reprogramming authority in fiscal year 1998 by requiring the Forest Service to obtain their approval to reprogram more than $500,000, or 10 percent of the funds, whichever was less on an annual basis, between both budget line items and extended budget line items. The Forest Service, in turn, tightened its reprogramming guidelines to require its regional offices to obtain the approval of the Chief before reprogramming funds between the National Forest System’s budget line items or among its extended budget line items, up to $500,000. 
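The fiscal year 1998 threshold rule described above, the lesser of $500,000 or 10 percent of a line item’s funds, can be stated as a short function. This is an illustrative sketch of the rule only; the function name and parameters are our own, not an agency system.

```python
def needs_committee_approval(amount_moved, line_item_funds,
                             dollar_cap=500_000, percent_cap=0.10):
    """True if a proposed annual reprogramming between line items exceeds
    the fiscal year 1998 threshold: the lesser of $500,000 or 10 percent
    of the line item's funds (hypothetical encoding of the stated rule)."""
    threshold = min(dollar_cap, percent_cap * line_item_funds)
    return amount_moved > threshold
```

For a $3 million line item the 10-percent limit ($300,000) binds, so moving $400,000 would require the Committees’ approval; for a $10 million line item the $500,000 cap binds instead.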
The Forest Service’s district, forest, and regional offices have always been able to move funds between line items through a process called “brokering,” regardless of the funding threshold for obtaining the Appropriations Committees’ approval. Under this process, the Forest Service tries to meet its needs by moving funds between line items at the lowest possible organizational level without ever exceeding the amounts allocated in the Committees’ reports. Districts within a national forest advise their forest office of any need to move funds from one line item to another. The forest office then offsets or brokers the requests of one district against the requests of other districts within the forest, thus keeping the total funds for each line item within the amount allocated to that forest office and avoiding the need to request a reprogramming. Requests that cannot be brokered at the level of the forest office are submitted to the regional office, which offsets the requests of one forest office against the requests from others within the region, thus keeping the total funds for each line item within the amount allocated to the regional office and again avoiding the need to request a reprogramming. Finally, requests that cannot be brokered at the regional level are submitted to the Washington Office, which offsets the requests of one region against the requests of other regions while still keeping the total funds for each line item within the amount allocated to the agency and avoiding the need to request a reprogramming. For example, during a fiscal year, one district office may need more funds for wildlife habitat management and less for recreation management than it was allocated, while another district office within the same forest may need more funds for recreation management and less for wildlife habitat management. 
Under the Forest Service’s brokering process, the districts would simply trade or offset funds allocated for wildlife habitat management for funds allocated for recreation management. Trades that cannot be made at the level of the forest office are elevated to the regional level and ultimately to the Washington Office. Because the total funds for both line items remain within the amounts allocated to the agency, reprogramming is not required. Needs that cannot be met by brokering must be met by reprogramming. Although the Forest Service could not document the benefits resulting from expanding the agency’s authority to reprogram funds without the Appropriations Committees’ approval, increasing the funding threshold and allowing regional offices more flexibility to move funds to meet changed conditions may reduce the administrative burden at the Washington Office and at other organizational levels within the agency. Conversely, reducing the funding threshold and the regional offices’ flexibility to move funds may increase the administrative burden at these organizational levels. However, neither changing the funding threshold nor adjusting the regional offices’ flexibility to move funds seems to affect the number of reprogrammings that the Forest Service requests from the Committees. According to Forest Service officials, the agency rarely seeks reprogramming approval because it is the agency’s responsibility to stay as close as possible to the amounts allocated in the Committees’ reports. In addition, the process to request and obtain the Committees’ approval to reprogram funds can take several months. The process of determining reprogramming needs generally begins during a midyear review at which regional needs that cannot be met by brokering are identified. The agency then attempts to meet those needs that cannot be offset dollar-for-dollar by reprogramming funds within its authority to do so. 
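The brokering process described above is essentially a netting step repeated at each organizational level: offsetting requests cancel, and only the unmet remainder is elevated. A minimal sketch, with hypothetical names and data, not the Forest Service’s actual system:

```python
# Illustrative sketch of "brokering": net requests from child offices per
# line item; offsetting requests cancel, and any remainder is elevated to
# the next level (or, at the top, met by a reprogramming request).
from collections import defaultdict

def broker(requests):
    """requests: list of (line_item, delta) pairs, where a positive delta
    asks for more funds and a negative delta offers funds.
    Returns the net unmet change per line item that must be elevated."""
    net = defaultdict(int)
    for line_item, delta in requests:
        net[line_item] += delta
    return {item: delta for item, delta in net.items() if delta != 0}

# Two districts trade wildlife and recreation funds; the forest office
# brokers the requests fully, so nothing is elevated to the region.
district_requests = [
    ("wildlife habitat", +50_000), ("recreation", -50_000),  # district A
    ("wildlife habitat", -50_000), ("recreation", +50_000),  # district B
]
print(broker(district_requests))  # fully offset: {}
```

When requests do not offset dollar-for-dollar, the nonzero remainder is what a forest office would pass up to its region, mirroring the escalation path described above.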
Only if it cannot meet its reprogramming needs within its funding threshold will the Forest Service request the Appropriations Committees’ approval to reprogram funds, and only after the request has been (1) routed to several offices within the Department of Agriculture for sequential review and approval, (2) subsequently submitted to the Office of Management and Budget for its review and approval, and (3) forwarded to the Secretary of Agriculture for submittal to the Committees. As a result, the Forest Service has seldom requested such approval either before or after the fiscal year 1995 budget reforms. On the basis of information provided by the Forest Service, the agency submitted one or two requests a year for the Appropriations Committees’ approval to reprogram funds among line items for the National Forest System in fiscal years 1994 through 1997. The amounts totaled about $29.1 million in fiscal year 1994, $35.5 million in fiscal year 1995, $9.5 million in fiscal year 1996, and $13.7 million in fiscal year 1997. The fiscal year 1995 budget reforms restructured the Forest Service’s budget so that all funding to carry out a project is consolidated in the program that will benefit the most from that project. Under this concept—called “benefiting function”—a program, such as Forest Management, that requires support services from other programs, including the Wildlife, Fish, and Rare Plants, to assist in conducting environmental analyses and preparing environmental documents relating to a planned timber sale, should pay the costs of those services, rather than the programs that provide the support. However, programs that underestimate the costs of a project or otherwise do not have the funds needed to pay for a project’s support services may require other programs that are providing support services to absorb the costs of the services. 
Agency officials informed us that "charging as budgeted" and not "as worked" was sometimes a more acceptable option than either not doing the project or requesting a time-consuming and possibly uncertain brokering or reprogramming of funds. However, this practice not only circumvents the requirements established by the Appropriations Committees and the agency to move funds between line items and understates a project's cost, but it also precludes the Forest Service from providing the Congress and other interested parties with meaningful, useful, and reliable information on the costs and the accomplishments of the National Forest System's programs. Although quantifying the extent to which staff providing support services do not charge their work to the benefiting program is not possible without firsthand knowledge of each project, the practice of a benefiting program requiring other programs to absorb the costs of providing support services appears to be widespread throughout the Forest Service. For example, at our request, an official in the Lands program at the Washington Office conducted an internal survey of the National Forest System in July 1998. On the basis of that systemwide survey, he estimated that since fiscal year 1995, on average, about 49 percent of the funds allocated to the Lands program to survey, locate, mark and post, and maintain previously marked property lines between lands in the National Forest System and lands in other ownership (landline location) have been used to support other programs but charged to the Lands program. Funds allocated to the Lands program for landline location activities average about $14 million a year. Other studies have reached similar conclusions.
For instance, a March 1998 report by a private consulting firm that examined the Forest Service's financial systems states that "the capability to work around 'charged as worked' initiatives is the most serious criticism of the agency's current accounting and budget infrastructure. This capability is often cited as the primary reason for the Forest Service's lack of financial credibility." Moreover, "charging as budgeted" and not "as worked" appears to be occurring at all three levels of National Forest System field management. For example, a March 1998 report by the Department of Agriculture's Office of Inspector General states that 8 out of 10 biological evaluations conducted in one district office were paid for by the Wildlife, Fish, and Rare Plants program instead of the benefiting program (e.g., timber and recreation). Similarly, Wildlife, Fish, and Rare Plants program officials in the Pacific Northwest (Region 6) office observed that the recreation, minerals, and range programs were not providing adequate funds for biological support services to prepare environmental documents, so funds were being taken inappropriately from the wildlife program. And a briefing paper prepared in 1996 by the Washington Office's director of the Wildlife, Fish, and Rare Plants program noted that no attempt had been made to fund salary costs within the Washington Office consistent with the concept of benefiting function. According to several Forest Service officials we spoke to and agency documents that we reviewed, in some instances, staff from programs providing support services may not always charge their costs to the benefiting program because the program primarily benefiting from the work has not been clearly identified, defined, or understood.
For example, work mischarged to fisheries activities in the Wildlife, Fish, and Rare Plants program in the Eastern Region (Region 9) dropped from an average of 60 percent to 12 percent after the region circulated guidance on identifying the benefiting program and the importance of charging work to it. Other regional offices and forest offices have issued similar guidance to clarify specific benefiting programs and activities. Some of the confusion in identifying the benefiting program may be because the Forest Service’s budget structure has not kept pace with the agency’s movement away from goals and objectives that clearly benefit one resource-specific program toward using multiple programs to accomplish broader ecosystem-based goals and objectives. Timber as a commodity program versus timber as a tool to achieve a stewardship objective, such as maintaining or restoring a forested ecosystem, is an example. The agency’s Forest Management Program Report for fiscal year 1997 notes that the proportion of total harvest volume removed solely for timber commodity purposes had fallen from 71 percent in fiscal year 1993 to 52 percent in fiscal year 1997. During that time, the proportion removed for forest stewardship purposes—mostly to accomplish a forest ecosystem health-related objective—had grown from 23 to 40 percent. This trend is expected to continue and, by fiscal year 1999, the Forest Service estimates that the proportion of total harvest volume removed solely for timber commodity purposes will have fallen to 46 percent while the proportion removed for forest stewardship purposes will have grown to 54 percent. Although timber sales will increasingly be used as a tool to maintain or restore forested ecosystems, timber sales management under the Forest Management program is still identified as the benefiting function. 
We found instances in which field and program managers justified charging costs to the programs providing support services on the grounds that many programs benefited from the project, so the program with available funding should pay. For example, a snowmobile club using the Superior National Forest in Minnesota asked permission from the Forest Service to build a snowmobile trail. However, because of the requirements of the Wilderness Act, the trail could not be constructed on lands designated as wilderness. To locate the boundary of the wilderness, the Lands program was required to survey the area of the forest where the trail was to be built and mark and post the boundaries of the wilderness. Officials at the Forest Service's Washington Office agreed that the Recreation, Heritage, and Wilderness Resources program was the benefiting program because it is responsible for both recreation and wilderness management. However, the costs to survey, locate, and mark and post the boundaries of the wilderness area were absorbed by the Lands program. The forest office's budget and finance officer justified charging the costs to the Lands program by pointing out that the boundaries of the wilderness area would eventually have to be established anyway. However, the manager of the Lands program in the forest office stated that locating the boundaries of wilderness to build a snowmobile trail was a relatively low priority within that program.
Moreover, the funds allocated to that program were needed to meet the Forest Service's priority of reducing the risks to the National Forest System's resources, such as timber theft, soil and water degradation, and encroachments and trespass, that are caused by the rapid population growth along the boundaries of the national forests—an area termed the "wildland/urban interface." In an April 1997 report, we stated that the Forest Service had not given adequate attention to improving its accountability for expenditures and performance and that improvements were often left to the discretion of regional offices and forests, with uneven or mixed results. The failure of certain regions, forests, and districts to consistently charge the costs of support services to the benefiting programs is another example of an organizational culture of indifference toward accountability. In the April 1997 report, as well as in March 1998 testimony, we observed that strong leadership within the Forest Service would be required to ensure corrective action. In exchange for the greater flexibility granted to the Forest Service by the fiscal year 1995 budget reforms, the Appropriations Committees expected the agency to, among other things, improve its performance measures and increase its accountability for results. Actions to be taken by the Forest Service included improving its existing performance measure system and implementing a management cost and performance reporting system that it was developing. In addition, the Forest Service has developed agencywide criteria to allocate appropriated funds to its regions and forests.
However, (1) the Forest Service’s budget allocation criteria are often not linked to the agency’s strategic goals and objectives; (2) its performance measures do not, in many instances, adequately reflect its accomplishments or progress toward achieving its goals and objectives; and (3) the management cost and performance reporting system that the agency was, and is still, developing uses the performance measures as input. As a result, the Forest Service, the Congress, and other interested parties do not have an adequate measure of the agency’s funding needs or its progress toward achieving its goals and objectives. Since fiscal year 1996, the Forest Service has used criteria developed at the Washington Office to allocate funds by extended budget line items to its field offices. However, these allocation criteria often are not linked to the agency’s strategic goals and objectives. For instance, the Forest Service’s fiscal year 1997 annual report cites the goal of restoring and protecting forested ecosystems as the agency’s highest priority. Similarly, the agency’s September 30, 1997, 5-year strategic plan makes clear that, consistent with its existing legislative framework, the Forest Service’s overriding mission and funding priority is to maintain or restore the health of the lands entrusted to its care and that it intends to fulfill this responsibility primarily by maintaining and restoring the health of forested, aquatic, and rangeland ecosystems. The agency’s July 1998 Forest Management Program Report for fiscal year 1997 continues this theme, noting that the proportion of total harvest volume removed to accomplish forest ecosystem health-related objectives and other forest stewardship purposes had grown to 40 percent and that by fiscal year 1999 this proportion is expected to increase to 54 percent. 
However, the criteria that the Forest Service used to allocate fiscal year 1998 funds to its field offices to manage timber sales were based solely on managing timber as a commodity rather than on using it as a tool to accomplish a stewardship objective. In its first annual performance plan developed under the Government Performance and Results Act of 1993 (the Results Act), dated February 4, 1998, the Forest Service identified three extended budget line items within the National Forest System appropriation that are available to forest and district offices to restore or protect forested ecosystems, including one for Timber Sales Management. However, all three of the budget allocation criteria for this funding source relate to providing a continuous supply of timber from the national forests, not to restoring or protecting the forested ecosystems. While the agency’s Forest Management Program Report for fiscal year 1997 stresses the fact that the timber being removed from the national forests today includes proportionately more (1) dead and dying trees, as opposed to green timber, and (2) nonsawtimber, as opposed to sawtimber, the criteria for allocating funds appropriated for Timber Sales Management for fiscal year 1998 relate solely to the volume of green timber produced or offered. (See table 3.1.) Soon after the fiscal year 1995 budget reforms were enacted, the Forest Service sent a memorandum to its managers outlining the reforms and how it intended to fulfill its commitment to the Congress to improve accountability. Among other things, the Forest Service planned to develop new measures of performance and improve existing indicators in the primary system it has been using to measure performance—the Management Attainment Report or MAR report. The indicators in the MAR report are intended to measure how well the Forest Service’s field offices are, and the agency as a whole is, performing. 
However, with few exceptions, the agency officials we interviewed considered the MAR report to be, at best, an imperfect measure of the agency’s performance and, at worst, misleading. In total, the MAR report has more than 100 indicators for the National Forest System’s nine programs. These indicators include the number of forest plan revisions completed or underway, the number of miles of wilderness trails, the number of heritage sites evaluated or protected, and the number of acres of noxious weeds treated. The indicators are intended to measure how well field offices are performing. Information from the MAR report is also used to report the Forest Service’s performance to the Congress and the public. Prior to the beginning of a fiscal year, Forest Service program managers in the Washington Office negotiate performance targets for a handful of MAR indicators for their individual programs. These targets are then allocated by program to the regional, forest, and district offices. At the end of the fiscal year, program staff in the district offices report their accomplishments by indicator to their forest office. The forest offices combine the districts’ accomplishments and forward them to their regional office, which in turn combines the forests’ accomplishments and forwards them to the Washington Office where they are combined and reported to the Congress and the public. The MAR indicators often do not adequately reflect the Forest Service’s progress toward achieving its strategic goals and objectives. For instance, restoring and protecting forested ecosystems is the Forest Service’s highest priority. However, more often than not, the MAR indicators do not provide any indication of the agency’s progress toward achieving this objective. 
For example, in its first annual performance plan developed under the Results Act, the Forest Service identifies three MAR indicators related to the three extended budget line items within the National Forest System appropriation that are available to forest and district offices to restore or protect forested ecosystems. (See table 3.2.) However, none of these MAR indicators provides a good measure of the agency’s progress toward achieving this objective. The primary objective of the activities relating to two of the three MAR indicators is to provide for a continuous supply of timber from the national forests, rather than to maintain or restore the health of the lands. For instance, timber stand improvement is defined by the Forest Service as “noncommercial cutting and other treatments made to increase the growth and improve the quality of trees for timber uses” and reforestation is defined as “treatments or activities that help to reestablish stands of trees after harvest.” The remaining MAR indicator is intended to measure a biological component of a forested ecosystem; that is, its wildlife. However, this indicator measures only the number of acres of terrestrial wildlife habitat restored or enhanced and not the agency’s progress toward accomplishing its stated objective of maintaining well-distributed viable populations of wildlife (the viability or viable populations requirement). Moreover, because the indicator is limited to wildlife, it does not measure the agency’s progress toward maintaining the diversity of other biological components of ecosystems, such as plant communities. The Forest Service’s September 30, 1997, 5-year strategic plan also identifies goals and objectives for goods and services on national forests, including providing quality recreational experiences. 
In addition, the Chief’s March 1998 natural resource agenda for the twenty-first century emphasizes recreation as one of only four key areas on which the Forest Service intends to focus its resources. However, of the six potential funding sources within the National Forest System’s appropriation that are available to forest and district offices to provide quality recreation, four did not have any MAR indicators relating specifically to recreation for fiscal year 1998. In addition, none of the fiscal year 1998 MAR indicators for the remaining two funding sources—recreation management and road maintenance—measures the agency’s progress toward providing quality recreational experiences. Rather than quality and outcomes, the MAR indicators measure quantity or such outputs as seasonal capacity available at developed facilities; the number of miles of roads and recreational trails; the number of permits “in existence” for private recreational cabins, special group events, and other noncommercial special uses; and the number of visitors to the forests. (See table 3.3.) Moreover, (1) the seasonal capacity available at developed facilities includes capacity that is not being maintained “to standard,” (2) the number of special use permits includes those not administered to standard but “on the books,” and (3) the total miles of Forest Service-managed roads includes those “less than fully maintained.” Thus, a substandard facility or an unmaintained road is counted as an accomplishment toward improving the level of customer satisfaction provided by recreational opportunities on national forests. 
In its fiscal year 1999 annual performance plan developed under the Results Act, the Forest Service stated that it is developing a new process—called "Meaningful Measures"—that will, among other things, (1) identify measurable components of the recreation program; (2) establish standards of quality for each component; and (3) monitor, measure, and report actual management attainment of the quality standards. The plan states that the process should be available in fiscal year 1999 for use in preparing the fiscal year 2000 performance plan. However, as noted by the Department of Agriculture's Office of Inspector General in 1998, the process, which has been under development since at least 1994, (1) is still evolving, (2) has not been implemented, and (3) has not been integrated into the automated real property inventory and management system that the agency has been developing since 1993. Not only do the MAR indicators often measure quantity and outputs when they should be measuring quality and outcomes, but they also do not measure outputs consistently. A frequent complaint by officials we interviewed was that many of the MAR indicators are so broadly defined that two field units reporting identical accomplishments may have expended very different levels of effort and accomplished very different objectives. For example, for fiscal year 1998, one of the MAR indicators for both the Wildlife Habitat Management and the Threatened, Endangered, and Sensitive Species Habitat Management extended budget line items was the "total number of structures constructed." However, according to agency officials, a structure can be as inexpensive as a wooden box for nesting ducks or as resource-intensive as a fish ladder to increase the number of adult fish migrating upstream.
A field unit with few resources, yet eager to meet its performance targets, has an incentive to focus on less resource-intensive activities even though by focusing its efforts on one large project it might actually provide greater wildlife benefits. Finally, many Forest Service officials stated that MAR data are not reliable. They told us that they do not expend much effort to ensure the accuracy of the information they report. Moreover, no unit of the National Forest System that we visited systematically reviewed and audited the accuracy of the accomplishments reported by field and program staff, and some have developed so-called “cuff” records and reports that are unique to the units and cannot be combined and reported to the Congress and the public. In exchange for the greater flexibility granted to the Forest Service by the fiscal year 1995 budget reforms, the agency also agreed to implement a management cost and performance reporting system called All Resources Reporting that it has been developing since fiscal year 1988. The agency is uncertain when this system will be fully implemented. The system is intended to provide meaningful, useful, and reliable information on the National Forest System’s costs, revenues, accomplishments, and economic benefits to help meet the agency’s responsibilities for financial management and accomplishment reporting. To provide such information, the reporting system depends on both reliable financial data and adequate performance measures, neither of which the Forest Service currently has. All Resources Reporting is intended to be an integrated financial and accomplishment reporting system. It was designed to clearly display the relationship between expenditures associated with a program or activity in a national forest and the revenues collected or other outcomes or outputs resulting from that program or activity. 
In addition, it is to include socioeconomic information to help assess the annual social and economic benefits derived from a national forest. The system comprises a family of year-end financial statements and other reports intended to capture the benefits and costs of program management. However, the system and its statements and reports depend on accurate and complete financial and performance data, which the agency cannot provide. We have previously reported on shortcomings in the Forest Service's information systems and accounting and financial data—such as the lack of reliable account balances for lands, buildings, and roads and the lack of detailed records to substantiate amounts that the agency either owes or is owed by others. These shortcomings preclude the Forest Service from presenting accurate and complete financial information. Because of the severity of the problems identified, we are monitoring and periodically reporting on the Forest Service's effort to correct its accounting and financial reporting deficiencies. On the basis of our work, we believe that the earliest that the Congress may have assurance that the agency's financial statements are reliable is when the Department of Agriculture's Inspector General reports on the Forest Service's fiscal year 2000 statements sometime in fiscal year 2001. To clearly display the relationship between expenditures and results, the All Resources Reporting system must also have adequate and complete performance data. However, to measure performance, the reporting system relies on the MAR indicators, which may be inadequate measures of the Forest Service's accomplishments or progress toward achieving its goals and objectives.
Moreover, while the agency has identified the actions required to correct known accounting and financial reporting deficiencies and has established a schedule to attain financial accountability within the next few years, it has not identified the actions required to correct the problems with its performance measures or established a schedule to achieve accountability for its performance by a certain date. The Forest Service's management of the National Forest System has not appreciably changed as a result of the fiscal year 1995 budget reforms primarily because of two underlying causes—one relatively new and the other as old as the agency itself. New is the inability of the agency's budget structure to keep pace with the Forest Service's ongoing transition from an agency emphasizing consumption (primarily producing timber) to one emphasizing conservation (primarily sustaining wildlife and fish) and from an agency managing specific resources to one managing forested and other ecosystems. As a result, there is (1) currently no clear link between the agency's ecosystem-based strategic goals and objectives and the resource-specific National Forest System line items in its budget and (2) some confusion within the agency in identifying the program that will benefit most from a project so that costs can be consistently charged to that program. As old as the Forest Service itself is the agency's highly decentralized organizational structure and the considerable autonomy and discretion that field and program managers have in interpreting and applying the agency's policies and directions. As in the past, (1) implementation of the fiscal year 1995 reforms within the Forest Service's hierarchical organization has been left to the discretion of regional, forest, and district offices with uneven and mixed results and (2) there have been no consequences associated with making a certain decision and no responsibility fixed for attaining a particular result.
The broad discretion that the Forest Service has given its field and program managers has resulted in, among other things, (1) some forest and district offices continuing to distribute and track funds as if the reforms had not occurred, (2) some field managers redistributing work charged to other activities after the fact in order to achieve or maintain specific levels of funding within activities, and (3) programs without the funds needed to pay for a project’s support services requiring other programs that are providing the support to absorb the costs of the services rather than seeking to meet their needs by moving funds between line items or by requesting a reprogramming of funds by the Chief of the Forest Service or the House and Senate Appropriations Committees. Moreover, the Forest Service has not fulfilled its part of the “quid pro quo” with the Congress that resulted from the fiscal year 1995 budget reforms. Although the Appropriations Committees gave the agency increased flexibility over its budget, the Forest Service has not provided the Committees with the improved accountability that they requested. Currently, there is no clear link between the Forest Service’s strategic goals and objectives and its budget allocation criteria and performance measures. Rather than develop new criteria and measures and improve existing ones to better align them with its mission and funding priorities, the agency is trying to use old resource-specific allocation criteria and performance measures with its new integrated-resource goals and objectives. The disconnect between the Forest Service’s strategic goals and objectives and its performance measures and the inadequacy of the measures themselves become even more critical because the management cost and performance reporting system, which the agency has been developing since 1988, uses the performance measures to display the relationship between expenditures and results. 
Inadequate and unreliable performance measures that are also not linked to the agency’s strategic goals and objectives will be used to report accomplishments in achieving those goals and objectives. As a result, the Forest Service, the Congress, and other interested parties do not have an adequate measure of the agency’s funding needs or its progress toward achieving its goals and objectives. While further changes to the Forest Service’s budget structure seem to be warranted to facilitate management of the 155 national forests, we believe that any future revisions should coincide with, rather than precede, actions required to correct known accounting and financial reporting deficiencies as well as problems with performance-related data, measurement, and reporting. However, the Forest Service has not established a schedule to achieve accountability for its performance and is uncertain when its management cost and performance reporting system will be fully implemented. A firm schedule is needed so that the agency can demonstrate progress toward becoming more accountable for its performance and results. Developing and implementing a firm schedule to correct identified management deficiencies and to achieve performance accountability will require strong leadership within the agency and sustained oversight by the Congress to make clear the demarcation between the discretion that regional, forest, and district offices have in managing their lands and resources and the need to strictly adhere to the agency’s policies and requirements relating to financial and performance accountability. The April 1998 restructuring of the Forest Service’s management team that placed responsibility for fiscal and business management under a Chief Operating Officer who reports directly to the Chief of the Forest Service may provide the needed leadership. 
To improve the Forest Service's accountability for results, we recommend that the Secretary of Agriculture direct the Chief of the Forest Service to (1) revise the agency's budget structure, budget allocation criteria, and performance measures to better link them to the Forest Service's strategic goals and objectives and (2) incorporate the new performance measures into the management cost and performance reporting system that the agency is developing. Moreover, to help ensure that the budget allocation criteria and performance measures are revised and the management cost and performance reporting system is implemented in a timely manner, we recommend that the Secretary direct the Chief to establish a firm schedule to achieve performance accountability. We provided copies of a draft of this report to the Forest Service for its review and comment. The agency's comments, together with our responses to them, appear in appendix I. The Forest Service generally agreed with the report's findings and recommendations. However, it believed that improvements to its budget structure should be made concurrent with, rather than after, improvements to its budget allocation criteria, performance measures, and reporting systems as suggested in the draft report. We agree with the Forest Service that a piecemeal approach to correcting known accounting and financial reporting deficiencies and performance-related problems will not work and have revised the report accordingly. The Forest Service also provided comments on the factual content of the report, and changes were made as appropriate. The following are GAO's comments on the Forest Service's October 23, 1998, letter. 1.
We have revised the report to clarify that (1) since fiscal year 1995, funds have generally been budgeted and allocated at the agency's Washington and regional offices consistent with the budget reforms, (2) implementation of the reforms at the forest and district offices has been left to the discretion of field and program managers, and (3) some forest and district offices continue to distribute and track funds as if the reforms had not occurred. 2. We agree with the Forest Service that a piecemeal approach to correcting known accounting and financial reporting deficiencies and performance-related problems will not work and have revised the report to state that improvements to the agency's budget structure should be made concurrent with, rather than after, improvements to its budget allocation criteria, performance measures, and reporting systems. Jean Brady Marcus R. Clark, Jr. Charles S. Cotton Doreen S. Feldman Angela M. Sanders
Pursuant to a congressional request, GAO reviewed the Forest Service's implementation of fiscal year (FY) 1995 budget reforms, focusing on the progress that the agency has made toward becoming more accountable for its results. GAO noted that: (1) the Forest Service's management of the National Forest System has not appreciably changed as a result of the increased flexibility offered by the FY 1995 budget reforms; (2) consolidating the line items was intended to provide field managers with greater discretion in deciding where to spend funds to better achieve the agency's goals and objectives; (3) however: (a) some field offices have continued to distribute and track funds as if the consolidation had not occurred; and (b) the budget is still structured primarily by individual resource-specific programs; (4) the reforms expanded the Forest Service's authority to move funds between line items without the appropriations committees' approval; (5) the agency has seldom requested such approval either before or after the reforms; (6) the agency submitted one or two requests a year for the Appropriations committees' approval to move funds among line items for the National Forest System in fiscal years 1994 through 1997; (7) the reforms have not had a noticeable impact on the number of times that the Forest Service has made such requests of the committees; (8) the reforms restructured the agency's budget so that all the funding for a project is consolidated in the program that will benefit most from the project; (9) however, a benefitting program may not have the funds needed to implement a project; (10) it may require other programs that are providing support services to absorb the costs of those services; (11) this practice circumvents the requirements established by the appropriations committees and the agency to move funds between line items and understates a project's costs; (12) the Forest Service has not provided Congress with the improved accountability that the 
appropriations committees requested when they gave the agency increased flexibility over its budget; (13) GAO found that: (a) the agencywide criteria developed by the Forest Service to allocate appropriated funds to its regions and forests are often not linked to its strategic goals and objectives; (b) the agency's performance measures do not adequately reflect its accomplishments or progress toward achieving the goals and objectives; and (c) the management cost and performance reporting system that the agency has been developing for over 10 years uses the inadequate performance measures as input; and (14) the Forest Service, Congress, and others do not have an adequate measure of the agency's funding needs or its progress toward achieving its goals and objectives.
Medical devices can range in complexity from a simple tongue depressor to a sophisticated CT (computed tomography) x-ray system. Most of the devices reach the market through FDA’s premarket notification (or 510(k)) review process. Under its 510(k) authority, FDA may determine that a device is substantially equivalent to a device already on the market and therefore not likely to pose a significant increase in risk to public safety. When evaluating 510(k) applications, FDA determines whether the new device is as safe and effective as a legally marketed predicate device. Performance data (bench, animal, or clinical) are required in most 510(k) applications, but clinical data are needed in less than 10 percent of applications. An alternative mode of entry into the market is through the premarket approval (PMA) process. PMA review is more stringent and typically longer than 510(k) review. For PMAs, FDA determines the safety and effectiveness of the device based on information provided by the applicant. Nonclinical data are included as appropriate. However, the answers to the fundamental questions of safety and effectiveness are determined from data derived from clinical trials. FDA also regulates research conducted to determine the safety and effectiveness of unapproved devices. FDA approval is required only for “significant risk” devices. Applicants submit applications for such devices to obtain an investigational device exemption (IDE) from regulatory requirements and approval to conduct clinical research. For an IDE, unlike PMAs and 510(k)s, it is the proposed clinical study that is being assessed—not just the device. Modifications of medical devices, including any expansion of their labeled uses, are also subject to FDA regulation. Applications to modify a device that entered the market through a PMA are generally linked to the original PMA application and are called PMA supplements.
In contrast, modifications to a 510(k) device are submitted as new 510(k) applications. References may be made to previous 510(k) applications. FDA uses several measures of duration to report the amount of time spent reviewing applications. In this letter, we use only three of those measures. The first is simply the time that elapses between FDA’s receipt of an application and its final decision on it (total elapsed time). The second measure is the time that FDA has the application under its review process (FDA time). This includes both the time the application is under active review and the time it is in the FDA review queue. The amount of time FDA’s review process has been suspended, waiting for additional information from the applicant, is our third measure (non-FDA time). Our measures of review time are not intended to be used to assess the agency’s compliance with time limits for review established under the Federal Food, Drug, and Cosmetic Act (the act). The time limits for PMA, 510(k), and IDE applications are 180, 90, and 30 days, respectively. FDA regulations allow for both the suspension and resetting of the FDA review clock under certain circumstances. How review time is calculated differs for 510(k)s and PMAs. If a PMA application is incomplete, depending on the extent of the deficiencies, FDA may place the application on hold and request further information. When the application is placed on hold, the FDA review clock is stopped until the agency receives the additional information. With minor deficiencies, the FDA review clock resumes running upon receipt of the information. With major deficiencies, FDA resets the FDA clock to zero upon receipt of the information. In this situation, all previously accrued FDA time is disregarded. (The resetting of the FDA clock can also be triggered by the applicant’s submission of unsolicited supplementary information.) 
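To make the clock accounting concrete, the following minimal sketch (all dates and segment boundaries are hypothetical, and the model is deliberately simplified relative to FDA's actual system) shows how hold periods partition a review timeline into FDA and non-FDA time:

```python
from datetime import date

# Hypothetical PMA review timeline. Each segment is (start, end, kind):
# "fda" = application under FDA's review process (including queue time),
# "hold" = review suspended, awaiting information from the applicant.
segments = [
    (date(1992, 1, 10), date(1992, 3, 1),  "fda"),   # initial review cycle
    (date(1992, 3, 1),  date(1992, 4, 15), "hold"),  # on hold for major deficiencies
    (date(1992, 4, 15), date(1992, 7, 1),  "fda"),   # review resumes (clock reset to zero)
]

fda_days  = sum((end - start).days for start, end, kind in segments if kind == "fda")
hold_days = sum((end - start).days for start, end, kind in segments if kind == "hold")
elapsed   = (segments[-1][1] - segments[0][0]).days

print(f"FDA time:     {fda_days} days")    # sums all review segments, even across a reset
print(f"non-FDA time: {hold_days} days")   # time suspended awaiting the applicant
print(f"elapsed:      {elapsed} days")     # always fda_days + hold_days
```

Note that FDA time as defined in this report sums every review segment, including time accrued before a resetting of the FDA clock; the reset matters only for the statutory clock, not for this measure.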
The amount of time that accrues while the agency is waiting for the additional information constitutes non-FDA time. For 510(k)s, the FDA clock is reset upon receipt of a response to either major or minor deficiencies. For this report, we define FDA time as the total amount of time that the application is under FDA’s review process. That is, our measure of FDA time does not include the time that elapses during any suspension, but does include time that elapsed before the resetting of the FDA clock. The total amount of time that accrues while the agency is waiting for additional information constitutes non-FDA time. (The sum of FDA and non-FDA time is our first measure of duration—total elapsed time.) The act establishes three classes of medical devices, each with an increasing level of regulation to ensure safety and effectiveness. The least regulated, class I devices, are subject to compliance with general controls. Approximately 40 percent of the different types of medical devices fall into class I. At the other extreme is premarket approval for class III devices, which constitute about 12 percent of the different types of medical devices. Of the remainder, a little over 40 percent are class II devices, and about 3 percent are as yet unclassified. In May 1994, FDA implemented a three-tier system to manage its review workload. Classified medical devices are assigned to one of three tiers according to an assessment of the risk posed by the device and its complexity. Tier 3 devices are considered the riskiest and require intensive review of the science (including clinical data) and labeling. Review of the least risky devices, tier 1, entails a “focused labeling review” of the intended use. In addition to the three tiers is a group of class I devices that pose little or no risk and were exempted from the premarket notification (510(k)) requirements of the act. 
Under the class and tier systems, approximately 20 percent of the different types of medical devices are exempted from premarket notification. A little over half of all the different types of medical devices are classified as tier 2 devices. Tiers 1 and 3 constitute 14 and 12 percent of the different types of medical devices, respectively. From 1989 through 1991, the median time between the submission of a 510(k) application and FDA’s decision (total elapsed time) was relatively stable at about 80 to 90 days. The next 2 years showed a sharp increase that peaked at 230 days in 1993. Although the median review time showed a decline in 1994 (152 days), it remained higher than that of the initial 3 years. (See figure 1.) Similarly, the mean also indicated a peak in review time in 1993 and a subsequent decline. The mean review time increased from 124 days in 1989 to 269 days in 1993. In 1994, the mean dropped to 166 days; however, this mean will increase as the 13 percent of the applications that remained open are closed. (See table II.1.) Of all the applications submitted to FDA to market new devices during the period under review, a little over 90 percent were for 510(k)s. Between 1989 and 1994, the number of 510(k) applications remained relatively stable, ranging from a high of 7,023 in 1989 to a low of 5,774 in 1991. In 1994, 6,446 applications were submitted. Of the 40,950 510(k) applications submitted during the period under review, approximately 73 percent were determined to be substantially equivalent. (That is, the device is equivalent to a predicate device already on the market and thus is cleared for marketing.) Only 2 percent were found to be nonequivalent, and 6 percent remained open. Other decisions—including applications for which a 510(k) was not required and those that were withdrawn by the applicant—account for the rest. (See appendix I for details on other FDA decision categories.) 
For applications determined to be substantially equivalent, non-FDA time—the amount of time FDA placed the application on hold while waiting for additional information—comprised almost 20 percent of the total elapsed time. (See table II.7.) Figure 2 displays FDA and non-FDA time to determine equivalency for 510(k) applications. The trends in review time differed for original PMAs and PMA supplements. There was no clear trend in review times for original PMA applications using either medians or means since a large proportion of the applications had yet to be completed. The median time between the submission of an application and FDA’s decision (total elapsed time) fluctuated from a low of 414 days in 1989 to a high of 984 days in 1992. Less than 50 percent of the applications submitted in 1994 were completed; thus, the median review time was undetermined. (See figure 3.) Except for 1989, the means were lower than the medians because of the large number of open cases. The percent of applications that remained open increased from 4 percent in 1989 to 81 percent in 1994. The means, then, represent the time to a decision for applications that were less time-consuming. When the open cases are completed, lengthy review times will cause an increase in the means. (See table III.1.) For PMA supplements, the median time ranged from 126 days to 173 days in the first 3 years, then jumped to 288 days in 1992. In 1993 and 1994, the median declined to 242 and 193 days, respectively. (See figure 4.) This trend was reflected in the mean review time that peaked at 336 days in 1992. Although the mean dropped to 162 days in 1994, this is expected to increase because 21 percent of the applications had not been completed at the time of our study. (See table III.7.) Applications for original PMAs made up less than 1 percent of all applications submitted to FDA to market new devices in the period we reviewed. PMA supplements comprised about 8 percent of the applications. 
The number of applications submitted for PMA review declined each year. In 1989, applications for original PMAs numbered 84. By 1994, they were down to 43. Similarly, PMA supplements decreased from 804 in 1989 to 372 in 1994. (See tables III.1 and III.7.) Of the 401 applications submitted for original PMAs, 33 percent were approved, 26 percent were withdrawn, and nearly a third remained open. The remainder (about 9 percent) fell into a miscellaneous category. (See appendix I.) A much higher percentage of the 3,640 PMA supplements (78 percent) were approved in this same period, and fewer PMA supplements were withdrawn (12 percent). About 9 percent of the applications remained open, and 2 percent fell into the miscellaneous category. For PMA reviews that resulted in approval, non-FDA time constituted approximately one-fourth of the total elapsed time for original PMAs and about one-third for PMA supplements. The mean FDA time for original PMAs ranged from 155 days in 1994 to 591 days in 1992. Non-FDA times for those years were 34 days in 1994 and 165 days in 1992. For PMA supplements, FDA review times were lower, ranging from a low of 105 days (1990) to a high of 202 days (1992). Non-FDA times for those years were 59 days (1990) and 98 days (1992), respectively. (See table III.13.) Figures 5 and 6 display the proportion of FDA and non-FDA time for the subset of PMAs that were approved. For IDEs, the mean review time between submission and FDA action was 30 days, and it has not changed substantially over time. Unlike 510(k)s and PMAs, IDEs are “deemed approved” if FDA does not act within 30 days. Of the 1,478 original IDE submissions from fiscal year 1989 to 1995, 33 percent were initially approved (488) and 62 percent were denied or withdrawn (909). The number of IDE submissions each year ranged from a high of 264 in 1990 to a low of 171 in 1994. (See table IV.1.)
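The "deemed approved" rule for IDEs can be expressed as a small decision function. This is a simplified sketch with hypothetical dates and status labels, not FDA's actual processing logic:

```python
from datetime import date, timedelta

REVIEW_WINDOW_DAYS = 30  # window for FDA action on an IDE submission

def ide_status(submitted, fda_action, as_of):
    """Return a simplified status for an IDE submission.
    `fda_action` is the date of FDA's decision, or None if FDA has not acted.
    """
    deadline = submitted + timedelta(days=REVIEW_WINDOW_DAYS)
    if fda_action is not None and fda_action <= deadline:
        return "decided by FDA"
    if as_of > deadline:
        return "deemed approved"  # no timely FDA action: the study may proceed
    return "pending"

# FDA acts on day 19: a normal decision.
print(ide_status(date(1994, 5, 1), date(1994, 5, 20), date(1994, 6, 15)))
# No FDA action and the 30-day window has passed: deemed approved.
print(ide_status(date(1994, 5, 1), None, date(1994, 6, 15)))
```

The deemed-approval mechanism is why mean IDE review time hovers at the 30-day statutory limit rather than drifting upward the way 510(k) and PMA times did.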
Our objective was to address the following general question: How has the time that 510(k), PMA, and IDE applications spend under FDA review changed between fiscal year 1989 and the present? To answer that question, we also looked at a subset of applications that were approved, distinguishing the portion of time spent in FDA’s review process (FDA time) from that spent waiting for additional information (non-FDA time). For applications that were approved, we present the average number of amendments that were subsequently added to the initial application as well as the average number of times FDA requested additional information from the applicant. (Both of these activities affect FDA’s review time.) We used both the median and mean to characterize review time. We use the median for two reasons. First, a large proportion of the applications have yet to be completed. Since the median is the midpoint when all review times are arranged in ascending order, its value can be determined even when some applications requiring lengthy review remain open. In contrast, the mean can only be determined from completed applications. (In this case, these are the applications that had been completed by May 18, 1995.) In addition, the mean will increase as applications with lengthy reviews are completed. To illustrate, for applications submitted in 1993, the mean time to a decision was 269 days for 510(k) applications that have been closed. However, 3 percent of the applications have yet to be decided. If these lengthy reviews were arbitrarily closed at May 18, 1995 (the cutoff date for our data collection), the mean would increase to 285 days. In contrast, the median review time (230 days) would remain the same regardless of when these open applications were completed.
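This open-case behavior can be checked with invented numbers. The sketch below uses hypothetical review times (not FDA data) and assumes, as in the report's situation, that fewer than half the cases are open and that each open case has already been pending longer than any closed case:

```python
# Hypothetical review times in days for seven applications; two are still
# open at the cutoff and assumed already pending longer than any closed case.
closed = [60, 75, 90, 110, 230]
n_open = 2
n_total = len(closed) + n_open

mean_closed = sum(closed) / len(closed)          # the mean can use closed cases only

# Median of ALL cases: with fewer than half open, and each open case bound
# to finish with a longer time than any closed case, the midpoint is fixed
# no matter when (or at what value) the open cases eventually close.
ordered = sorted(closed) + [float("inf")] * n_open
median_all = ordered[n_total // 2]

# Suppose the open cases eventually close after lengthy reviews:
completed = sorted(closed + [400, 900])
mean_after = sum(completed) / n_total            # the mean rises...
median_after = completed[n_total // 2]           # ...but the median does not move

print(mean_closed, median_all, mean_after, median_after)
```

Once the two open cases close, the mean jumps substantially while the median is unchanged, which is the reason the report leans on the median when many applications remain open.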
The second reason for using the median is that the distributions of review time for 510(k), original PMA, and PMA supplement applications are not symmetric; that is, they do not have about the same number of applications requiring short reviews as lengthy reviews. The median is less sensitive to extreme values than the mean. As a result, the review time of a single application requiring an extremely lengthy review would have considerably more effect on the mean than the median. Figure 7 shows the distribution for 510(k)s submitted in 1993, the most recent year in which at least 95 percent of all 510(k) applications had been completed. The distribution is skewed, with a mean review time of 269 days and a median review time of 222 days for all completed applications. To provide additional information, we report on the mean review times as well as the median. The discrepancy between the two measures gives some indication of the distribution of review time. When the mean is larger than the median, as in the case of the 510(k)s above, it indicates that a group of applications required lengthy reviews. Another reason we report the means is that, until recently, FDA reported review time in terms of means. In appendix I, we provide the categories we used to designate the different FDA decisions and how our categories correspond to those used by FDA. Detailed responses to our study objective are found in tabular form in appendixes II, III, and IV for 510(k)s, PMAs, and IDEs, respectively. We report our findings according to the fiscal year in which the applications were submitted to FDA. By contrast, FDA commonly reports review time according to the fiscal year in which the review was completed. Although both approaches measure review time, their resultant statistics can vary substantially.
For example, several complex applications involving lengthy 2-year reviews submitted in 1989 would increase the average review time for fiscal year 1989 in our statistics and for fiscal year 1991 in FDA’s statistics. Consequently, the trend for review time based on date-of-submission cohorts can differ from the trend based on date-of-decision cohorts. (See appendix V for a comparison of mean review time based on the two methods.) The two methods provide different information and are useful for different purposes. Using the date-of-decision cohort is useful when examining productivity and the management of resources. This method takes into consideration the actual number of applications reviewed in a given year including all backlogs from previous years. Alternatively, using the date-of-submission cohort is useful when examining the impact of a change in FDA review policy, which quite often only affects those applications submitted after its implementation. To minimize the effect of different policies on review time within a cohort, we used the date-of-submission method. We conducted our work in accordance with generally accepted government auditing standards between May and June 1995. Officials from FDA reviewed a draft of this report and provided written comments, which are reproduced in appendix VI. Their technical comments, which have been incorporated into the text where appropriate, have not been reprinted in the appendix. FDA believed that the report misrepresented the current state of the program as the draft did not acknowledge recent changes in the review process. FDA officials suggested a number of explanations for the apparent trends in the data we reported (see appendix VI). Although recent initiatives to improve the review process provide a context in which to explain the data, they were outside the scope of our work. We were not able to verify the effect these changes have actually had on review time. 
To the extent that these changes did affect review time, they are reflected in the review times as presented and are likely to be reflected in future review times. The agency also believed that the draft did not reflect the recent improvements in review time. We provided additional measures of review time in order to present the review times for the more recent years. We have also included more information on the difference between the date-of-submission and date-of-decision cohorts, and we have expanded our methodological discussion in response to points FDA made on the clarity of our presentation. (Additional responses to the agency comments are included in appendix VI.) As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its date of issue. We will then send copies to other interested congressional committees, the Secretary of the Department of Health and Human Services, and the Commissioner of Food and Drugs. Copies will also be made available to others upon request. If you or your staff have any questions about this report, please call me at (202) 512-3092. The major contributors to this report are listed in appendix VII. FDA uses different categories to specify the type of decision for 510(k)s, PMAs, and IDEs. For our analysis, we collapsed the multiple decision codes into several categories. The correspondence between our categories and FDA’s is shown in table I.1. The following tables present the data for premarket notifications, or 510(k)s, for fiscal years 1989 through May 18, 1995. The first set of tables (tables II.1 through II.6) presents the time to a decision—from the date the application is submitted to the date a decision is rendered. We first present a summary table on the time to a decision by fiscal year (table II.1).
The grand total for the number of applications includes open cases—that is, applications for which there had not been any decision made as of May 18, 1995. As the distribution for time to a decision is not symmetric (see figure 1 in the letter), we present the means and percentiles to characterize the distribution. (The means and percentiles do not include open cases.) The second table is a summary of the time to a decision by class, tier, medical specialty of the device, and reviewing division (table II.2). The next four tables (II.3 through II.6) provide the details for these summary tables. The totals in these tables include only applications for which a decision has been rendered. The class, tier, and medical specialty of some of the devices have yet to be determined and are designated with N/A. Medical specialties other than general hospital or general and plastic surgery include anesthesiology; cardiovascular; clinical chemistry; dental; ear, nose, and throat; gastroenterology/urology; hematology; immunology; microbiology; neurology; obstetrics/gynecology; ophthalmic; orthopedic; pathology; physical medicine; radiology; and clinical toxicology. The five reviewing divisions in FDA’s Center for Devices and Radiological Health are Division of Clinical Laboratory Devices (DCLD); Division of Cardiovascular, Respiratory and Neurological Devices (DCRND); Division of General and Restorative Devices (DGRD); Division of Ophthalmic Devices (DOD); and Division of Reproductive, Abdominal, Ear, Nose and Throat, and Radiological Devices (DRAER). The second set of tables (tables II.7 through II.12) presents the mean time to determine equivalency. We provide the means for total FDA time, non-FDA time, and total elapsed time. FDA time is the total amount of time the application was under FDA review including queue time—the time to equivalency without resetting the FDA review clock. 
The total elapsed time, the duration between the submission of the application and FDA’s decision, equals the sum of the FDA and non-FDA time. We deleted cases that had missing values or apparent data entry errors for the values relevant to calculating FDA and non-FDA time. Therefore, the total number of applications determined to be equivalent in this group of tables differs from that in the first set. Again, we have two summary tables, followed by four tables providing time to determine equivalency by class, tier, medical specialty, and reviewing division (tables II.7 through II.12). In reviewing a PMA application, FDA conducts an initial review to determine whether the application contains sufficient information to make a determination on its safety and effectiveness. A filing decision is made—filed, filed with deficiencies specified, or not filed—based on the adequacy of the information submitted. The manufacturer is notified of the status of the application at this time, especially since deficiencies need to be addressed. As part of the substantive review, a small proportion of PMA applications are also reviewed by an advisory panel. These panels include clinical scientists in specific medical specialties and representatives from both industry and consumer groups. The advisory panels review the applications and provide recommendations to the agency to either approve, deny, or conditionally approve them. FDA then makes a final determination on the application. To examine in greater detail those cases where the intermediate milestones were applicable, we calculated the average duration between the various dates—submission, filing, panel decision, and final decision. The number of applications differs for each of the milestones as not all have filing or panel dates. (See figure III.1.) The following tables present information on review time for PMA applications for fiscal years 1989 through 1995. Original PMA applications are distinguished from PMA supplements. 
Some observations were deleted from our data because of apparent data entry errors. The first set of tables (tables III.1 through III.6) presents the time to a decision for original PMAs—from the date the application is submitted to the date a decision is rendered. The second set of tables (tables III.7 through III.12) provides similar information, in the same format, for PMA supplements. We first present a summary table on the time to a decision by fiscal year (tables III.1 and III.7). Again, the grand total for the number of applications includes the number of open cases—that is, applications for which there had not been any decision made as of May 18, 1995. As with 510(k)s, the distributions of time to a decision for original PMAs and PMA supplements are not symmetric. Thus we report means and percentiles to characterize these distributions. (These means and percentiles do not include open cases.) Figure III.2 presents the distribution for original PMAs submitted in 1989, the most recent year for which at least 95 percent of the applications had been completed. Figure III.3 presents the distribution for PMA supplements submitted in 1991, the most recent year with at least a 95-percent completion rate. The second table is a summary of the time to a decision by class, tier, relevant medical specialty of the device, and reviewing division (tables III.2 and III.8). The two summary tables are followed by four tables (tables III.3 through III.6 and III.9 through III.12) presenting the details by class, tier, medical specialty, and reviewing division. The totals in these tables include only applications for which a decision has been rendered. The class, tier, and medical specialty of some of the devices have yet to be determined and are designated with N/A.
Medical specialties other than cardiovascular or ophthalmic include anesthesiology; clinical chemistry; dental; ear, nose, and throat; gastroenterology/urology; general and plastic surgery; general hospital; hematology; immunology; microbiology; neurology; obstetrics/gynecology; orthopedic; pathology; physical medicine; radiology; and clinical toxicology. The third set of tables provides information on the time to an approval, for both original PMAs and PMA supplements (tables III.13 through III.18). Four different measures of duration are provided—total FDA time, non-FDA time, total elapsed time, and FDA review time. Total FDA time is the amount of time the application is under FDA’s review process. Non-FDA time is the time the FDA clock is suspended waiting for additional information from the applicant. The total elapsed time, the duration from the date the application is submitted to the date of FDA’s decision, equals the sum of total FDA and non-FDA time. FDA review time is FDA time for the last cycle—excluding any time accrued before the latest resetting of the FDA clock. Again, we first provide a summary table for time to an approval by fiscal year (table III.13). In this table, we also provide the number of amendments or the number of times additional information was added to the initial submission. Not all amendments were for information requested by FDA, as can be seen from the number of requests for information. Table III.13 is followed by a summary by class, tier, medical specialty, and reviewing division (table III.14). Tables III.15 through III.18 provide the details for these two summary tables. The following tables present the average days to a decision for investigational device exemptions. The first table presents the averages for the years from October 1, 1988, through May 18, 1995. This is followed by summaries by class, tier, medical specialty, and then reviewing division.
The next four tables (tables IV.3 through IV.6) provide the details for these summary tables. We reported our findings according to the fiscal year in which the applications were submitted to FDA (date-of-submission cohort). By contrast, FDA commonly reports review time according to the fiscal year in which the review was completed (date-of-decision cohort). This led to discrepancies between our results and those reported by FDA. The following table illustrates the differences in calculating total elapsed time by the year that the application was submitted and the year that a decision was rendered. Comparisons are provided for 510(k)s, PMA supplements, original PMAs, and IDEs. Our dataset did not include applications submitted before October 1, 1988. Consequently, the results presented in the following table understated the number of cases, as well as the elapsed time, when calculated by the year of decision. That is, an application submitted in fiscal year 1988 and completed in 1989 would not have been in our dataset. The following are GAO’s comments on the August 2, 1995, letter from FDA. 1. The purpose of our review was to provide to FDA’s congressional oversight committee descriptive statistics on review time for medical device submissions between 1989 and May 1995. It was not to perform an audit of whether FDA was in compliance with statutory review time, nor to examine how changes in FDA management practices may have resulted in shortening (or lengthening) review times. FDA officials suggested that a number of process changes and other factors may have contributed to the trends we reported—for example, the increased complexity of the typical submission that resulted from the agency’s exemption from review of certain low-risk devices. We are not able to verify the effect changes have actually had on review time, and it may be that it is still too early for their impact to be definitively assessed. 2. 
In discussing our methodology in the draft report, we noted the differences between FDA’s typical method of reporting review time according to the year in which action on applications is finalized, as opposed to our method of assigning applications to the year in which they were submitted. We also included an appendix that compares the results of the two different approaches. (See appendix V.) We agree with FDA that it is important for the reader to understand these differences and have further expanded our discussion of methodology to emphasize this point. (See p. 14.) 3. We agree with FDA that our report “deals only with calculations of averages and percentiles”—that is, with means, medians (or 50th percentile), as well as the 5th and 95th percentiles. However, FDA’s suggested additions do not extend beyond such descriptive statistics. We also agree that mean review times in the presence of numerous open cases may not be meaningful. For this reason, we have included open cases in our tables that report review time, but we have excluded them from the calculation of means. FDA suggests that we include open cases in our calculation of medians. We have adopted this suggestion and presented our discussion of trends in terms of the median review time for all cases. It should be noted, however, that including open cases increases our estimate of review time. (For example, including open cases raises the calculation of 510(k) median review time from the 126 days we reported for 1994 to 152 days.) Figure VI.1 depicts the relationship among the three measures of elapsed time for 510(k) submissions: the mean of closed cases, the median of closed cases, and the median of all cases. The two measures of closed cases reveal roughly parallel trends, with median review time averaging some 45 days fewer than mean review time. The two estimates of median review time are nearly identical from 1989 through 1990 since there are very few cases from that period that remain open. 
The divergence between the two medians increases as the number of open cases grows in recent years, until 1995, when the median including open cases is larger than the mean of closed cases. 4. While we are unable to reproduce the calculations performed by FDA, we agree in general with the trends FDA indicated. Specifically, our calculations, as presented in our draft report tables II.7 and following, showed a decrease from 1993 to 1994 in FDA review time for finding a 510(k) submission substantially equivalent. By our calculation, this declined from a mean of 173 days in 1993 to 100 days in 1994. The proportion of 510(k) applications reaching initial determination within 90 days of submission increased from 15.8 percent in 1993 to 32 percent in 1994 and to 57.9 percent between October 1, 1994, and May 18, 1995. Clearly, since 1993, more 510(k) cases have been determined within 90 days, and the backlog of undetermined cases has been reduced. Because a review of the nature and complexity of the cases still open was beyond the scope of this study, we cannot predict with certainty whether, when these cases are ultimately determined, average review time for 1995 cases will be shorter than for cases submitted in 1993. 5. FDA time was reported in our draft report tables II.7 through II.12, and findings contrasting the differences between FDA time and non-FDA time were also included. Additional language addressing this distinction has been included in the text of the report. 6. FDA contends that 1989 was an atypical year for 510(k) submissions and therefore a poor benchmark. However, we do not believe that starting our reporting in 1989 introduced any significant bias into our report of the 510(k) workload. Indeed, our draft report concluded that the number of 510(k) submissions had "remained relatively stable" over the 1989-94 period.
If we had extrapolated the data from the first 7-1/2 months of 1995 to a full year, we would have concluded that the current fiscal year would have a substantially lower number of 510(k) submissions (16 to 31 percent lower) than in any of the previous 6 years. 7. The tier classification was created by FDA to manage its review workload; however, it was not our intention to evaluate or in any way assess the use of tiers for such purposes. The tier classification was based on "the potential risk and complexity of the device." Accordingly, both class and tier provide a rough indication of a device's complexity. 8. We agree that our draft report aggregated original PMA submissions and PMA supplements in summarizing its findings. We have now disaggregated PMA statistics throughout. 9. We interpret the figures presented by FDA to represent the mean number of days elapsed between receipt (or filing) of a PMA submission and a given month for cases that have not been decided. We agree with FDA that the average review time for open original PMAs does not appear to have increased substantially since the beginning of calendar 1994 and that the average review time for PMA supplements has decreased since late 1994. A decrease in these averages results from an increasing number of new cases entering the system, the closing out of older cases in the backlog, or both. Since the number of PMAs (originals and supplements) submitted in recent years has declined, the evidence suggests that the drop in average time for pending PMA supplements resulted from eliminating lengthy backlogged cases. 10. As noted earlier, assessing the impact of specific management initiatives is beyond the scope of this report. However, we do agree with FDA that the approval rate for initial IDE submissions doubled between 1994 and 1995; by our calculations, it increased from 25 percent to 54 percent. We have not independently examined the total time to approval for all IDEs. Robert E.
White, Assistant Director; Bertha Dong, Project Manager; Venkareddy Chennareddy, Referencer; Elizabeth Scullin, Communications Analyst

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 6015
Gaithersburg, MD 20884-6015

or visit:

Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO reviewed the Food and Drug Administration's (FDA) review of medical devices, focusing on how FDA review time has changed from fiscal year 1989 to May 18, 1995. GAO found that: (1) FDA review times for medical device applications remained stable from 1989 to 1991, increased sharply in 1992 and 1993, and dropped in 1994; (2) in 1994, the median review time for 510(k) applications was 152 days, which was higher than the median review time during 1989 through 1991; (3) the review time trend for original premarket approval (PMA) applications was unclear because many applications remained open; (4) the median review time for original PMA applications peaked at 984 days in 1992; (5) the review time trend for supplementary PMA applications fluctuated slightly in the first 3 years, peaked in 1992, and declined to 193 days in 1994; (6) in many instances, FDA placed 510(k) applications on hold while waiting for additional information, which comprised almost 20 percent of its total elapsed review time; and (7) the mean review time for investigational device exemptions was 30 days.
IRS and state revenue offices are both charged with responsibility for collecting taxes. More than half of the states have based their income tax systems on the federal tax system, and many taxpayers are common to both. For the most part, this common customer base is dealt with separately by IRS and the state agencies. Given their common roles and customer bases, opportunities exist for collaboration between IRS and state revenue offices. IRS is facing budget reductions and downsizing. Because of decreasing resources, it becomes even more important to identify ways that IRS and the states can cooperate to improve efficiencies and maximize their return on investment. IRS and the states have been involved in cooperative tax administration efforts since the 1920s. By engaging in cooperative efforts, state agencies and the federal government have attempted to achieve greater compliance and efficiency than they could by working separately. Early cooperative efforts involved the sharing of taxpayer income and tax liability information. In 1957, these activities became governed by formal agreements between IRS and the states that specify the types of tax information to be shared. In 1978, IRS fixed responsibility for the exchange of federal and state tax information with the disclosure officers in its regional and district offices. IRS also charged its district directors with responsibility for working personally with state tax agencies to establish and conduct FedState cooperative projects. In 1991, the Office of FedState Relations was established in the National Office to facilitate cooperative tax administration and foster joint projects. IRS originally assigned a senior executive and five staff to this office. IRS chose not to provide full-time field staff to facilitate and foster projects. District directors continued to be responsible for liaison and personal involvement.
Most disclosure officers were assigned responsibility for coordinator and facilitator duties on a part-time basis. As of November 1995, 49 states were participating in the FedState program, and, according to IRS officials, approximately 600 to 700 projects were ongoing or proposed. In recent years, IRS and the Department of the Treasury have drafted and proposed legislation to further the FedState program. In June 1995, the President announced that he would submit to Congress proposed legislation to facilitate additional FedState cooperative efforts to streamline tax administration, such as joint filing and processing of return information. The proposed legislation would allow IRS and state taxing agencies to delegate tax administration powers and compensate one another pursuant to agreements. The most recent version of the legislation was submitted to Congress in March 1996. No action has been taken yet. In 1978 and 1985, we issued reports on the FedState program. The 1978 report to the Joint Committee on Taxation concluded that the program had a low priority within IRS and had no unified direction because responsibility for the program was not fixed. In response, IRS assigned program responsibility to the Office of Disclosure Operations. Both the 1978 and 1985 reports concluded that IRS and the states were not using much of their exchanged data and were not sharing other potentially useful information. In response, IRS established reviews to determine if states needed and used the confidential return information provided by IRS. Our review of the FedState program arose from a December 9, 1994, hearing on compliance costs and taxpayer burden held by the Subcommittee on Oversight of the House Committee on Ways and Means. At that hearing, the Subcommittee expressed interest in how the states and the federal government could work together to reduce taxpayer burden. 
Our objectives for this report were to (1) identify the potential benefits of FedState cooperative efforts; (2) determine what, if any, conditions may impede the success of the program; and (3) determine what, if any, FedState program concerns the states have with IRS’ planned reorganization. To achieve our interrelated objectives, we interviewed IRS officials responsible for the FedState program in IRS’ national and southeast regional offices, as well as its Albany, NY; Atlanta, GA; Baltimore, MD; Columbia, SC; Phoenix, AZ; and St. Paul, MN, district offices. We interviewed state revenue department officials knowledgeable of FedState activities in Arizona, Georgia, Maryland, Minnesota, New York, and South Carolina. These locations were selected on the basis of their proximity to our offices or because IRS officials said they were characterized by a high level of FedState activity. We also (1) reviewed FedState documents, such as the 1994 FedState Cooperative Ventures Catalog and the FedState Concept of Operations Report from IRS, and program reports from state department of revenue offices we visited; (2) collected detailed information, such as project descriptions and any data on costs and benefits, on FedState projects in the states we visited; and (3) reviewed various legislative proposals related to FedState activities. We interviewed Federation of Tax Administrators (FTA) officials knowledgeable of the FedState program. FTA represents state tax administrators and is actively involved in promoting effective working relationships among IRS and state tax agencies. We held a group discussion with and surveyed state tax administrators on their views regarding cooperative FedState efforts at the June 1995 FTA conference in Cleveland, OH. Participation in the discussion and survey was voluntary. Our work was done between January 1995 and April 1996 in accordance with generally accepted government auditing standards. 
We provided a draft of this report to the Commissioner of Internal Revenue and the Executive Director, FTA, for their comments. We met with FTA on August 15, 1996, and with IRS officials on September 4, 1996, to discuss this report. Their comments are summarized and evaluated beginning on page 13 and incorporated into this report where appropriate. Due to the similarities in the functions of IRS and state revenue departments, numerous opportunities exist to improve tax administration efficiencies through FedState cooperative efforts. For example, taxpayer data tape exchanges can improve compliance and enforcement by enabling IRS and the states to identify noncompliant taxpayers and take appropriate action. Similarly, joint federal and state taxpayer education and assistance efforts can reduce taxpayer burden by making it easier for taxpayers to obtain information about tax requirements. More sophisticated technology provides additional ways for IRS and the states to reduce taxpayer burden. For example, in one district the state and IRS can automatically transfer telephone taxpayer assistance calls to each other to respond to taxpayers more quickly and efficiently. Data limitations prevented us from ascertaining whether the existing mix of FedState projects has helped IRS toward meeting its goals of improving compliance, increasing efficiency, and reducing taxpayer burden. A project designed to increase compliance may also have the positive effect of reducing burden or increasing efficiency. An official in IRS’ Office of FedState Relations told us that four of the most common efforts have been taxpayer data tape exchanges, federal/state joint electronic filing programs, state refund offset programs, and the joint dyed diesel fuel program. Taxpayer data tape exchanges, which began in the 1960s, constitute one of the oldest FedState cooperative efforts. According to IRS, currently almost all states participate in tape and information exchanges. 
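The compliance use of these exchanges amounts to matching taxpayer records across the federal and state systems. A minimal sketch of the idea follows; all taxpayer IDs, income figures, and the 80-percent comparison threshold are hypothetical, not drawn from actual IRS or state practice.

```python
# Hypothetical federal and state income records keyed by taxpayer ID.
federal = {
    "111": 50_000,   # adjusted gross income reported to IRS
    "222": 82_000,
    "333": 40_000,
}
state = {
    "111": 50_000,   # income reported on the state return
    "333": 25_000,   # well below the federal figure
}

# Filed federally but not with the state: potential nonfilers.
nonfilers = sorted(set(federal) - set(state))

# Filed both, but state income falls well below federal AGI: potential
# underreporters. The 0.8 threshold is an arbitrary illustrative cutoff.
underreporters = sorted(
    tid for tid in set(federal) & set(state) if state[tid] < 0.8 * federal[tid]
)

print(nonfilers, underreporters)  # ['222'] ['333']
```

Each flagged ID would then be a lead for billing or audit follow-up by the receiving agency.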
By exchanging tapes that include taxpayer return data, IRS and the states have been able to identify taxpayers who failed to file returns or who filed returns but owed more taxes. Although comprehensive data on the revenues collected through this effort have not been systematically tracked, the data collected by some states and IRS districts demonstrate that computer tape exchanges have increased revenues. For example, one state billed taxpayers for $37.5 million in 1990 state income taxes on the basis of data in IRS tapes that showed IRS adjustments to taxpayers' federal taxes. The state billed those taxpayers who had failed to report and pay additional state income tax due as a result of the federal adjustments. The joint electronic filing effort—which was initiated in 1991 as a limited research test with the South Carolina Tax Commission—is an initiative between IRS and the states to allow taxpayers to simultaneously file state and federal returns electronically. According to IRS, 31 states will participate in the 1996 FedState Electronic Filing program. Electronic returns go to IRS, which is then to send the states their portions of the filing. While no systematic effort has been made to assess the benefits of joint electronic filing, IRS believes that joint filing increases efficiency because it encourages electronic filing and thus eliminates the costs of processing and storing paper returns. Also, according to IRS, electronic filing reduces administrative costs to both IRS and the states because mathematical errors are detected electronically and transcription errors are eliminated. Finally, IRS said that joint electronic filing reduces taxpayer burden by enabling taxpayers to submit their state and federal returns in a single electronic transmission. The state refund offset program, also referred to as the State Income Tax Levy Program (SITLP), allows IRS to levy state tax refunds to fulfill federal tax debts.
According to IRS, a levy is more efficient than other collection enforcement actions. The program has been in operation since 1985. According to an IRS official, 31 states participated in SITLP, which in 1995 netted IRS $81.7 million in taxes due. To increase the efficiency of its motor fuels compliance efforts, IRS participates in the dyed diesel fuel program, established in 1994, which involves sampling fuel in storage and vehicles to ensure that red-dyed fuel, which is tax free, is not used as taxable fuel on highways. According to IRS, 15 states have contracted with IRS to sample and test diesel fuel in vehicles used on highways. IRS believes the program has increased compliance. IRS' preliminary data indicate that diesel fuel excise tax collections increased by about $1.2 billion, or 22.5 percent, from calendar year 1993 to 1994. In addition to these four common efforts, numerous efforts have been initiated at the IRS district and state levels. For example, IRS and one state revenue department conducted a joint video conference seminar linked to 19 locations statewide to inform tax practitioners of changes in the tax laws. IRS and the state hoped this combined video conference would (1) improve taxpayer service by informing a greater number of practitioners in more remote locations and (2) increase efficiency by reducing the amount of time and money IRS and state employees spent traveling to such seminars. Another IRS district and state tax agency targeted a localized group of nonfiling and underreporting self-employed taxpayers. To increase efficiency, IRS and the state tax agency each audited a segment of such taxpayers' returns and assessed taxes, shared audit results, and based assessments on each other's audits. According to an IRS official, this effort yielded approximately $5 million in state and federal taxes and added 400 taxpayers to the filing rolls. Other examples of joint efforts are included in appendix II.
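The diesel collection figures reported by IRS imply a 1993 base that simple arithmetic can recover. This back-of-the-envelope check is ours, not IRS's, and uses only the two reported numbers (the roughly $1.2 billion increase and the 22.5 percent rate).

```python
increase = 1.2e9       # reported increase in diesel excise tax collections ($)
pct = 0.225            # reported 22.5 percent increase, calendar 1993 to 1994

base_1993 = increase / pct          # implied calendar year 1993 collections
total_1994 = base_1993 + increase   # implied calendar year 1994 collections

print(round(base_1993 / 1e9, 2), round(total_1994 / 1e9, 2))  # 5.33 6.53 (billions)
```

That is, the two reported figures together imply collections of roughly $5.3 billion in 1993 and $6.5 billion in 1994.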
While the FedState program offers opportunities for increasing taxpayer compliance, improving taxpayer service, reducing the burden on the taxpayer, and increasing the efficiency of tax administration, IRS has not developed an overall strategy to guide FedState projects to better assure the most efficient use of IRS resources. The Office of FedState Relations was established to foster and facilitate FedState cooperative efforts. The Director of the Office of FedState Relations has been responsible for planning and directing FedState efforts that involve the integration and coordination of IRS resources, and reviewing and evaluating FedState activities to ensure optimum results. However, IRS has not developed an overall strategy for the Office of FedState Relations to fulfill its purpose, to link FedState efforts with IRS’ overall agency goals and objectives, or to establish an evaluation mechanism for the program. Strategic planning at the program level offers a framework for tying agency goals and objectives with program-level actions. This helps to ensure that budget trade-offs at the program level are directly tied to the agency’s overall strategy. In the absence of such planning efforts, the agency will lack assurance that the individual programs in which it participates represent the best choices for achieving its overall goals and objectives. Currently, FedState efforts vary from state to state. While these variances generally reflect differences in state and regional operating agendas, they also underscore a weakness in IRS’ FedState efforts—namely, the lack of a centralized strategic planning function. Currently, no unit within IRS is responsible for providing a strategic framework for the projects. 
While IRS’ Office of FedState Relations is responsible for facilitating cooperative projects between IRS and the states, the office offers little guidance to help local units choose the most productive projects, nor does it help local units to determine whether their project efforts are helping IRS to achieve its strategic goals. For example, such guidance to local units might identify FedState efforts that are most beneficial to both IRS and state offices, efforts that link strategically to IRS’ main goals, and efforts that help ensure that IRS resources are most efficiently used. Absent such guidance, local units may be missing out on projects that offer greater benefits or operating projects that are not worthwhile. We found that most decisionmaking about FedState programs occurred at the district level, where IRS district and state officials worked together to identify and initiate projects. The local level was a natural decisionmaking location since participation by the district or state was voluntary and depended on the project. State and IRS officials told us that it is important to maintain the local focus of the efforts because of the variation in needs, resources, and taxpayer issues. According to the IRS district and state officials we interviewed, the level of FedState activity that existed between district IRS offices and state tax agencies was highly dependent on the working relationship between their respective managers and the top managers’ commitment to the FedState program. To assist in developing this working relationship, it seems to us that local districts and state agencies could benefit from guidance to help ensure that they are pursuing the FedState efforts that would benefit them the most. The Office of FedState Relations views its role as an advocate for the program and as a clearinghouse for project ideas. 
In addition, IRS officials said the Office of FedState Relations worked to develop legislation designed to make it easier for state revenue offices and IRS to engage in joint or reciprocal tax administration functions such as filing of returns and processing of returns and return information. The most recent version of the legislation was submitted to Congress in March 1996, and no action has been taken since. The proposed legislation would authorize IRS and the states to enter into tax agreements, delegate tax administration responsibilities to one another, and compensate one another for activities performed. As a clearinghouse, the Office of FedState Relations provided information to districts on existing and proposed projects, primarily through a catalog that included descriptions of FedState projects provided by the IRS districts themselves. According to IRS, the catalog was not intended to be comprehensive and did not include information on such things as status, costs, and results. IRS has not developed a strategic framework for achieving FedState's purpose of facilitating and fostering cooperative efforts between IRS and the states. Without a strategic plan, IRS cannot be assured that FedState resources are being focused on those projects that will contribute most to IRS' mission. Nor has the office set performance goals to guide cooperative efforts or determine how well its programs are doing. Setting performance goals is an integral part of managing for results and is a current organizational emphasis in IRS. IRS also has not provided guidance as to what types of FedState efforts have the greatest potential to further IRS' mission. The Office of FedState Relations has undergone several organizational and staff changes. As a result, the office has not had the benefit of stable and continuous support and direction in terms of resources and staffing.
In the past 2 years, at least six different individuals have held the position of FedState director or acting director and the organizational location of the office has changed twice. According to IRS officials, the size of the staff has fluctuated between 5 and 21 people. Further, the director position has been downgraded from a senior executive position to a GS-15 position. According to an official in the Office of FedState Relations, the current staff comprises 19 individuals, most at the GS-12 level or higher. Four of these staff persons were transferred to the Office of FedState Relations because their former offices were reorganized or their positions were abolished. IRS officials said that further staffing changes may take place. Neither IRS nor the states have systematically monitored or assessed the results of individual FedState projects. With performance-based data, IRS national and district offices could make more informed decisions on resource allocations and program priorities. Such data might also provide support for IRS’ national office to encourage broader participation by IRS district and state revenue offices. Currently, IRS does not have the project information needed to ensure that the FedState program is managed in a way that maximizes resource investments. In 1994, IRS compiled a FedState catalog of projects that listed more than 280 proposed or actual FedState efforts. FedState officials told us that this listing was not comprehensive. Further, the FedState office generally does not have information on the status of these projects, such as project implementation dates, the resources required to operate the projects, or project benefits. Quantitative, results-focused data have been collected for some FedState projects. Of the 126 projects we reviewed in 6 districts, data to monitor or assess the projects were collected on 31, or 25 percent. Further, none of the 126 projects we reviewed was evaluated in terms of total project costs. 
Of the 31 measured projects, few provided measures that linked project outcomes to IRS’ main goals of increasing compliance, reducing burden, and improving tax administration efficiency. The most common measure was the amount of additional revenue generated by the projects. For the remaining 95 projects, success was measured intuitively or projects were just assumed to provide benefits. IRS has recognized the need to evaluate the results of FedState projects. However, the results of these efforts have been limited. For example, in 1994, a former Director of the Office of FedState Relations said the office planned to create an information-sharing cost model to show the benefits of the FedState program and generate greater interest in FedState projects among the states. However, this model has not been created. According to the current Director of the Office of FedState Relations, the project was terminated due to a lack of resources. In another effort to evaluate FedState projects, in 1994 the Office of FedState Relations instituted a best-practices approach that encouraged local offices to submit information on their most successful FedState projects. The office developed guidance for local offices to use in describing projects, resources required, and results achieved. Thus far, only two projects have been selected as best practices, according to IRS officials. IRS has sent descriptions of the projects, along with implementation guidelines, to its local offices nationwide in the hope that they will be widely adopted. According to IRS officials, the Office of FedState Relations also planned to work with field FedState staff to complete plans by November 1995 to measure the benefits of selected FedState projects. 
According to an IRS official, few measurement plans have been submitted because FedState field staff were overwhelmed by the demands of measuring projects, coordinating ongoing FedState projects, and handling staffing changes and duties related to IRS’ reorganization. IRS did not provide more specific details on the nature of the issues and the impact of staffing and organizational changes on IRS’ ability to measure program results. In addition, IRS’ Western Region Internal Audit group reviewed federal and state information sharing in the Western Region. In May 1994, it reported that district management could not accurately identify and track the costs or accomplishments of FedState activities and that current systems did not capture this type of data. The review also found that without accurate tracking techniques, the districts could not address the effectiveness of FedState projects in reducing taxpayer burden, increasing compliance, and improving quality. In response, the Western Region’s Chief Compliance Officer created a working group to develop a cost/benefit model to measure the success of FedState projects. They later rolled this project into a National Office Research and Analysis plan. The FedState program’s ability to contribute significantly to IRS’ strategic objectives relies considerably on the participation of IRS districts and states. Due to the voluntary nature of the program, the quality of the relationships among the states and IRS district offices is a critical component of the decision to initiate projects. However, because of IRS’ latest reorganization, some states have voiced concerns about the possible deterioration of FedState relationships that have developed over the years. In May 1995, IRS announced a planned reorganization of its field office structure to reduce the number of IRS district offices from 63 to 33 and the number of regions from 7 to 4 by the end of fiscal year 1996. 
Before the reorganization, each state had at least one district office. Along with a district director, most district offices had part-time FedState coordinators who acted as liaisons to the states. With the reduction in the number of districts, IRS plans to put the area covered by the districts to be eliminated under consolidated management of another district. IRS staff is to remain in locations that were formerly district offices; however, the district director and other management positions are to be eliminated. In our discussions with state officials, many expressed concern about the effect that reorganization would have on their relationships with IRS. To help better understand these concerns, we held a joint meeting with representatives from nine state tax agencies. Many participants told us that they placed a high premium on the personal commitment of top managers at IRS district offices. They also said that they viewed the good lines of communications that they had developed through ongoing personal contacts and close working relationships with their district IRS counterparts as being important to the success of FedState activities. The participants said that the elimination of district offices in some states may impede FedState cooperation because (1) there may be no IRS counterparts for state officials in those states that have lost IRS district offices and (2) the geographical distance between state offices and some district directors may tend to discourage the development of a close working relationship. In essence, these participants were concerned about the continuation of ongoing FedState projects and the prospect of future projects. In 1995, the IRS Transition Executive, responsible for overseeing IRS’ reorganization, produced a transition plan that, according to officials from the Office of FedState Relations, will be implemented. 
The plan addresses the states' concerns by recommending to regional IRS commissioners that a full-time FedState coordinator and a full-time disclosure officer be established in each of the continuing district and regional offices. In the past, the district FedState coordinator was not a full-time position; rather, FedState activities were typically a collateral duty of the district disclosure officer. Some IRS officials had expressed concern about disclosure officers being given the role of coordinator, since their primary responsibility is to safeguard data, not to look for ways to share it. According to IRS officials, the highest-level official remaining in each district office scheduled to be closed will be designated FedState liaison as a collateral duty. Because the reorganization is so recent, it is too early to assess whether the plan will address the states' concerns. FedState cooperative efforts provide IRS and the states with opportunities to increase taxpayer compliance, improve taxpayer service, reduce taxpayer burden, and improve the efficiency of tax administration activities. However, IRS has not provided the strategic framework, guidance, and performance goals for the FedState program that would enable it to take fuller advantage of these opportunities. Specifically, IRS' Office of FedState Relations has not provided guidance to local IRS districts and states, and the level and types of efforts undertaken appear to rely primarily on the commitment of IRS district management and the state. It is important to maintain the local focus of the efforts because of the variation in needs, resources, and taxpayer issues. At the same time, data that identify best practices would better enable IRS to promote the practices' adoption on a wider scale. Further, IRS has not developed performance goals for the FedState program and has not collected data on most programs to monitor or assess program progress and results.
Consequently, IRS national and district offices do not have the information needed to manage and assess the FedState program as a whole and to make informed decisions about individual FedState projects. As a result, IRS may be missing opportunities to target program efforts and maximize potential program benefits. Finally, some state tax officials are concerned that IRS' reorganization of its district offices may impede or even end the long-standing relationships with IRS district officials that have made cooperative FedState projects possible. It is too early to determine what impact the reorganization will have on the program. To enhance opportunities for increased benefits from the FedState program, we recommend that you (1) develop and monitor, in conjunction with the states, implementation of a strategic framework that links FedState project objectives to IRS and state mission objectives and (2) establish performance goals and ways to monitor and assess program results. We requested comments on a draft of this report from you or your designated representatives. Responsible IRS officials, including the Director, Governmental Liaison and Disclosure, and the Director, Office of FedState Relations, provided comments and supplementary documents in a September 4, 1996, meeting and additional comments dated September 27, 1996. We have incorporated modifications in this report in response to their comments where appropriate. FedState officials emphasized that the conditions identified in our report related to the way the program operated before they took charge. They are in the process of making changes they think will improve the program, and they said our concerns would be addressed in that process. In response to our recommendation to develop a strategic framework, Office of FedState Relations officials said they believed they had already taken important steps toward a strategic plan, in particular by establishing FedState plans and procedures in spring 1996.
By definition, they said, the program focuses on the identification, exploration, and implementation of innovative solutions to mutual challenges at the local level. Further, they commented that while they recognized the importance of strategic planning at the national level, IRS will continue to look to its executives to leverage these opportunities with their state counterparts at the local level. IRS officials said they have established plans and procedures that will link FedState project objectives to IRS and state mission objectives. For example, they told us they established the National FedState Steering Committee. Among other responsibilities, the Committee has been developing FedState policies and procedures to ensure that specific FedState goals are consistent with IRS goals. The Committee developed FedState project guidelines, which were forwarded to IRS regional offices in August 1996. This guidance is responsive to our recommendation and should help IRS improve its program. The Office of FedState Relations also has been developing a "FedState Program Letter" for fiscal year 1997. According to IRS officials, the Program Letter will provide general guidance about the FedState program, its objectives, current priorities, and other information. FedState officials said the Program Letter will outline long-range objectives as well as set priorities for fiscal year 1997. Further, they commented that they have stabilized the management team and have filled director positions with permanent, top-level managers, which should help to overcome concerns about the instability of the Office. We believe that IRS has taken important steps toward a strategic framework, but it is too early to assess the effectiveness of these steps because they were recently implemented or have not been finalized. IRS officials also agreed with our recommendation to establish performance goals and ways to monitor and assess program results.
They said steps to improve in these areas have already been taken. For example, the Office of FedState Relations distributed guidance to district and service center FedState coordinators on how to report the results of individual FedState projects. The guidance requests that coordinators report information quarterly on their FedState projects, including baseline measures for new initiatives and specific results for ongoing projects. Also, in August 1996, the Office of FedState Relations provided FedState coordinators with guidelines on how to develop projects and propose projects that might be replicated nationwide. Among other things, these guidelines request that coordinators specify how project results are to be measured and how the measurements relate to the goals of the project. We believe that, when fully implemented, these steps may provide more of the information IRS needs to manage and assess the program. We are encouraged by the enthusiasm and commitment current IRS officials show for the FedState program. However, during our review, various FedState officials told us about plans or procedures to develop FedState program and project measures, many of which were abandoned or never fully realized. To be successful, IRS' current plans to develop a strategic framework and measures must be fully implemented and supported by the appropriate IRS officials at the national and local levels. In a meeting on August 15, 1996, we obtained comments on a draft of this report from Federation of Tax Administrators (FTA) officials responsible for FedState-related issues, including the Executive Director and Government Affairs Associate. The officials generally agreed with our recommendations. However, they said that the strategic framework must allow enough flexibility for state taxing agencies and local IRS officials to decide which FedState projects they will pursue.
Also, the officials said FTA conducted a study that showed the revenue benefits to the states from IRS' taxpayer data tape exchange program. FTA issued its report on September 25, 1996. The head of a federal agency is required by 31 U.S.C. 720 to submit a written statement on actions taken on these recommendations to the Senate Committee on Governmental Affairs and the House Committee on Government Reform and Oversight not later than 60 days after the date of this report. A written statement must also be sent to the House and Senate Committees on Appropriations with the agency's first request for appropriations made more than 60 days after the date of this report. We are sending copies of this report to interested congressional committees, including the Chairman and Ranking Minority Member of the House Committee on Ways and Means and its Subcommittee on Oversight, the Chairman and Ranking Minority Member of the Senate Finance Committee, the Secretary of the Treasury, and other interested parties. Copies will also be made available to others upon request. Major contributors to this report are listed in appendix III. Please contact me on (202) 512-9110 if you have any questions concerning the report.

[Appendix table: state income tax and degree of conformity to federal income tax, categorized by starting point (adjusted gross income (AGI), federal taxable income (FTI), or only interest and dividends taxed); the state-by-state entries are not reproduced here.]

Appendix table: IRS functional areas and examples of FedState cooperative projects.

Function: The development of materials and the provision of customer service by all functions.
Example: IRS and a state revenue agency opened a "New Business Assistance Center" to inform new business owners of their federal and state tax responsibilities and how to comply.

Function: The receipt and processing of tax returns, payments, and information documents, both paper and electronic.
Example: To prevent erroneous Earned Income Credit refunds, a state obtained a list from IRS of taxpayers with freezes on their accounts, which it used to determine whether to also freeze a taxpayer's account.

Function: Account adjustments to tax, penalties, and interest, including amended returns, taxpayer requests, claims, and service-initiated changes.
Example: In one state, after IRS audits a taxpayer's return, it informs the taxpayer that any changes to federal tax liability may affect state tax liability and that the taxpayer may be required to file an amended state tax return.

Function: The matching of information documents against tax returns and accounts to identify nonfilers.
Example: IRS obtained state tax filing records to identify taxpayers filing a state income tax return but not a federal income tax return.

Function: The selection and examination of income, excise, employment, employee plans/exempt organizations (EP/EO), and estate and gift returns to determine tax liability (including appellate review); also includes EP/EO determinations.
Example: IRS and a state conducted a joint sweep of auto dealerships to determine whether they were filing IRS Form 8300s and reporting state sales tax for cash sales over $10,000.

Function: All efforts to secure payment of tax liabilities.
Example: In some states, if a taxpayer owes both IRS and the state revenue agency, the taxpayer can go to either one and set up an installment agreement to resolve both accounts.

Function: All civil and criminal investigative activities.
Example: Two IRS districts and a state department of revenue have a project to identify and conduct joint investigations of individuals who are filing fraudulent tax returns electronically.

Function: The development and maintenance of information systems, including telecommunications, systems security and privacy, and systems standards.
Example: A state department of revenue provides one IRS district with all information the state receives on fuel sales, purchases, licenses, and distributors' reports. Using this information, the district created an automated database to promote and monitor compliance in the motor fuel industry.

Function: Financial, human resource, and asset management.
Example: In several states, IRS and the state revenue office share training resources. For example, an IRS district trained state revenue employees on federal corporate tax laws.

Major contributors: Ronald W. Jones, Evaluator-in-Charge; Troy D. Thompson, Evaluator.

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 6015
Gaithersburg, MD 20884-6015

Orders may also be placed in person at Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW), U.S. General Accounting Office, Washington, DC; by calling (202) 512-6000; by fax at (301) 258-4066; or by TDD at (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
GAO reviewed the status of the Internal Revenue Service's (IRS) FedState Cooperative Program, focusing on: (1) potential program benefits to taxpayers, IRS, and the states; (2) conditions that may impede program success; and (3) states' concerns on the impact of IRS reorganization on the program. GAO found that: (1) the potential benefits of the FedState program include increasing taxpayer compliance, reducing taxpayer burden, and improving the efficiency of tax administration functions; (2) the FedState joint electronic filing program reduces administrative costs for IRS and the states by detecting math errors and eliminating transcription errors; (3) taxpayer data tape exchanges enable IRS and the states to identify taxpayers who fail to file a return or who owe more taxes; (4) the state refund offset program allows IRS to levy state refunds to fulfill the federal tax debt; (5) IRS lacks a centralized, strategic plan for ensuring that the FedState program is achieving the agency's objectives; (6) IRS and the states do not have a system to monitor and assess the results of individual FedState projects; (7) IRS needs to establish performance-based criteria for the program so that district offices can make more informed decisions on resource allocations and program priorities; and (8) some states have expressed concern that the reorganization of IRS will have a negative impact on the FedState program.
During World War II, the U.S. government partnered with academic scientists in ad hoc laboratories and research groups to meet unique research and development (R&D) needs of the war effort. These efforts resulted in technologies such as the proximity fuse, advanced radar and sonar, and the atomic bomb. Those relationships were later restructured into federal research centers to retain academic scientists in U.S. efforts to continue advancements in technology, and by the mid-1960s the term "federally funded research and development centers" was applied to these entities. Since that time, the U.S. government has continued to rely on FFRDCs to develop technologies in areas such as combating terrorism and cancer, addressing energy challenges, and tackling evolving challenges in air travel. For example, one of DOE's laboratories was used to invent and develop the cyclotron, a particle accelerator that produces high-energy beams and has been critical to the field of nuclear physics for the past several decades. Today, FFRDCs support their sponsoring federal agencies in diverse fields of study. For example, DOE sponsors the most FFRDCs—16 in total—all of which are research laboratories that conduct work in such areas as nuclear weapons, renewable energy sources, and environmental management. DHS recently established two FFRDCs: one to develop countermeasures for biological warfare agents and the other to provide decision makers with advice and assistance in such areas as analyzing the vulnerabilities of the nation's critical infrastructures, setting standards for interoperability for field operators and first responders, and evaluating developing technologies for homeland security purposes. FFRDCs are privately owned but government-funded entities that have long-term relationships with one or more federal agencies to perform research and development and related tasks.
Even though they may be funded entirely, or nearly so, from the federal treasury, FFRDCs are regarded as contractors, not federal agencies. In some cases, Congress has specifically authorized agencies to establish FFRDCs. For example, the 1991 appropriation for the Internal Revenue Service authorized the IRS to spend up to $15 million to establish an FFRDC as part of its tax systems modernization program. According to the Federal Acquisition Regulation (FAR), FFRDCs are intended to meet special long-term research or development needs that cannot be met as effectively by existing in-house or contractor resources. In sponsoring an FFRDC, agencies draw on academic and private sector resources to accomplish tasks that are integral to the mission and operation of the sponsoring agency. The FAR notes that, in order to discharge their responsibilities to sponsoring agencies, FFRDCs have special access, beyond that which is common for normal contractual relationships, to government and supplier data—including sensitive and proprietary data—and other government resources. Furthermore, the FAR requires FFRDCs to operate in the public interest with objectivity and independence, to be free of organizational conflicts of interest, and to fully disclose their affairs to the sponsoring agencies. FFRDCs may be operated by a university or consortium of universities; by another nonprofit organization; or by a private industry contractor as an autonomous organization or a separate unit of a parent organization. Agencies develop sponsoring agreements with FFRDCs to establish their research and development missions and prescribe how they will interact with the agency; the agencies then contract with organizations to operate the FFRDCs to accomplish those missions. At some agencies the sponsoring agreement is a separate document that is incorporated into the contract; at other agencies the contract itself constitutes the sponsoring agreement.
The sponsoring agreement and contract together identify the scope, purpose, and mission of the FFRDC and the responsibilities of the contractor in ensuring that they are accomplished by the FFRDC. Although the contract or sponsoring agreement may take various forms, the FAR requires FFRDC sponsoring agreements to contain certain key terms and conditions. For example, the agreement term may not exceed 5 years, but it can be renewed periodically in increments not to exceed 5 years. Sponsoring agreements must also contain prohibitions against the FFRDC competing with non-FFRDCs in response to a federal agency request for proposals for other than the operation of an FFRDC. The agreement also must delineate whether and under what circumstances the FFRDC may accept work from other agencies. In addition, these agreements may identify cost elements requiring advance agreement if cost-type contracts are used and include considerations affecting negotiation of fees where fees are determined appropriate by sponsors. The National Science Foundation (NSF), which keeps general statistics on FFRDCs, identifies the following types of FFRDCs:

Research and development (R&D) laboratories: fill voids where in-house and private sector R&D centers are unable to meet core agency needs. These FFRDCs are used to maintain long-term competency in sophisticated technology areas and to develop and transfer important new technology to the private sector.

Study and analysis centers: provide independent analyses and advice in core areas important to their sponsors, including policy development, support for decision making, and identifying alternative approaches and new ideas on significant issues.
Systems engineering and integration centers: provide support for complex systems by assisting with the creation and choice of system concepts and architectures, the specification of technical system and subsystem requirements and interfaces, the development and acquisition of system hardware and software, the testing and verification of performance, the integration of new capabilities, and continuous improvement of system operations and logistics. The NSF maintains a master list of the current FFRDCs and collects funding data from their agency sponsors on an annual basis. According to NSF data, R&D funding for FFRDCs has risen steadily across the federal government, increasing 40 percent from fiscal year 1996 to 2005, from $6.9 billion to $9.7 billion. (See fig. 1 below.) This does not represent the full amount of funding provided to FFRDCs by federal agencies, however, since it does not include non-R&D funding. Nevertheless, it is the only centrally reported information on federal funding for FFRDCs. For a list of the 38 FFRDCs currently sponsored by the U.S. government, see appendix II. The four agencies we reviewed use cost-reimbursement contracts with the organizations that operate their FFRDCs, and three of these agencies generally use full and open competition in awarding these contracts. While the agencies require that their FFRDCs be free from organizational conflicts of interest in accordance with federal regulations, only DOD and DOE have agencywide requirements that prescribe specific areas that FFRDC contractors must address to ensure their employees are free from personal conflicts of interest. DHS and HHS policies do not specifically prescribe areas that contractors must include to address these conflicts. Federal law and regulations require federal contracts to be competed unless they fall under specific exceptions to full and open competition. 
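The NSF funding growth cited above can be verified with simple arithmetic. The sketch below is a rough check on the reported figures, not part of the NSF data itself:

```python
# Rough check of the reported growth in federal R&D funding for FFRDCs,
# using the NSF figures cited in the text (billions of dollars).
fy1996_funding = 6.9  # fiscal year 1996
fy2005_funding = 9.7  # fiscal year 2005

growth = (fy2005_funding - fy1996_funding) / fy1996_funding
print(f"FY1996-FY2005 growth: {growth:.1%}")  # about 40.6 percent
```

The computed 40.6 percent is consistent with the report's rounded "40 percent"; the small difference reflects rounding of the funding amounts.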
One such exception is awarding contracts to establish or maintain an essential engineering, research, or development capability to be provided by an FFRDC. While some agencies we reviewed awarded FFRDC contracts through other than full and open competition in the past, including sole-source contracts, three have generally used full and open competition in recent years. Starting in the mid-1990s, DOE took steps to improve FFRDC laboratory contractors' performance with a series of contracting reforms, including increasing the use of competition in selecting contractors for its labs. Subsequent legislation required DOE to compete the award and extension of contracts used at its labs, singling out the Ames Laboratory, Argonne National Laboratory, Lawrence Berkeley National Laboratory, Lawrence Livermore National Laboratory, and Los Alamos National Laboratory for mandatory competition because their contracts in effect at the time had been awarded more than 50 years earlier. In addition, according to DOE officials, the Los Alamos contract was competed due to performance concerns with the contractor, and Argonne West's contract was competed to combine its research mission with that of the Idaho National Engineering and Environmental Laboratory to form the Idaho National Laboratory. DOE now routinely uses competitive procedures on contracts for its FFRDC laboratories unless a justification for the use of other than competitive procedures is approved by the Secretary of Energy. Of its 16 FFRDCs, DOE has used full and open competition in awarding 13 contracts, is in the process of competing one contract, and plans to compete the remaining two contracts when their terms have been completed.
For the 13 contracts that have been competed, in 2 cases the incumbent contractor received the new contract award, in 8 cases a new consortium or limited liability corporation that included the incumbent contractor was formed, and in 3 cases a different contractor was awarded the contract. Other agencies also have used competitive procedures to award FFRDC contracts:

HHS has conducted full and open competition on the contract for its cancer research lab since its establishment in 1972, resulting in some change in contractors over the years. Recently, however, HHS noncompetitively renewed the contract with the incumbent contractor. The last time the contract was competed, in 2001, HHS received no offers other than from SAIC-Frederick, which has performed the contract satisfactorily since then. HHS publicly posted in FedBizOpps its intention to noncompetitively renew the operations and technical support contract with SAIC-Frederick for a potential 10-year period. Interested parties were allowed to submit capability statements, but despite some initial interest none were submitted.

DHS competed the initial contract awards for the start-up of its two FFRDCs, with the award of the first contract in 2004. DHS plans to compete the award of the next studies and analyses FFRDC contract this year.

In contrast, DOD continues to award its FFRDC contracts on a sole-source basis under statutory exemptions to competition. In the early 1990s, reports by a Senate subcommittee and a Defense Science Board task force criticized DOD's management and use of its FFRDCs, including a lack of competition in contract awards. This criticism mirrored an earlier GAO observation. GAO subsequently noted in a 1996 report, however, that DOD had begun to strengthen its process for justifying its use of FFRDCs under sole-source contracts for specific purposes.
DOD plans to continue its sole-source contracting for the three FFRDC contracts that are due for renewal in 2008 and the six contracts to be renewed in 2010. All of the FFRDC contracts we reviewed were cost-reimbursement contracts, most of which provided for payments of fixed, award, or incentive fees to the contractor in addition to reimbursement of incurred costs. According to the agencies we reviewed, fixed fees often are used when the FFRDC has working capital or other miscellaneous expense requirements that cannot be covered through reimbursing direct and indirect costs. Fixed fees generally account for a small percentage of the overall contract costs; for fiscal year 2007, fixed fees paid to the FFRDCs we reviewed varied from a low of about 0.1 percent to a high of 3 percent. Award or incentive fees, on the other hand, are intended to motivate contractors toward such goals as excellent technical performance and cost-effective management. These types of performance-based fees ranged from 1 to 7 percent at the agencies we reviewed. Among the agencies we reviewed, contract provisions on fees varied significantly: Most DOD contracts are cost-plus-fixed-fee, and DOD, as a general rule, does not provide award or incentive fees to its FFRDCs. DOD's FFRDC management plan—its internal guidance document for DOD entities that sponsor FFRDCs—limits fees to amounts needed to fund ordinary and necessary business expenses that may not be otherwise recoverable under the reimbursement rules that apply to these types of contracts. For example, the FFRDC operator may incur a one-time expense to buy an expensive piece of needed equipment, but the government's reimbursement rules require that this expense be recovered over several future years in accordance with an amortization schedule.
DOD’s management plan indicates that fees are necessary in such instances to enable the contractor to service the debt incurred to buy the equipment and maintain the cash flow needed for the contractor’s business operations. DOD officials told us they scrutinize these fees carefully and do not always pay them. For example, the contract between DOD and the Massachusetts Institute of Technology (MIT), which operates the Lincoln Laboratory FFRDC, specifies that MIT will not receive such fees. DOE and DHS use fixed fees, performance-based fees, and award terms, which can extend the length of the contract as a reward for good performance. For example, Sandia Corporation, a private company that operates Sandia National Laboratories, receives both a fixed fee and an incentive fee, which for fiscal year 2007 together amounted to about $23.2 million, an additional 1 percent beyond its estimated contract cost. In addition, Sandia Corporation has received award terms that have lengthened its contract by 10 years. HHS provides only performance-based fees to the private company that operates its one FFRDC. Rather than receiving direct appropriations, most FFRDCs are funded on a project-by-project basis by the customers, either within or outside of the sponsoring agency, that wish to use their services by using funds allocated to a program or office. FFRDC contracts generally specify a total estimated cost for work to be performed and provide for the issuance of modifications or orders for the performance of specific projects and tasks during the period of the contract. Congressional appropriations conferees sometimes directed specific funding for some DHS and DOD FFRDCs in conference reports accompanying sponsoring agencies’ appropriations. 
For example, although according to DOD officials, 97 percent of its FFRDC funding comes from program or office allocations to fund specific projects, half of its FFRDCs receive some directed amounts specified in connection with DOD’s annual appropriations process. Specifically, for fiscal year 2008, the following DOD FFRDCs received conferee-directed funding in the DOD appropriations conference report: MIT Lincoln Laboratory Research Program, $30 million; the Software Engineering Institute, $26 million; the Center for Naval Analyses, $49 million; the RAND Project Air Force, $31 million; and the Arroyo Center, $20 million. In addition, DOD officials noted that the congressional defense committees sometimes direct DOD’s FFRDCs to perform specific studies for these committees through legislation or in committee reports. In fiscal year 2008, two DOD FFRDCs conducted 16 congressionally requested studies. As FFRDCs may have access to sensitive and proprietary information and because of the special relationship between sponsoring agencies and their FFRDCs, the FAR requires that FFRDC contractors be free from organizational conflicts of interest. In addition, we recently reported that, given the expanding roles that contractor employees play, government officials from the Office of Government Ethics and DOD believe that current requirements are inadequate to address potential personal conflicts of interest of contractor employees in positions to influence agency decisions. While each agency we reviewed requires FFRDC operators to be free of organizational conflicts of interest, DOD and DOE prescribe specific areas that FFRDC contractors must address to ensure their employees are free from personal conflicts of interest. 
The FAR states that an organizational conflict of interest exists when, because of other interests or relationships, an entity is unable or potentially unable to render impartial assistance or advice to the government, or might have an unfair competitive advantage. Because sponsors rely on FFRDCs to give impartial, technically sound, objective assistance or advice, FFRDCs are required to conduct their business in a manner befitting their special relationship with the government, to operate in the public interest with objectivity and independence, to be free from organizational conflicts of interest, and to fully disclose their affairs to the sponsoring agency. Each sponsoring agency we reviewed included conflict-of-interest clauses in its sponsoring agreements with the contractors operating its FFRDCs. For example, a DHS FFRDC contract includes a clause that specifically prohibits contractors that have developed specifications or statements of work for solicitations from performing the work as either a prime or first-tier subcontractor. In addition to organizational conflict-of-interest requirements, DOD and DOE have specific requirements for their FFRDC contractors to guard against personal conflicts of interest of their employees. For purposes of this report, a personal conflict of interest may occur when an individual employed by an organization is in a position to materially influence an agency's recommendations or decisions and, because of his or her personal activities, relationships, or financial interests, may either lack or appear to lack objectivity or may appear to be unduly influenced by personal financial interests.
In January 2007, the Under Secretary of Defense (Acquisition, Technology, and Logistics) implemented an updated standard conflict-of-interest policy for all of DOD's FFRDCs that requires FFRDC contractors to establish policies to address major areas of personal conflicts of interest such as gifts, outside activities, and financial interests. The updated policy and implementing procedures now are included in all DOD FFRDC sponsoring agreements and incorporated into the DOD FFRDC operating contracts. This action was prompted by public and congressional scrutiny of a perceived conflict of interest by the president of a DOD FFRDC, who then voluntarily resigned. As a result, DOD's Deputy General Counsel (Acquisition and Logistics) reviewed the conflict-of-interest policies and procedures in place at each of its FFRDCs and determined that although sponsoring agreements, contracts, and internal policies were adequate, they should be revised to better protect DOD from employee-related conflicts. DOD's revised policy states that conflicts of interest could diminish an FFRDC's objectivity and capacity to give impartial, technically sound, objective assistance or advice, which is essential to the research, particularly with regard to FFRDCs' access to sensitive information. Therefore, the policy provides that FFRDC conflict-of-interest policies address such issues as gifts and outside activities, and it requires an annual submission of statements of financial interests from all FFRDC personnel in a position to make or materially influence research findings or recommendations that might affect outside interests. DOE's FFRDCs, which operate under management and operating (M&O) contracts—a special FAR designation for government-owned, contractor-operated facilities such as DOE's—have additional provisions for addressing personal conflicts of interest. The provisions address such areas as reporting any outside employment that may constitute a personal conflict of interest.
In addition, the National Nuclear Security Administration (NNSA), which sponsors three of DOE’s FFRDCs, is planning to implement additional requirements in its laboratory contracts later this year requiring contractors to disclose all employee personal conflicts of interest, not just the outside employment that is currently required. An NNSA procurement official noted that other personal conflicts of interest may include any relationship of an employee, subcontractor employee, or consultant that may impair objectivity in performing contract work. NNSA officials stated that the agency plans to share the policy with the DOE policy office for potential application across the department. Currently, DHS and HHS policies do not specifically prescribe the areas that contractors must address regarding employees’ personal conflicts. However, DHS officials stated that they provided guidance to the two contractors that operate DHS’s FFRDCs to implement requirements addressing some of their employees’ personal conflicts with DHS’s interests. In addition, both DHS and HHS FFRDC contractors require their staff to avoid or disclose financial interests or outside activities that may conflict with the interests of the company. For example, the contractor operating the FFRDC for HHS requires about 20 percent of its employees to report activities that may constitute a conflict with the company’s interests but allows the rest of its staff to determine for themselves when they need to report. In May 2008, we reported that officials from the Office of Government Ethics expressed concerns that current federal requirements and policies are inadequate to prevent certain kinds of ethical violations on the part of contractor employees, particularly with regard to financial conflicts of interest, impaired impartiality, and misuse of information and authority. 
The acting director identified particular concerns with such conflicts of interest in the management and operations of large research facilities and laboratories. Our report noted that DOD ethics officials had generally the same concerns. Therefore, we recommended that DOD implement personal conflict-of-interest safeguards—similar to those for federal employees—for certain contractor employees. Sponsoring agencies take various approaches in exercising oversight of their FFRDCs. The agencies determine the appropriateness of work conducted by their FFRDCs; perform ongoing and annual assessments of performance, costs, and internal controls; and conduct comprehensive reviews prior to renewing sponsoring agreements. Each agency develops its own processes in these areas, and no formal interagency mechanisms exist to facilitate the sharing of FFRDC oversight best practices. To ensure that work remains within each FFRDC’s purpose, mission, scope of effort, and special competency, sponsoring agencies develop and approve annual research plans for the FFRDCs and review and approve FFRDC work assigned on a project-by-project basis. While the majority of each FFRDC’s work is done for its sponsoring agency, FFRDCs may perform work for other institutions, subject to sponsoring agency approval. Officials at DOD, DOE, and DHS identified the processes they use to develop annual research plans that describe each FFRDC’s research agenda. For example, DHS designates an executive agent to ensure that its FFRDC is used for the agency’s intended purposes. Each year DHS develops a research plan that is reviewed and approved by the executive agent, including any subsequent changes. DHS also uses an Advisory Group to ensure that its FFRDCs produce work consistent with the sponsoring agreement. DOD has a similar mechanism for approving the annual research plan for its Lincoln Laboratory FFRDC. This FFRDC has a Joint Advisory Committee that annually reviews and approves the proposed research plan. 
Members of this committee include representatives from the various DOD services—e.g., Air Force, Army, and Navy—who are the users of the laboratory’s R&D capabilities. Of the four agencies included in our review, only HHS does not create a separate annual research plan for its FFRDC. Instead, the work at HHS’s FFRDC is guided by the National Cancer Institute’s overall mission, which is described in its annual budgetary and periodic strategic planning documents. In determining the proposed research plan, DOD must abide by congressionally set workload caps. These caps were imposed in the 1990s in response to concerns that DOD was inefficiently using its FFRDCs; therefore, each fiscal year Congress sets an annual limitation on the staff years of technical effort (STE) that DOD FFRDCs can use to conduct work for the agency. The STE limitations aim to ensure that (1) work is appropriate and (2) limited resources are used for DOD’s highest priorities. Congress also sets an additional workload cap on DOD’s FFRDCs for certain intelligence programs. Once DOD receives from Congress the annual total for STEs, DOD’s Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics allocates them across DOD’s FFRDCs based on priorities set forth in the annual research plan developed by each FFRDC. DOD officials observed that while the overall DOD budget has increased about 40 percent since the early 1990s, the STE caps have remained steady; therefore, DOD must turn away or defer some FFRDC-appropriate work to subsequent years. Although the majority of work that DOD’s FFRDCs conduct is subject to these limitations, the work that DOD FFRDCs conduct for non-DOD entities is not. Each sponsoring agency also reviews and approves tasks for individual FFRDC projects to make sure that those tasks (1) are consistent with the core statement of the FFRDC and (2) would not constitute a “personal service” or an inherently governmental function. 
Listed below are examples of procedures that agencies included in our review use to approve tasks for individual projects:

DOD sponsors generally incorporate into their sponsoring agreements guidelines for performance of work by the FFRDC. The work is screened for appropriateness at various levels, beginning with the FFRDC clients who request the work, then program and contract managers, and finally the primary sponsor, which reviews and approves it. In some cases, projects are entered into a computer-based tool that the Air Force has developed to determine its overall requirements for the year. The tool is intended to assist the Air Force in prioritizing requests for its FFRDC and in ensuring that requested work is in accordance with guidelines and that potential alternative sources have been considered.

DOE FFRDCs must document all DOE-funded projects using work authorizations to help ensure that the projects are consistent with DOE’s budget execution and program evaluation requirements. In addition, DOE uses an independent scientific peer-review approach—including faculty members and executives from other laboratories—at several of its FFRDC laboratories to ensure that the work performed is appropriate for the FFRDC and scientifically sound. In some cases, DOE’s Office of Science holds scientific merit competitions among national laboratories (including FFRDCs), universities, and other research organizations for some R&D funding for specific projects.

HHS uses an automated “yellow task” system to determine whether work is appropriate for its FFRDC, and several officials—including the government contracting officer and the overseeing project officer for the FFRDC—must approve requests for work against a set of criteria. HHS also requires a concept review by advisory boards for the various HHS institutes to ensure that the concept is appropriate for the FFRDC and meets its mission or special competency. 
DHS requires certain officials in its sponsoring office to conduct a suitability review, using established procedures for reviewing and approving DHS-sponsored tasks. This review is required under DHS’s Management Directive for FFRDCs.

FFRDCs are required to have their sponsors review and approve any work they conduct for others, and the four agencies included in our review have policies and procedures to do so. FFRDCs may conduct work for others when the required capabilities are not otherwise available from the private sector. Work for others can be done for federal agencies, private sector companies, and state and local governments. The sponsoring agency of an FFRDC makes this work available, with full costs charged to the requesting entity, to provide research and technical assistance that help solve problems. At laboratory FFRDCs, work for others can include creating working models or prototypes. All work placed with an FFRDC must be within the purpose, mission, general scope of effort, or special competency of the FFRDC. Work for others is considered a technology transfer mechanism, which helps in sharing knowledge and skills between the government and the private sector. Under work for others, according to DOD officials and federal regulation, title to intellectual property generally belongs to the FFRDC conducting the work, and the government may obtain a nonexclusive, royalty-free license to such intellectual property or may choose to obtain exclusive rights. As required by the FAR, the sponsoring agreements of the agencies we reviewed identified the extent to which their FFRDCs may perform work for entities other than the sponsor (other federal agencies, state or local governments, nonprofit or for-profit organizations, etc.) and the procedures that must be followed by the sponsoring agency and the FFRDC. 
In addition, according to agency officials, FFRDCs have a responsibility to bring inquiries about potential research for other entities to their primary sponsor’s attention for approval. Agency officials stated that they work with their FFRDCs when such situations arise. DOE’s Office of Science established a “Work for Others Program” for all of its FFRDC laboratories. Under this program, the contractor operating the FFRDC must draft, implement, and maintain formal policies, practices, and procedures, which must be submitted to the contracting officer for review and approval. In addition, DOE may conduct periodic appraisals of the contractor’s compliance with its Work for Others Program policies, practices, and procedures. For NNSA, officials reported that the work-for-others process at Sandia National Laboratories requires DOE approval before the Sandia Corporation develops the proposed statement of work, which is then sent to DOE’s site office for review and approval. At DHS, each FFRDC includes the work-for-others policy in its management plan. For example, one management plan states that the FFRDC may perform work for others and that such work is subject to review by the sponsoring agency for compliance with criteria mutually agreed upon by the sponsor and the FFRDC contractor. The DHS FFRDC laboratory director said he routinely approves work-for-others requests but gives first priority to DHS-sponsored work. The sponsor for this FFRDC also periodically assesses whether the FFRDC’s work for others impairs its ability to perform work for its sponsor. HHS and DOD also have work-for-others programs for the FFRDCs they sponsor. For example, at HHS’s FFRDC the program is conducted under a bilateral contract between the entity requesting the work and the FFRDC to perform a defined scope of work for a defined cost. 
HHS developed a standard Work for Others Agreement for its FFRDC, the terms and conditions of which help ensure that the FFRDC complies with applicable laws, regulations, policies, and directives specified in its contract with HHS. Some agency sponsors report that work for others at their FFRDCs has grown in the past few years. For example, DOE officials said that work for others at Sandia National Laboratories related to nanotechnologies and cognitive sciences has grown in the last 3 years. As shown in table 1, the amount of work for others has increased since fiscal year 2001 for many of the FFRDCs included in our review. While funding for work for others has increased, some agencies in our review reported limiting the amount of such work their FFRDCs conduct. For example, DOE’s Office of Science annually approves overall work-for-others funding levels at its laboratories based on a request from the laboratory and a recommendation from the responsible site office. Any work-for-others program that exceeds 20 percent of the laboratory’s operating budget, or any request that represents a significant change from the previous year’s work-for-others program, is reviewed in depth before approval is provided. Similarly, DOE officials limit commitments to conduct work for others at the National Renewable Energy Laboratory to about 10 percent of the laboratory’s total workload. In addition to ensuring that work is appropriate for their FFRDCs, the four sponsoring agencies in our case study regularly review the contractors’ performance in operating the FFRDCs, including reviewing and approving costs incurred in operations and internal control mechanisms. Agency performance evaluations for FFRDC contractors vary, particularly between agencies that incorporate performance elements into their contracts and those that do not. 
Furthermore, contracting officers at each agency regularly review costs to ensure that they are appropriate, in some cases relying on audits of costs and internal controls to highlight any potential issues. All four agencies conduct at least annual reviews of the performance of their FFRDCs and contractors. At three agencies, the outcomes of these reviews provide the basis for contractors to earn performance-based incentives or awards. Specifically, DOE, HHS, and DHS provide for award fees to motivate contractors toward high performance, and contractors operating FFRDCs for DOE and DHS may earn additional contract extensions by exceeding performance expectations. DOE uses a performance-based contracting approach with its FFRDCs, which includes several mechanisms to assess performance. First, DOE requires contractors to conduct annual self-assessments of their management and operational performance. Also, contracting officers conduct annual assessments of the performance of the FFRDC contractor, relying in part on user satisfaction surveys. All of this input contributes to each laboratory’s annual assessment rating. For example, Sandia National Laboratories, operated by Sandia Corporation (a subsidiary of Lockheed Martin), received an overall rating of “outstanding” for fiscal year 2007 and was awarded 91 percent of its available award fee ($7.6 million of a possible total fee of $8.4 million). DOE noted that Sandia National Laboratories’ scientific and engineering support of U.S. national security was an exceptional performance area. DOE publishes such “report cards” for its laboratories on the Internet. DOE includes detailed performance requirements in each contract in a Performance Evaluation and Measurement Plan that is organized by goals, objectives, measures, and targets. The DOE Office of Science mandates that each of its 10 FFRDC laboratories establish the same eight goals in its contractual plan. 
For example, the Ernest Orlando Lawrence Berkeley National Laboratory, operated by the University of California, received high ratings for efficient and effective mission accomplishment and for science and technology program management. These ratings resulted in an award of 94 percent, or $4.2 million, of the total available fee of $4.5 million. HHS, which also uses performance-based contracting, has designated certain government personnel to be responsible for evaluating the FFRDC contractor. This process includes different levels of review, from coordinators who review performance evaluations to an FFRDC Performance Evaluation Board, which is responsible for assessing the contractor’s overall performance. The board rates each area of evaluation based on an established Performance Rating System to determine the amount of the contractor’s award fee. In fiscal year 2007, the National Cancer Institute at Frederick, operated by Science Applications International Corporation-Frederick (a subsidiary of Science Applications International Corporation), received 92 percent of its available award fee, or $6.9 million of a possible $7.4 million. Similar to the other agencies, DHS regularly conducts performance reviews throughout the life cycle of its FFRDC contracts. These include program reviews as described in the sponsoring agreement, midyear status reviews, technical progress reports, monthly and quarterly reports, and annual stakeholder surveys to ensure that the FFRDC is meeting customer needs. DHS also drafts a multiyear improvement plan and collects performance metrics as evidence of the FFRDC’s performance. For fiscal year 2007, Battelle National Biodefense Institute, operating the National Biodefense Analysis and Countermeasures Center, received 82 percent of its performance-based award fee, amounting to $1.4 million. 
According to DHS officials, Analytic Services, Inc., which operates the Homeland Security Institute, received a fixed fee of about 2 percent, or approximately $0.68 million, for fiscal year 2007. DOD conducts annual performance reviews and other internal reviews, such as periodic program management reviews and annual customer surveys, to monitor the performance of its FFRDCs in meeting their customers’ expectations. As part of this review process, major users are asked to provide their perspectives on such factors as the use of and continuing need for the FFRDC, and how these users distinguish work to be performed by the FFRDC from work to be performed by others. According to DOD, these performance evaluations provide essential input to help it assess the effectiveness and efficiency of the FFRDC’s operations. Typically the performance reviews obtain ratings from FFRDC users and sponsors on a variety of factors, including the quality and value of the work conducted by the FFRDCs, as well as their ability to meet technical needs, provide timely and responsive service, and manage costs. Federal regulations, policies, and contracts establish various cost, accounting, and auditing controls that agencies use to assess the adequacy of FFRDC management in ensuring cost-effective operations and to ensure that the costs of services provided to the government are reasonable. Sponsors of the FFRDCs we reviewed employ a variety of financial and auditing oversight mechanisms to review contractors’ management controls, including incurred cost audits, general financial and operational audits, annual organizational audits, and audited financial statements. These mechanisms differ, depending on the agencies involved and the type of organization operating the FFRDCs. Under cost-reimbursement contracts, the costs incurred are subject to the cost principles applicable to the type of entity operating the FFRDC. 
Most FFRDC contracts we examined include a standard clause on allowable costs that limits contract costs to amounts that are reasonable and in compliance with applicable provisions of the FAR. Under the FAR, contracting officers are responsible for authorizing cost-reimbursement payments and may request audits at their discretion before a payment is made. In addition, when an allowable cost clause is included in a contract, the FAR requires that an indirect cost rate proposal be submitted annually for audit. At DOD, the Defense Contract Audit Agency (DCAA) generally performs both annual incurred cost audits and close-out audits for completed contracts and task orders at the end of an FFRDC’s 5-year contract term. The audit results are included in the comprehensive review of DOD’s continued need for its FFRDCs. DCAA also performs these types of audits for DHS’s FFRDCs. At DOE, the Office of the Inspector General is responsible for incurred cost audits for major facilities contractors. At HHS, officials stated that while the contracting officer for its FFRDC regularly reviews the incurred costs, no audits of these costs have been performed. Agencies and FFRDC contractors also conduct financial and operational audits in addition to incurred cost audits. DOE relies primarily upon FFRDC contractors’ annual internal audits rather than on third-party monitoring through external audits. These internal audits are designed to implement DOE’s Cooperative Audit Strategy—a program that partners DOE’s Inspector General with contractors’ internal audit groups to maximize the overall audit coverage of M&O contractors’ operations and to fulfill the Inspector General’s responsibility for auditing the costs incurred by major facilities contractors. 
This cooperative audit strategy permits the Inspector General to make use of the work of contractors’ internal audit organizations to perform operational and financial audits, including incurred cost audits, and to assess the adequacy of contractors’ management control systems. DHS and DOD generally rely on audits performed by those agencies, a designated audit agency, or an accounting firm, though their FFRDC contractors usually perform some degree of internal audit or review as part of their overall management activity. In addition, all nonprofits and educational institutions that annually expend more than $500,000 in federal awards—including those that operate FFRDCs—are subject to the Single Audit Act, which requires annual audits of (1) financial statements, (2) internal controls, and (3) compliance with laws and regulations. We have previously reported that these audits constitute a key accountability mechanism for federal awards and generally are performed by independent auditors. At DOD, for example, DCAA participates in single audits, normally on a “coordinated basis”—at the election of the organization being audited—with the audited organization’s independent public accountant. The financial statements, schedules, corrective action plan, and audit reports make up the single audit package, which the audited organization is responsible for submitting to a federal clearinghouse designated by OMB to receive, distribute, and retain. DOD’s Office of Inspector General, for example, as a responsible federal agency, receives all single audit submissions for nonprofits and educational institutions that operate DOD’s FFRDCs. These audit results are employed by DOD as partial evidence of its FFRDCs’ cost-effectiveness and incorporated in the 5-year comprehensive reviews. 
These annual single audits of nonprofit and educational FFRDC contractors are a useful adjunct to the other cost, accounting, and auditing controls discussed previously, designed to help determine contractor effectiveness, efficiency, and accountability in the management and operation of their FFRDCs. Private contractors whose securities are publicly traded on the exchanges—including those that operate FFRDCs—are registered with the Securities and Exchange Commission (SEC) and are required to file audited financial statements with the SEC. These audited statements must be prepared in conformity with generally accepted accounting principles (GAAP) and securities laws and regulations, including Sarbanes-Oxley, that address governance, auditing, and financial reporting. These financial statements are designed to disclose information for the benefit of the investing public, not to meet government agencies’ information needs. Accordingly, SAIC and Lockheed Martin—the private contractors that manage the National Cancer Institute at Frederick and Sandia National Laboratories, respectively—prepare audited financial statements for their corporate entities but do not separately report information on their individual FFRDCs’ operations. Finally, even though audited financial statements are not required of FFRDCs operated by universities and nonprofits, some of the FFRDCs at the agencies we reviewed have audited financial statements prepared solely for their own operations; DOD’s Aerospace FFRDC and DHS’s Homeland Security Institute and National Biodefense Analysis and Countermeasures Center are examples. The financial operations of most others, however, are included in the audited financial statements of their parent organizations or operating contractors. Some contractors, like MITRE, which manages not only DOD’s C3I FFRDC but also two others (one for the Federal Aviation Administration and one for the Internal Revenue Service), provide supplemental schedules, with balance sheets, revenues and expenses, and sources and uses of funds, for all three FFRDCs. 
Others, like the Institute for Defense Analyses, which operates two other FFRDCs in addition to the Studies and Analyses Center for DOD, provide only a consolidated corporate statement with no information on specific FFRDCs. The FAR requires that a comprehensive review be undertaken prior to extending a sponsoring agreement for an FFRDC. We found that the four agencies in our case study were conducting and documenting these reviews, but noted that each agency implements this requirement according to its own distinct management policies, procedures, and practices. In the reviews conducted prior to agreement renewal, sponsoring agencies should address the following five areas identified by the FAR:

an examination of the continued need for the FFRDC to address its sponsor’s technical needs and mission requirements;

consideration of alternative sources, if any, to meet those needs;

an assessment of the FFRDC’s efficiency and effectiveness in meeting the sponsor’s needs, including objectivity, independence, quick response capability, currency in its field(s) of expertise, and familiarity with the sponsor;

an assessment of the adequacy of FFRDC management in ensuring a cost-effective operation; and

a determination that the original reason for establishing the FFRDC still exists and that the sponsoring agreement is in compliance with FAR requirements for such agreements.

DOD sponsoring offices begin conducting detailed analyses for each of the five FAR review criteria approximately 1 to 2 years in advance of the renewal date. Because DOD has received criticism in the past for its lack of competition in awarding FFRDC contracts, it now conducts detailed and lengthy comprehensive reviews prior to renewing FFRDC sponsoring agreements and contracts with incumbent providers. DOD’s FFRDC Management Plan lays out procedures to help provide consistency and thoroughness in meeting FAR provisions for the comprehensive review process. 
DOD procedures require, and the comprehensive reviews we examined generally provided, detailed examinations of the mission and technical requirements of each FFRDC user, and explanations of why capabilities cannot be provided as effectively by alternative sources. For example, DOD convened a high-level, independent Technical Review Panel to review whether Lincoln Laboratory’s research programs were within its mission and whether the research was effective, of high technical quality, and of critical importance to DOD. The panel—composed of a former Assistant Secretary of the Air Force, a former president of another FFRDC, former senior military officers, and a high-level industry representative—found that no other organization had the capacity to conduct a comparable research program. In addition, DOD sponsors use information from annual surveys of FFRDC users that address such performance areas as cost effectiveness and technical expertise. Determinations to continue or terminate the FFRDC agreement are made by the heads of sponsoring DOD components (e.g., the Secretary of the Army or Air Force) with review and concurrence by the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics. DOE has a documented comprehensive review process that explicitly requires DOE sponsors to assess the use of and continued need for the FFRDC before the term of the agreement expires. DOE’s process requires that the review be conducted at the same time as the review regarding the decision to extend (by option) or compete its FFRDC operating contract. According to DOE’s regulation, the option period for these contracts may not exceed 5 years, and the total term of the contract, including any options exercised, may not exceed 10 years. DOE relies on information developed as part of its annual performance review assessments, as well as information developed through the contractor’s internal audit process, to make this determination. 
The comprehensive review conducted prior to the most recent award of the contract to operate Sandia National Laboratories concluded that the FFRDC’s overall performance over the preceding 6 years had been outstanding. The Secretary of Energy determined that the criteria for establishing the FFRDC continued to be satisfied and that the sponsoring agreement was in compliance with FAR provisions. At DHS, we found that its guidance and process for the comprehensive review mirror many aspects of the DOD process. DHS has undertaken only one such review to date, which was completed in May 2008. As of the time we completed our work, DHS officials told us that the documentation supporting the agency’s review had not yet been approved for release. HHS—in contrast to the structured review processes of the other agencies—relies on the judgment of the sponsoring office’s senior management team, which reviews the need for continued sponsorship of the FFRDC and determines whether it meets the FAR requirements. Agency officials stated that this review relies on a discussion of the FFRDC’s ability to meet the agency’s needs within the FAR criteria, but noted that there are no formal procedures laid out for this process. The final determination is approved by the director of the National Cancer Institute and then the director of the National Institutes of Health. Some agencies have used the experiences of other agencies as a model for oversight of their own FFRDCs. There is, however, no formal mechanism for sharing best practices and lessons learned among sponsoring agencies. DHS officials have adopted several of DOD’s and DOE’s policies and procedures for managing FFRDCs to help their newly created FFRDCs gain efficiencies. DHS mirrored most of DOD’s FFRDC Management Plan, and officials have stated that limitations like DOD’s STE caps could be a useful tool for focusing FFRDCs on the most strategic and critical work for the agency. 
Also, DHS officials stated that they have made use of DOE’s experience in contracting for and overseeing the operation of its laboratories, such as including a DOE official in the DHS process to select a contractor to operate its laboratory FFRDC. In addition, HHS officials said they are incorporating the DOE Blue Ribbon Report recommendation to set aside a portion of the incentive fee paid on its FFRDC contract to reward scientific innovations or research. The idea for the new contract is to base 80 percent of the available award fee in a performance period on operations and to use the remaining 20 percent to reward innovation. HHS also may adopt the technique used by DOE of providing for contract extensions on the basis of demonstrated exceptional performance. To take advantage of others’ experiences, some FFRDCs sponsored by particular agencies have formed informal groups to share information. For example, DOD’s FFRDCs have formed informal groups at the functional level—Chief Financial Officers, Chief Technology Officers, and General Counsels—which meet periodically to share information on issues of common concern. In addition, security personnel from the DOD FFRDC contractors meet once a year to discuss issues related to security and export control. Contractor officials at Sandia National Laboratories said they share best practices for operating DOE’s laboratory FFRDCs at forums such as the National Laboratory Improvement Council. This Council was also cited in a DOE review of management best practices for the national laboratories as one of the few groups that address a broader and more integrated agenda among laboratories. Despite these instances of information sharing within agencies, and despite some officials’ acknowledgment of the potential benefits of such knowledge sharing, no formal mechanisms exist for sharing information across agencies that sponsor and oversee FFRDCs. 
We reported in 2005 that federal agencies often carry out related programs in a fragmented, uncoordinated way, resulting in a patchwork of programs that can waste scarce funds, confuse and frustrate program customers, and limit the overall effectiveness of the federal effort. The report suggested frequent communication across agency boundaries can prevent misunderstandings, promote compatibility of standards, policies, and procedures, and enhance collaboration. For example, the Federal Laboratory Consortium for Technical Transfer was created to share information across national laboratories. This includes the FFRDC laboratories, but not the other types of FFRDCs. Some agency officials stated that there would be benefits to sharing such best practices. All federal agencies that sponsor FFRDCs are subject to the same federal regulations, and each agency included in our review has developed its own processes and procedures to ensure compliance and conduct oversight of its FFRDCs. For the most part the differences in approaches are not of great consequence. In at least one key area, however, the different approaches have the potential to produce significantly different results. Specifically, while all FFRDCs are required to address organizational conflicts of interest, only DOD and DOE have requirements that their FFRDC contractors address specific areas of personal conflicts of interest of their employees. In light of the special relationship that FFRDCs have with their sponsoring agencies, which often involves access to sensitive or confidential information, it is critical not only that the FFRDC as an entity but also that employees of the entity in positions to make or influence research findings or agency decision making be free from conflicts. Lacking such safeguards, the FFRDC’s objectivity and ability to provide impartial, technically sound, objective assistance or advice may be diminished. 
The two agencies with the most experience sponsoring FFRDCs have recognized this gap and have taken steps to address personal conflicts of interest. These steps are consistent with our recent recommendation to DOD that highlighted the need for personal conflicts-of-interest safeguards for certain contractor employees. The other agencies included in our review of FFRDCs could benefit from additional protections in the area of personal conflicts of interest. Currently, although DHS and HHS have policies that generally require their FFRDC contractors to implement such safeguards, they lack the specificity needed to ensure their FFRDC contractors will consistently address employees’ personal conflicts of interest. Conflict-of-interest requirements are only one of several areas where agencies that sponsor FFRDCs can learn from each other. Other areas include the use of effective and efficient oversight mechanisms such as incentive and award fees, obtaining competition, and conducting comprehensive reviews. In the absence of established knowledge-sharing mechanisms, however, agencies may be missing opportunities to enhance their management and oversight practices. Sharing knowledge among agencies that sponsor FFRDCs, as has been done informally in some instances, could help to ensure that agencies are aware of all the various tools available to enhance their ability to effectively oversee their FFRDCs. 
To ensure that FFRDC employees operate in the government’s best interest, we recommend that the Secretary of Homeland Security revise agency policies to address specific areas for potential personal conflicts of interest for FFRDC personnel in a position to make or materially influence research findings or agency decision making; and that the Secretary of Health and Human Services review agency policy regarding personal conflicts of interest for its sponsored FFRDC and revise as appropriate to ensure that this policy addresses all personnel in a position to make or materially influence research findings or agency decision making. To improve the sharing of oversight best practices among agencies that sponsor FFRDCs, we recommend that the Secretaries of Energy, Defense, Homeland Security, and Health and Human Services, which together sponsor the vast majority of the government’s FFRDCs, take the lead in establishing an ongoing forum for government personnel from these and other agencies that sponsor FFRDCs to discuss their agencies’ FFRDC policies and practices. Areas for knowledge sharing could include, for example, implementing personal conflicts of interest safeguards and processes for completing the justification reviews prior to renewing sponsoring agreements, among others. The Departments of Health and Human Services and Homeland Security concurred with our recommendation that they revise their conflict of interest policies. In addition, the departments of Defense, Energy, and Homeland Security all concurred with our recommendation to establish a forum to share best practices, while HHS is considering participation in such a forum. We received letters from Defense, Energy, and Health and Human Services, which are reprinted in appendixes III, IV, and V, respectively. In addition, the departments of Health and Human Services and Homeland Security provided technical comments, which we incorporated where appropriate. 
As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this report. We then will provide copies of this report to the Secretaries of Defense, Energy, Health and Human Services, and Homeland Security and other interested parties. In addition, this report will be made available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact us at (202) 512-4841 or woodsw@gao.gov or (202) 512-9846 or mittala@gao.gov. Key contributors to this report are acknowledged in appendix VI. To conduct this review, we chose a nongeneralizable sample of four of the nine federal agencies that sponsor FFRDCs: the departments of Energy (DOE) and Defense (DOD) have the longest histories in sponsoring federally funded research and development centers (FFRDCs) and sponsor the most—16 and 10, respectively; the Department of Homeland Security (DHS) has the 2 most recently established FFRDCs; the Department of Health and Human Services (HHS) has 1 FFRDC laboratory. From the collective 29 FFRDCs that those four agencies sponsor, we selected a nongeneralizable sample of 8 FFRDCs that represented variation among the types of operating contractor, including some operated by universities, some by nonprofits, and some by private industry. Within DOD and DHS, we chose FFRDCs that represent the variation among the types these two agencies sponsor, while DOE and HHS only sponsor laboratory-type FFRDCs. See appendix II for the FFRDCs included in our case study. 
To identify sponsors’ contracting and oversight methods at the four agencies in our case study, we interviewed federal department officials at each office that sponsors FFRDCs as well as offices that have contractor management roles and audit roles: (1) DOE’s Office of Science, National Nuclear Security Administration, Office of Energy Efficiency and Renewable Energy, Office of Environmental Management, Office of Nuclear Energy, and Office of Inspector General; (2) DOD’s departments of the Navy, Air Force, and Army; Office of the Secretary of Defense; Office of Acquisition, Technology, and Logistics; Defense Contract Audit Agency; and the Defense Contract Management Agency; (3) HHS’s National Institutes of Health, National Cancer Institute, and National Institute of Allergy and Infectious Diseases; and (4) DHS’s Directorate for Science and Technology. In addition, we obtained and analyzed federal and agency policies and guidance, contracts for the FFRDCs in our case studies and other supporting documentation such as performance and award fee plans, sponsoring agreements (when separate from contracts), and a variety of audits and reviews. While we did not assess the effectiveness of or deficiencies in specific agencies’ controls, we reviewed agency documentation on incurred cost audits, general auditing controls, single audits, and audited financial statements. We also obtained and analyzed funding data from sponsoring agencies as well as from the National Science Foundation (NSF), which periodically collects and reports statistical information regarding FFRDCs, such as their sponsors, category types, contractors, and funding. While we did not independently verify the data for reliability, we reviewed the NSF's methodology and noted that it reports a 100 percent response rate, no item nonresponse, and no associated sampling errors. 
For FFRDCs in our case study, we conducted on-site visits, interviewed key contractor administrative personnel, and obtained information and documentation on how they meet sponsoring agencies’ research needs and adhere to policy guidance. We observed examples of the types of research the FFRDCs conduct for their sponsors and obtained and analyzed documentation such as contractor ethics guidance and policies, performance plans, and annual reports. To obtain the perspective of the government contracting community, we met with high-level representatives of the Professional Services Council, a membership association for companies that provide services to the U.S. federal government. [Appendix II table: FFRDCs, their operating contractors, and locations; the table’s column structure is not recoverable from the source text.] In addition to the individuals named above, key contributors to this report were John Neumann, Assistant Director; Cheryl Williams, Assistant Director; Sharron Candon; Suzanne Sterling; Jacqueline Wade; and Peter Zwanzig.
In 2006, the federal government spent $13 billion--14 percent of its research and development (R&D) expenditures--to enable 38 federally funded R&D centers (FFRDCs) to meet special research needs. FFRDCs--including laboratories, studies and analyses centers, and systems engineering centers--conduct research in military space programs, nanotechnology, microelectronics, nuclear warfare, and biodefense countermeasures, among other areas. GAO was asked to identify (1) how federal agencies contract with organizations operating FFRDCs and (2) agency oversight processes used to ensure that FFRDCs are well-managed. GAO's work is based on a review of documents and interviews with officials from eight FFRDCs sponsored by the departments of Defense (DOD), Energy (DOE), Health and Human Services (HHS), and Homeland Security (DHS). Federal agencies GAO reviewed use cost-reimbursement contracts with the organizations that operate FFRDCs, and three of the agencies generally use full and open competition to award the contracts. Only DOD consistently awards its FFRDC contracts on a sole-source basis, as permitted by law and regulation when properly justified. FFRDCs receive funding for individual projects from customers that require the FFRDCs' specialized research capabilities. Because FFRDCs have a special relationship with their sponsoring agencies and may be given access to sensitive or proprietary data, regulations require that FFRDCs be free from organizational conflicts of interest. DOD and DOE also have policies that prescribe specific areas that FFRDC contractors must address to ensure their employees are free from personal conflicts of interest. In a May 2008 report, GAO recognized the importance of implementing such safeguards for contractor employees. 
Currently, although DHS and HHS have policies that require their FFRDC contractors to implement conflicts-of-interest safeguards, these policies lack the specificity needed to ensure their FFRDC contractors will consistently address employees' personal conflicts of interest. Sponsoring agencies use various approaches in their oversight of FFRDC contractors, including: (1) reviewing and approving work assigned to FFRDCs, or conducted for other agencies or entities, to determine consistency with the FFRDC's purpose, capacity, and special competency (in this process, only DOD must abide by congressionally imposed annual workload limits for its FFRDCs); (2) conducting performance reviews and audits of contractor costs, finances, and internal controls; and (3) conducting a comprehensive review before a contract is renewed to assess the continuing need for the FFRDC and whether the contractor can meet that need, based on annual assessments of contractor performance. Some agencies have adopted other agencies' FFRDC oversight and management practices. For example, DHS mirrored most of DOD's FFRDC Management Plan--an internal DOD guidance document--in developing an approach to FFRDC oversight, and DHS officials told us they learned from DOE's experience in selecting and overseeing contractors for laboratory FFRDCs. In addition, HHS plans to implement certain DOE practices, including rewarding innovation and excellence in performance through various contract incentives. While agency officials have acknowledged the potential benefits from sharing best practices, there is currently no formal cross-agency forum or other established mechanism for doing so.
Since the late 1950s, the birth rate of women aged 15 to 19 has decreased by about 41 percent overall. (One substantial increase started around the mid-1980s, then reversed itself in the early 1990s). The overall decrease in the teen birth rate parallels the overall decline in the U.S. birth rate as a whole, which has fallen 47 percent over the same time period. In contrast, the proportion of teen births outside of marriage has steadily increased over the same period (1957-95) from 14 percent to 78 percent of all teen births. (See fig. 1.) In 1995, the most recent year for which final data are available, the annual birth rate for women aged 15 through 19 was approximately 57 per thousand, compared with 96 per thousand in 1957 when the rate was at its peak. (See fig. 1.) There was a similar decline in the birth rate for all women over the same time period. The rate fell from 123 per thousand to 66 per thousand for women aged 15 through 44, a decline of 47 percent. While the overall trend in the teen birth rate has been downward, fluctuations have occurred. The most dramatic increase began in 1986 after the teen birth rate had reached 50 per thousand, the lowest point in 40 years. Between 1986 and 1991, the rate increased by 24 percent before starting to decline again. The percentage of births to unmarried teen women has increased substantially over the past several decades. In 1995, 78 percent of teen births were to unmarried women, compared with about 14 percent in 1957. This trend parallels a rise in births outside of marriage for the general population of women. Births to unmarried women of all ages had risen to 32 percent of the total in 1995 from about 5 percent in 1957. Teen birth rates in 1995 varied considerably by race, age, and geography. Rates for black and Hispanic teens were more than double those of white teens, and older teens constituted nearly two-thirds of teens who gave birth in 1995. 
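The percentage declines cited above follow directly from the per-thousand rates. As an illustrative check (this sketch is ours, not part of the report, and uses the report's rounded rates):

```python
# Illustrative check of the percentage declines cited in the report.
# Inputs are the rounded per-thousand birth rates given in the text.

def pct_decline(start_rate, end_rate):
    """Percentage decline from start_rate to end_rate, as a whole percent."""
    return round((start_rate - end_rate) / start_rate * 100)

# Teen birth rate (women aged 15-19): 96 per thousand in 1957 to 57 in 1995
print(pct_decline(96, 57))   # 41, matching the report's "about 41 percent"

# Birth rate for all women aged 15-44: 123 per thousand to 66 per thousand
print(pct_decline(123, 66))  # 46 with these rounded rates; the report's
                             # 47 percent reflects the unrounded underlying data
```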
Higher rates of teen births were found in the southern and southwestern states. In 1995, birth rates for Hispanic and black teens were 107 and 99 per thousand, respectively—more than twice the rate for white teens at 39 per thousand. Black and Hispanic women were also more likely to begin their families at younger ages. Compared with white teens, they were twice as likely to give birth by age 20. In 1995, the birth rates for teen women aged 18 to 19 were more than double the rates for those aged 15 to 17, regardless of race. (See table 1.) A similar pattern is evident among unmarried teens, where older teens had birth rates about double those of younger teenage women. In 1995, teen birth rates were the lowest in the northern states and highest in the South and the Southwest. (See fig. 2.) The states with the lowest rates had 45 or fewer births per thousand teen women while the states with the highest rates had 66 or more births per thousand. The 12 highest rates, which are concentrated in the southern and southwestern states, are 1.5 times the lowest rates in the northern states. A recent analysis of these patterns shows that teen birth-rate variations by geographic area correspond to the racial and ethnic distributions in the United States—higher numbers of blacks and Hispanics live in southern and southwestern states. A comparison of 1990 urban and rural teen birth rates for eight southeastern states shows that rural teen birth rates were higher than urban rates in three of four race and age categories. Among white women aged 15 to 17 and 18 to 19 and black women aged 18 to 19, those who lived in rural areas had higher birth rates than those who lived in urban areas. Only black women aged 15 through 17 had higher rates in urban areas. The study links the higher rural birth rates to a relatively lower use of abortion in rural areas. This profile provides descriptive characteristics of teen mothers who gave birth in the 1990s. 
Of teens who gave birth in 1995, almost half were white and most were age 18 or 19 and unmarried. About two-thirds of teen births were the result of an unintended pregnancy, and many births (21 percent) were a second or later child. About two-thirds of teen mothers graduated from high school; however, teen mothers graduated at substantially lower rates than teen women without children. (See table 2.) Furthermore, teen mothers reported drug use in the past month that was similar to that of other women their age. Also, 28 percent of white teen mothers reported smoking tobacco during their pregnancy, compared with 5 percent of black and Hispanic mothers. Almost half of the 512,000 births in 1995 (233,000) were to white teen mothers. The remainder included an almost even distribution of births between blacks (137,000) and Hispanics (122,000). (See fig. 3.) Births to teen mothers were predominantly to older teens. In 1995, about 60 percent of all children born to teens—married and unmarried—were born to 18-and 19-year-olds. Of the remaining 40 percent born to younger teenage women, most were born to women aged 15 to 17, with just slightly more than 12,000 born to women under age 15. (See table 2 and fig. 4.) About three-fourths of all teenage women who gave birth were unmarried at the time of the birth. Black teen mothers were predominantly unmarried (95 percent), while 68 percent of white and 68 percent of Hispanic teen mothers were unmarried at the time of the birth. In 1995, more than one-fifth of all teen births in the United States were to teenage women who had already given birth to at least one child. (See table 2.) The highest proportions of second or later births were among 18- and 19-year-olds. In this age group, 36 percent of black teen births, 30 percent of Hispanic teen births, and 21 percent of white teen births were a second or later child. 
The chance of the birth being a second or later birth was similar for all teens, regardless of race, age, or marital status. (See table 3.) A high percentage of births to teens in the United States result from unintended pregnancies. Between 1990 and 1995, 65 percent of births to teenage mothers were reported as unintended, whereas about one-third of all U.S. births were reported as unintended in that period. From 1990 to 1995, about 75 percent of births to black teen mothers, 67 percent to white teen mothers, and 46 percent to Hispanic teen mothers were reported as unintended. (See table 2 and fig. 5.) Generally, women who give birth in their teens have substantially lower high school graduation rates than those who do not. A recent education study shows that about 64 percent of teen mothers graduated from high school or earned a general equivalency diploma within 2 years after the time they would have graduated, compared with about 94 percent of teenage women who did not give birth. An older study similarly found that less than 60 percent of teen mothers graduated from high school by age 25, compared with 90 percent of women who did not have a child in their teens. Also, high school completion rates among teen mothers vary considerably by race. Black teen mothers—in both a 1990s study and a 1970s study—had the highest high school completion rates compared with whites and Hispanics. Research shows that a large percentage of teenage mothers eventually become welfare recipients. Data from a 1990 Congressional Budget Office report show that almost half of all teen mothers and three-quarters of unmarried teen mothers received AFDC within 5 years of giving birth. By contrast, only about one-quarter of married teen mothers received AFDC during the same time period. In our 1994 report, we similarly found that women who gave birth as teenagers made up nearly half of the unmarried AFDC caseload. 
Also, survey data from 1995 show that 69 percent of births to teens in a 5-year period were paid for by Medicaid or other government sources. Substance use among teen mothers is comparable to that for other women their age. In a national survey, about one-sixth of teen mothers aged 15 to 19 reported any illicit drug use in the past month, while about one-third reported alcohol and one-third cigarette use during that time. Similar percentages of women without children in those age groups reported using those substances in the past month. Smoking during pregnancy, by contrast, appears lower than teen mothers’ overall rate of smoking: compared with the one-third of teen mothers aged 15 to 17 who reported smoking, about 17 percent of mothers that age who gave birth in 1995 reported smoking while they were pregnant. However, smoking cigarettes during pregnancy varied by race or ethnicity; about 28 percent of white teen mothers reported smoking during pregnancy, compared with about 5 percent of black or Hispanic teenage mothers. Certain social factors, such as the teen’s level of school involvement or family background and income, appear to influence the likelihood that a woman will give birth in her teenage years. Generally, lower school involvement, unstable family structure, and declining family income are associated with an increased likelihood of teen births. According to one study, teens who experienced multiple risk factors such as early school failure, poverty, or family dysfunction were more likely to become teenage mothers. Beyond a few factors, which had similar effects across the groups studied, the impact of other social factors on the likelihood of teen births varied by racial or ethnic group. 
Family instability, such as divorce and remarriage; declining family income, such as with job loss; and lower standardized test scores were associated with an increased likelihood of a teen birth, while family stability, increasing family income, and higher standardized test scores were associated with a reduced likelihood of birth for each group studied. Staying in school and living in two-parent families were associated with a lower risk of birth for white and Hispanic teens but had no effect for black teens. Socioeconomic status (SES) also had a mixed effect across racial groups. Lower SES was associated with an increased likelihood of a teen birth for Hispanic teens and a decreased likelihood for black teens but had no effect on white teens. Higher SES had the opposite effect. Living in female-headed single-parent families was associated with an increased likelihood of a birth for black teens but had no effect for white teens. And only white teens were more likely to become teen mothers if their mothers had also been teen mothers. (See fig. 6.) Research indicates a link between school involvement and teen births. A national study of girls who were eighth-graders in 1988 found several measures of school involvement, including dropping out, were associated with a greater risk of a subsequent teen birth. However, only one measure—lower standardized test scores—was consistently associated with an increased risk of a teen birth in all racial and ethnic groups. Other measures, such as lower grades or limited postsecondary education plans, were associated with an increased likelihood of a teen birth for one or more races but not for all. For example, lower grades in school were associated with an increased likelihood of a school-aged pregnancy leading to a birth for white and black teens. (See fig. 6.) Teenage women who dropped out of school were more likely than those who stayed in school to become pregnant and give birth in their teens. 
However, an association between dropping out of school and teen pregnancy was observed only among whites and Hispanics. After controlling for family background and measures of school involvement and performance, white and Hispanic teens who dropped out of school were about 1.5 times more likely to become a teenage mother than white and Hispanic teens who stayed in school. For black teens, drop-out status had no effect on teen pregnancy. Moreover, of school-age teens who gave birth, more than one quarter (28 percent) dropped out of school prior to pregnancy; an additional 30 percent dropped out after the pregnancy or birth of a child, and 42 percent stayed in school. These findings are consistent with those of a study of teen experiences in the 1970s and early 1980s. Limited postsecondary education plans were associated with a greater likelihood of a school-aged birth for black and Hispanic teens. Descriptive studies have generally found a lower risk of teen birth in two-parent families than with other family types. A study of the effects of changes in family structure—such as divorce, appearance of a stepparent, going to live with grandparents or in an institution—on teen women found that the greater the number of such changes, the greater the probability of an early teen birth, regardless of family income. (See fig. 6.) The impact of family structure or family instability, however, varied by race or ethnicity. For example, one study found that being born into and reared through early childhood in a single-parent family headed by a woman was associated with higher likelihood of a birth for black teens but not for white teens. Another recent study found that living in a two-parent “intact” family during the eighth grade was associated with less risk of birth for white and Hispanic teens—but not for black teens. (See fig. 6.) Another factor associated with teen births only among white teens was having a parent who was also a teenage mother. 
Some descriptive research suggests that teens from lower-income families have a greater likelihood of having a teen birth than teens from higher-income families. However, recent multivariate analysis shows that the effect of SES on teen births varies by race and ethnicity. For example, a descriptive analysis of 1988 eighth-graders found that less than 7 percent of those from families with high incomes had had a child by the age of 20, compared with about 37 percent of teenage women from low-income families. However, after controlling for a number of family background characteristics, lower SES was associated with an increased risk of teen pregnancy for Hispanics and a lower risk for blacks but had no effect for whites. (See fig. 6.) Higher SES had the opposite effect for Hispanics and blacks. An analysis of earlier data (1970s and early 1980s), which also controlled for a number of family background characteristics, found a relationship between a decline in family income and the risk of teen births. For example, job loss or other types of income losses were associated with a higher likelihood of a birth among black and white teens. (See fig. 6.) External experts on the data presented reviewed a draft of this report, and we included their comments where appropriate. As agreed with your office, unless you publicly announce its contents earlier, we will make no further distribution of this report until 30 days from its issue date. At that time we will send copies to the Secretary of Health and Human Services and other interested parties. We will also make copies available to others upon request. Major contributors were James O. McClyde, Assistant Director, and Barbara Chapman, Evaluator-in-Charge. Please contact me on (202) 512-7119 if you or your staff have any questions about this report. We used studies based primarily on nationally representative data sources to profile mothers who gave birth before age 20. 
We relied primarily on two types of data sources: national birth certificate information and the most current analyses and data tables from longitudinal surveys and other recent surveys. The national birth certificate data—collected by states and then transmitted to the National Center for Health Statistics for processing and publication—provides comprehensive information on U.S. birth rates over time. Much of the information in this report—including birth rates and trends, marital status, first or later birth, and tobacco use during pregnancy—was derived or calculated from the published 1995 natality statistics. For example, we calculated the percentage of teen births that were second or later births by racial and ethnic group. We requested that Substance Abuse and Mental Health Services Administration (SAMHSA) do a special analysis of data from the National Household Survey on Drug Abuse (NHSDA) 1994-96 in order to compare the drug use of teen mothers with that of teen women without children. To further develop a profile and identify factors associated with teen motherhood, we reviewed studies of nationally representative databases that link information regarding a teen birth to a mother’s education and family background. Specifically, we used the National Longitudinal Survey of Youth (NLSY) launched in 1979, which surveyed a sample of 14- to 21-year-olds and reinterviewed them annually. A more recent survey, the National Education Longitudinal Study of 1988 (NELS:88), followed a nationally representative sample of eighth graders to 1994. We used data from this more recent cohort, particularly in the discussion of education-related issues. We obtained additional information from the National Survey of Family Growth (NSFG), conducted in 1995, as well as several of the NHSDAs done in the 1990s and studies that used them. (See table I.1.) 
Limitations and lack of comparability among the various data sources restricted our ability to make comparisons or report by race and marital status in some cases. Because information was more readily available on teen mothers as a whole, and three-quarters of teen births in 1995 were to unmarried teens, we often present data on all teen mothers in lieu of specific information on unmarried teen mothers. With few exceptions, the information we present represents the experiences of U.S. teen women in the 1990s. The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are accepted, also. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. Orders by mail: U.S. General Accounting Office, P.O. Box 37050, Washington, DC 20013; or visit: Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW), U.S. General Accounting Office, Washington, DC. Orders may also be placed by calling (202) 512-6000 or by using fax number (202) 512-6061, or TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO provided social and demographic information about teen mothers, focusing on: (1) trends in birth rates for teens; (2) a profile of teen mothers; and (3) factors, such as education or family background, that may influence the likelihood of teen motherhood. GAO noted that: (1) although the birth rate for teenage women decreased 41 percent from the late 1950s to 1995--paralleling the decline in the U.S. birth rate--the number of babies born to teenagers is still high; (2) births to unmarried teenage mothers, however, more than quintupled as a proportion of total teen births over the same period; (3) as of 1995, the teen birth rate was about 57 per thousand; however, rates varied considerably by subgroup; (4) the birth rates for black and Hispanic teenage women are more than twice those for white teens; (5) in 1995, nearly half of teen mothers were white and most were aged 18 to 19 and unmarried; (6) about two-thirds of recent teen mothers did not intend to get pregnant or have a child; however, about one-fifth of women who gave birth already had one child; (7) teenage mothers also graduate from high school at lower rates than all teen women; (8) 64 percent of teen mothers complete high school, compared with about 90 percent of all teen women; (9) research studies that have examined the antecedents of teen motherhood have shown that limited involvement in school and some family background characteristics--such as family instability and declines in family income--are associated with increased likelihood of teen motherhood; and (10) the effect of most factors varies among racial and ethnic groups.
In January 2006, to better align foreign assistance programs with U.S. foreign policy goals, the Secretary of State appointed a Director of Foreign Assistance with authority over all State and USAID foreign assistance funding and programs. In working to reform foreign assistance, the Director’s office, State/F, has taken a number of steps to integrate State and USAID foreign assistance processes. These steps have included, among others, integrating State and USAID foreign assistance budget formulation, planning and reporting processes. As part of the reform, State/F, with input from State and USAID subject matter experts, developed the Foreign Assistance Framework, with its five strategic objectives, as a tool for targeting U.S. foreign assistance resources; instituted common program definitions to collect, track, and report on data related to foreign assistance program funding and results; and created a set of standard output-oriented indicators for assessing foreign assistance programs. State/F also instituted annual operational planning and reporting processes for all State and USAID operating units. Moreover, State/F initiated a pilot program for developing 5-year country assistance strategies intended to ensure that foreign assistance provided by all U.S. agencies is aligned with top foreign policy objectives in a given country. These integrated processes are supported by two data information systems, known as the Foreign Assistance Coordination and Tracking System (FACTS) and FACTS Info. In July 2009, the Secretary of State announced plans to conduct a Quadrennial Diplomacy and Development Review, intended in part to maximize collaboration between State and USAID. According to State, this review will identify overarching foreign policy and development objectives, specific policy priorities, and expected results. 
In addition, the review will make recommendations on strategy, organizational and management reforms, tools and resources, and performance measures to assess outcomes and—where feasible—impacts of U.S. foreign assistance. The review will be managed by a senior leadership team under the direction of the Secretary of State and led by the Deputy Secretary for Management and Resources, with the Administrator of USAID and the Director of Policy Planning serving as co-chairs and with senior representation from State and USAID. Although State has not announced a formal time frame for producing a final report of the review’s results, a senior State official indicated that the process would likely produce initial results in early 2010. Under the Foreign Assistance Framework developed by State/F in 2006, the strategic objective Governing Justly and Democratically (GJD) has four program areas—“Rule of Law and Human Rights,” “Good Governance,” “Political Competition and Consensus-Building,” and “Civil Society”—each with a number of program elements and subelements. State/F’s information systems, FACTS and FACTS Info, track funding allocated for assistance in support of GJD and these four program areas. Table 1 shows the four program areas and associated program elements. In fiscal years 2006 through 2008, funds allocated for the GJD strategic objective were provided for democracy assistance programs in 90 countries around the world. Almost half of all democracy funding over this period was spent in Iraq and Afghanistan; the next highest funded countries, Sudan, Egypt, Mexico, Colombia, and Russia, accounted for more than 25 percent of the remaining GJD funding allocated to individual countries other than Iraq and Afghanistan. Of the 20 countries with the largest GJD allocations, 8 have been rated by Freedom House, an independent nongovernmental organization, as not free; 8 have been rated as partly free; and 4 have been rated as free.
Figure 1 illustrates the worldwide distribution of GJD funding, and table 2 shows funding levels and Freedom House ratings for the 20 countries with the largest allocations. USAID, State DRL, and NED fund democracy assistance programs in countries throughout the world. USAID’s and State DRL’s foreign assistance programs are funded under the Foreign Operations appropriation and tracked by State as part of GJD funding, while NED’s core budget is funded under the State Operations appropriation and is not tracked as part of GJD foreign assistance funding. U.S. Agency for International Development. In fiscal years 2006 through 2008, USAID democracy programs operated in 88 countries worldwide. USAID’s Office of Democracy and Governance, based in Washington, D.C., supports USAID’s democracy programs worldwide, but these programs are primarily designed and managed by USAID missions in the field. USAID democracy programs cover a large variety of issues, including media, labor, judicial reforms, local governance, legislative strengthening, and elections. USAID programs are managed by technical officers, typically based in missions in the field, who develop strategies and assessments, design programs, and monitor the performance of projects by collecting and reviewing performance reports from implementing partners and conducting site visits, typically at least monthly. Bureau of Democracy, Human Rights, and Labor. State DRL implements the Human Rights and Democracy Fund, established in fiscal year 1998, providing grants primarily to U.S. nonprofit organizations to strengthen democratic institutions, promote human rights, and build civil society mainly in fragile democracies and authoritarian states. In fiscal years 2006 through 2008, State DRL’s programs operated in 66 countries worldwide. According to State, State DRL strives to fund innovative programs focused on providing immediate, short-term assistance in response to emerging events.
In addition, State DRL can also fill gaps in USAID democracy funding (see app. II). Unlike USAID, State DRL manages its democracy grant program centrally. State DRL’s Washington-based staff monitor these grants by collecting and reviewing quarterly reports from grantees and conducting site visits, typically through annual visits to participating countries. National Endowment for Democracy. In 1983, Congress authorized initial funding for NED, a private, nonprofit, nongovernmental organization. NED’s core budget is funded primarily through an annual congressional appropriation and NED receives additional funding from State to support congressionally directed or discretionary programs. The legislation recognizing the creation of NED and authorizing its funding, known as the NED Act, requires NED to report annually to Congress on its operations, activities, and accomplishments as well as on the results of an independent financial audit. The act does not require NED to report to State on the use of its core appropriation; however, State requires NED to provide quarterly financial reporting and annual programmatic reporting on the use of the congressionally directed and discretionary grants it receives from State. NED funds indigenous partners with grants that typically last for about a year. NED monitors program activities through quarterly program and financial reports from grantees and site visits, performed on average about once per year, to verify program and budgetary information. About half of NED’s total annual core grant funding is awarded to four affiliated organizations, known as core institutes. The remaining funds are used to provide hundreds of grants to NGOs in more than 90 countries to promote human rights, independent media, rule of law, civic education, and the development of civil society in general. 
State/F information systems show allocations of approximately $2.25 billion in GJD funding to operating units in fiscal year 2008, with about 85 percent of this amount allocated for State and USAID field-based operating units, primarily country missions. The estimated average annualized funding for democracy assistance projects active in our 10 sample countries as of January 2009 was $18 million for USAID, $3 million for State DRL, and $2 million for NED. In fiscal year 2008, more than half of State DRL funding for democracy assistance went to Iraq, followed by China, Cuba, Iran, and North Korea, and most NED funding for democracy programs went to China, Iraq, Russia, Burma, and Pakistan. Data from State/F information systems, which report GJD allocations by operating unit, indicate that most GJD funding allocated in fiscal year 2008 went to country programs. The State/F systems show that, of more than $2.25 billion allocated for GJD in fiscal year 2008, approximately $306 million, or almost 15 percent, went to operating units in Washington, D.C., including USAID and State regional and functional bureaus and offices such as State DRL. More than $1.95 billion, or about 85 percent of the total allocation, was allocated to field-based operating units, primarily country missions. (See fig. 2 for the allocation of GJD funding by type of operating unit for fiscal year 2008. See app. IV for a list of Washington, D.C.-based and field-based operating units that received GJD funds in fiscal years 2006-2008.) Figure 3 shows the distribution of democracy assistance funding for the four GJD program areas. Although State/F information systems enable reporting of democracy assistance allocations to operating units and by program area, these systems do not include funding information by implementing entity for the years we reviewed—fiscal years 2006 through 2008.
Consequently, State/F data on GJD funding allocations to implementing entities— including the portion of allocations to field-based operating units that is programmed by each implementing entity—are not centrally located. However, in response to our request for information on USAID democracy assistance funding, State/F and USAID compiled data provided by USAID missions on GJD funding allocated to USAID for most country-based operating units for fiscal years 2006 through 2008. According to these data, USAID implements the majority of the democracy funding provided in most countries. In addition, State/F data show that the largest portion of GJD funding in fiscal year 2008 was allocated for the Good Governance program area (see fig. 3). (App. II shows amounts of USAID, State DRL, and NED funding distributed to all countries in fiscal years 2006-2008 as well as each country’s Freedom House rating.) Estimated average annualized funding for all active democracy assistance projects in the 10 sample countries was about $18 million per year for USAID (78 percent of the total estimated average annual funding for all three entities), $3 million for State DRL, and $2 million for NED. Annualized funding per project averaged more than $2 million for USAID; more than $350,000 for State DRL; and more than $100,000 for NED. Project length averaged 3 years for USAID, 2 years for State DRL, and 1 year for NED (see fig. 4). According to award data for USAID, State DRL, and NED, USAID provided the majority of funding for democracy assistance projects that were active as of January 2009 in 9 of the 10 sample countries (see fig. 5). USAID funding ranged from 10 to 94 percent, with a median of 89 percent, of the three entities’ total democracy assistance funding in each country. 
USAID’s country-based missions are typically responsible for developing democracy assistance activities based on country-specific multiyear democracy assistance strategies, which they develop in the field with input from embassy officials as well as USAID and State offices in Washington, D.C. Once the strategic plan is approved, individual programs are designed to fit into the overall priorities and objectives laid out in the strategic plan. This program design includes the procedures to select the implementer and to monitor and evaluate program performance. USAID missions typically collaborate with the USAID Office of Democracy and Governance to develop and carry out in-depth democracy and governance assessments to help define these strategies. These assessments are intended to identify core democracy and governance problems and the primary actors and institutions in a country. For example, the USAID mission in Indonesia conducted a democracy and governance assessment in June 2008, which formed the basis for a new 5-year democracy and governance strategy for 2009 to 2014. The assessment, which was commissioned by the USAID Office of Democracy and Governance and conducted by an outside contractor, involved consultation with more than 100 Indonesian government officials, civil society representatives, local academics, and other international donors involved in democracy and governance in Indonesia. USAID democracy activities vary in each country, according to the operating environment, needs and opportunities. For example, as of January 2009, USAID’s democracy assistance portfolio in Lebanon amounted to $24.3 million on an annual basis. The majority of this funding—65 percent—was awarded for Good Governance activities such as assistance to the Lebanese Parliament, and programs to improve service delivery through municipal capacity building. 
In Indonesia, about 70 percent of USAID funding for projects active in January 2009 was for Good Governance–related assistance to help the Indonesian government with a major effort to decentralize its government. Conversely, in Russia, where USAID does not work closely with the Russian government, over 50 percent of USAID funding supported Civil Society programs and only about 13 percent of funding supported active projects in the area of Good Governance. USAID implements approximately half of the value of its democracy programs using grants and the remaining half using contracts. Worldwide, per-project funding for USAID democracy contracts tends to be much higher than for grants; in fiscal year 2008, democracy contract funding averaged about $2 million per project and democracy grant funding averaged almost $850,000 per project. However, USAID implements more than twice as many projects with grants as with contracts; thus, although individual contracts are larger, USAID democracy funding is fairly evenly split between contracts and grants. In fiscal year 2008, about 53 percent of USAID democracy funding was implemented through contracts and 47 percent through grants. Table 3 shows USAID’s average global funding for democracy contracts and grants in fiscal year 2008. State DRL funded democracy programs in more than 30 countries in a variety of program areas in fiscal year 2008, spending 57 percent of its funds in Iraq and 28 percent in China, Cuba, Iran, and North Korea. Funds managed by State DRL totaled $157 million in fiscal year 2008, $75 million of which was allocated through a supplemental appropriation for democracy programs in Iraq. Only a small portion of State DRL-managed funding in that year—$13 million of $157 million—was discretionary; most of the funding was congressionally directed for specific countries or issues.
In planning resource allocations as well as solicitations for statements of interest and requests for proposals from NGOs, State DRL staff members consult with USAID and State regional bureaus, and review country mission strategic plans and operational plans, according to a State DRL official. Proposals are reviewed by a 7-person panel, which includes representatives from State DRL, USAID, and State regional bureaus. According to a State DRL official, the bureau does not prepare country strategies for its democracy grant program because funding levels are relatively small for most countries and fluctuate from year to year. NED funded democracy programs in more than 90 countries in fiscal year 2008, spending 28 percent of its funds on programs in China, Iraq, Russia, Burma, and Pakistan. Unlike USAID and State DRL, NED allocates democracy funds relatively evenly across many countries, with average per-country funding of almost $1 million in fiscal year 2008. In fiscal year 2008, NED’s funding allocation for democracy programs totaled $118 million. NED makes programming decisions on specific projects in the context of its current 5-year strategic plan, published in 2007, and an internal annual planning document. For each region of the world, the annual planning document identifies regional priorities and critical sectors—such as human rights and freedom of information—in which to target assistance. According to a NED official, NED solicits proposals from NGOs every quarter. After grant proposals are received, NED conducts an internal review and the proposals that are selected are presented to the NED board of directors for approval. Figure 6 shows the countries where State DRL and NED, respectively, allocated the largest amounts for democracy programs in fiscal year 2008. 
To help ensure complementary programming and avoid duplication in their respective democracy assistance programs, State DRL invites USAID missions to review State DRL proposals for democracy assistance projects. In addition, State DRL officials sometimes participate in USAID missions’ planning for democracy assistance projects. However, USAID and State DRL officials are often not aware of NED democracy assistance projects, and although NED is not required to report on all of its democracy assistance projects, State DRL officials and USAID mission officials said that information on all NED’s active projects would be useful in ensuring coordinated assistance. USAID officials participate in embassy working groups or committees that review democracy assistance projects, among others, to ensure that projects are complementary. State DRL—which manages its democracy grant program centrally, without embassy-based staff—solicits feedback from USAID missions in both the development of State DRL’s solicitations for democracy programs and the resulting project proposals from NGOs. As part of State DRL’s formal process for evaluating democracy assistance project proposals, USAID and State regional bureau representatives participate in State DRL’s project review panels and vote on proposals, conveying feedback from USAID country missions and embassies as to whether project proposals complement or duplicate ongoing democracy assistance efforts of USAID and other State entities. USAID officials at the 10 missions we contacted generally agreed that this process helps to ensure complementary programming between State DRL and USAID. In just one instance, a USAID mission official remarked that a review panel had approved a State DRL proposal for civil society training that could duplicate an existing USAID project.
According to a State DRL official, the review panels take into account the missions’ and embassies’ feedback but may vote to approve a project on the basis of other factors. In addition, State DRL officials are involved in some aspects of USAID missions’ democracy assistance planning. State DRL officials who manage the bureau’s democracy grants participate with USAID’s Office of Democracy and Governance in providing input on democracy funding levels as a part of the budget formulation process and have the opportunity to review and comment on all country operational plans, according to State officials. State officials also noted that State DRL as a bureau is involved in many strategic discussions about democracy assistance that is provided through bilateral programs; however, State DRL officers generally are not involved in USAID missions’ planning for democracy assistance projects. According to State DRL officials responsible for grants in our 10 sample countries, increased integration into USAID’s planning process would better inform State DRL programming decisions and ensure better coordination between State and USAID. State DRL officials noted that this would also increase the opportunity for State DRL to share its expertise as the bureau responsible for U.S. human rights and democracy policy. However, State DRL and USAID officials commented that increasing the level of coordination between State DRL’s staff and USAID missions in USAID’s planning process could be challenging, because State DRL staff typically have resources to travel to countries only once per year as part of their grant oversight duties. According to USAID officials, USAID selects its projects based on multiyear democracy assistance strategies developed at country- based missions; the development of individual USAID democracy assistance projects and selection of implementing partners also generally takes place at the missions. 
USAID mission officials also noted that their review process for selecting implementing partners, which takes place in the field, generally lasts 10 to 15 days. In addition, a State/F official observed that for most countries, State DRL’s level of funding for its grant program would likely be too small to justify the additional staff time necessary for increasing their involvement in USAID’s mission-based planning processes. Despite the challenges related to State DRL involvement in USAID planning, we found that USAID missions included State DRL staff in joint planning activities for 2 of our 10 sample countries. For example, the USAID mission in Russia invited a State DRL official to participate in an interagency visit to the country in 2008 to review current U.S. democracy assistance efforts and consider areas for future programming. The State DRL official involved in the visit noted that this effort helped her identify potential areas where State DRL could target its assistance to complement USAID’s larger, longer-term democracy program. In China—the only country in our sample where State DRL funds a larger portfolio of democracy projects than does USAID—a State DRL official participated in vetting proposals for a USAID Rule of Law project in China that began in 2006. The State DRL official did not participate in planning the solicitation for the proposals, and USAID did not invite State DRL to participate in its planning or proposal vetting for subsequent Rule of Law projects in China. More recently, State DRL and USAID staff met with embassy staff in Beijing to collaborate on their respective democracy assistance programs. However, according to a State DRL official, it is not clear what role State DRL will have in USAID’s future strategic planning process for assistance in China or in reviewing USAID’s future democracy project proposals there.
The development of joint State-USAID country assistance strategies (CAS), which State/F is piloting as part of its foreign aid reform efforts, is expected to improve coordination of State and USAID foreign assistance, according to State/F officials. However, as we reported in April 2009, the CAS, unlike USAID’s country strategies, contains only high-level information, which could limit its impact on interagency collaboration. State piloted this new strategic planning process in 10 countries in fiscal year 2008 and was reviewing the results of the pilot as of August 2009. Consequently, according to State and USAID officials, it is not yet clear what form the new process will take; it also is not clear whether or how the process may affect interagency coordination of democracy assistance efforts. USAID and State DRL officials responsible for managing democracy assistance in our 10 sample countries have often lacked basic information about NED’s democracy projects, which they believe would be useful in ensuring coordinated assistance. No mechanism currently exists for the routine sharing of information on NED’s core-funded projects outside the Europe and Eurasia region. In 4 of our 10 sample countries, USAID mission officials told us that they were not aware of NED-supported activities in the country, despite the presence of several active NED projects. Several USAID mission officials stated that more knowledge of NED’s projects would be useful for ensuring that U.S.-supported assistance is well coordinated. State DRL officials responsible for planning and managing democracy grants in 7 of the 10 sample countries also told us that they were not aware of NED’s current projects, and State DRL officials responsible for managing projects in 5 of these 7 countries said that receiving timely information on NED’s projects would improve coordination and help reduce the possibility of duplicative programming. 
In particular, State DRL officials stated that knowledge of NED’s activities in a given country would help inform their own planning decisions regarding which projects to support. State has access to NED’s annual report to Congress on its core grant activities. However, State DRL officials noted that they cannot rely on this report for complete information about NED’s activities, because the report may exclude many projects that go into effect after the report is published. Although NED is under no obligation to report to State on the projects it funds with its core U.S. appropriation, NED does provide information on its core-funded and non-core-funded projects to State in some instances. For example, in addition to its annual report, NED provides quarterly updates on both proposed and active projects in former Soviet Union and Eastern Europe countries to State’s Office of the Coordinator of U.S. Assistance to Europe and Eurasia (EUR/ACE). EUR/ACE officials stated that they circulate information on NED’s proposed and active projects to the relevant USAID missions and U.S. embassies, as well as to Washington counterparts in DRL and regional State and USAID bureaus, to keep them informed and that they also solicit any feedback that might be useful to NED on an advisory basis only. EUR/ACE officials noted that because EUR/ACE exists expressly to coordinate all foreign assistance in its geographic regions, staff resources are available to collect and disseminate this information; according to these officials, other geographic State bureaus may not have access to such resources. NED officials told us that, although there is no mechanism for routine information sharing on NED projects, NED provides information to State and USAID when asked. NED officials also said that the organization does not oppose sharing with State or USAID information on projects that the NED board has approved.
The officials stated that NED would be willing to provide project information routinely if State or USAID deemed it useful. However, NED and State officials also indicated that any attempt to increase NED’s sharing of information with State DRL should be designed to minimize additional administrative burden and avoid straining State DRL’s available staff resources. USAID mission and embassy officials involved in democracy assistance in our 10 sample countries collaborate regularly, typically through working groups or committees at posts. For example, in Indonesia, an anticorruption working group that includes USAID, Department of Justice, and State officials from the embassy’s political and economic sections meets monthly at the embassy. According to USAID officials, this group has discussed various anticorruption-related programs to ensure that their efforts are complementary. The embassy in Indonesia also convenes a parliamentary working group, a counterterrorism and law enforcement working group, and an ad hoc working group on elections involving many of the same representatives. In addition, during our review of 10 sample countries, USAID officials in Russia told us of a working group that meets at the embassy to coordinate all U.S. foreign assistance, including democracy assistance. Also, according to State officials, the embassies in Lebanon and Kosovo have each established a staff position devoted to coordinating U.S. assistance. The State officials noted that these staff have facilitated interagency coordination among the various U.S. programs involved in democracy assistance in these countries. In addition to participating in embassy-based interagency working groups and committees, mission officials also reported regularly collaborating, both informally and formally, with State officials at posts such as political and public affairs officers. 
In particular, in our survey of 31 USAID mission officials responsible for managing democracy assistance projects, 25 officials identified collaboration with the embassy political section, 21 officials identified collaboration with the embassy public affairs section, and 10 officials identified collaboration with the embassy law enforcement section as being at least somewhat important to their current projects. Our survey responses also indicated that State officials often reviewed USAID democracy project proposals. Specifically, 13 respondents identified the embassy political section as being somewhat, moderately, or very involved in reviewing USAID’s democracy project proposals. Six respondents identified the embassy public affairs section, and two respondents identified the embassy law enforcement section, as being at least somewhat involved in reviewing the proposals. USAID uses standard indicators to report quantitative information on immediate results of its democracy assistance programs and develops additional custom indicators to assess specific projects. In addition, USAID sometimes commissions longer-term independent evaluations of program impact. USAID reported taking several actions to improve its evaluation capacity in response to the 2008 National Research Council study that the agency commissioned. USAID uses standard indicators to assess and report the outputs—that is, numbers of activities and immediate results—of its democracy assistance programs. State/F developed the standard indicators with input from subject matter experts in DRL and USAID’s Office of Democracy and Governance. The indicators, which are linked to State/F’s program objectives, areas, and elements, are intended to facilitate the aggregating and reporting of quantitative information common to foreign assistance programs across countries. For the GJD program areas, there are 96 element-level standard indicators (see table 4 for examples).
USAID uses the standard indicators in performance reports that summarize project activities, achievements, and difficulties encountered. According to USAID officials, in addition to using these standard indicators to measure program outputs, USAID uses custom indicators for virtually every project to measure program outputs, outcomes, and impacts that are not captured by the standard indicators. Some USAID officials we spoke with informed us that they use project-specific custom indicators that are more outcome focused than the standard indicators. For example, USAID’s Jordan mission uses customized project indicators associated with each GJD program area; for the program area Good Governance, one such indicator is “improved capacity of the legislative branch and elected local bodies to undertake their stated functions.” Of the USAID technical officers we surveyed, more than two-thirds (22 of 31) said that custom indicators were very useful for monitoring and evaluating projects and assessing impact. USAID management officials also noted the importance of custom indicators in assessing the impact of democracy assistance projects. To complement the data collected with the standard and custom indicators, USAID also commissions some independent evaluations of the longer-term impact of its democracy assistance, although such evaluations are relatively infrequent. State/F’s and USAID’s March 2009 joint guidelines for evaluating foreign assistance state that mission staff may decide whether and when to commission evaluations, based on management needs among other considerations. Evaluations of USAID assistance efforts have decreased in frequency since the mid-1990s. In 1995, USAID eliminated a requirement that every major foreign assistance project undergo midterm and final evaluations; according to USAID officials, the requirement was eliminated because evaluating every project was seen as too resource intensive relative to the value added.
As a result of this change in policy, the number of evaluations across all areas of development assistance dropped from approximately 340 in 1995 to about 130 in 1999, according to a 2001 review. Our analysis of documentation from the 10 sample countries shows 7 independent evaluations commissioned in fiscal years 2006 through 2008. Some USAID mission officials we met with noted that they conducted few independent evaluations of democracy assistance because of the resources involved in the undertaking and the difficulty of measuring impact in the area of democracy assistance. For example, one technical officer responded in our survey that “behavior change is difficult to measure and change in democracy is not seen overnight. It is a long process difficult to measure.” In addition, senior USAID officials we spoke to in the three countries we visited stated that it is difficult to demonstrate causality between projects and improvements in a country’s democratic status. On the other hand, USAID mission officials in all of our 10 sample countries stated that evaluations are useful for monitoring, evaluation, and identifying lessons learned. In addition, six of the eight technical officers in our survey who addressed the usefulness of independent evaluations said that such evaluations are either very or moderately useful for monitoring and evaluation. USAID officials at headquarters as well as at several missions we contacted told us that because of the infrequency of independent evaluations, USAID missions use, as a proxy for such evaluations, internal program assessments of a country’s need for democracy programming (called sector and subsector assessments). More than half of the USAID technical officers we surveyed said that they found these assessments moderately or very useful in monitoring and evaluating their current projects.
The three overall sectorwide assessments that we reviewed—for Kosovo, Indonesia, and the Democratic Republic of the Congo—follow the assessment structure recommended in USAID guidance, which emphasizes strategic recommendations rather than program performance results. In line with that guidance, these assessments provide general, high-level comments on program results, rather than evaluative information, and do not include either evidence supporting the results statements or references to evaluation documents. We also examined 10 subsector assessments (not subject to the sector assessment guidance). Three of the 10 included significant information about the results of specific programs, while others included no reference or only a brief reference to the results or outcomes of specific USAID democracy projects. Recognizing the need for evaluations of its democracy assistance programs’ impacts, in 2008 USAID commissioned a review of its program evaluation practices and problems by the National Research Council. The report found the following:
- USAID has lost much of its capacity to assess the impact and effectiveness of its programs.
- The number of evaluations undertaken by USAID has declined.
- The evaluations undertaken generally focus on implementation and management concerns and have not collected the data needed for sound “impact” evaluations.
- Most current evaluations do not provide compelling evidence of the impacts of the programs.
- Most evaluations do not collect data that are critical to making the most accurate and credible determination of project impacts.
- Most evaluations, however, tend to be informative and serve varied purposes for project managers.
The National Research Council report outlines techniques for improving the monitoring and evaluation of projects, developing methodologies for retrospective case studies, and other means of collecting and analyzing data that will allow USAID to more reliably gauge impact and improve strategic planning and programming decisions. Following the release of the report, the USAID Office of Democracy and Governance formed an internal initiative to determine how to implement the report’s recommendations. According to USAID data provided to GAO, the office had taken several actions in response to these recommendations as of June 2009. Table 5 shows the National Research Council’s recommendations and USAID’s reported actions. Democracy promotion is one of five strategic objectives for U.S. foreign assistance. Given the need to maximize available resources to pursue this important objective, coordination among the entities providing democracy assistance is essential to ensure that these efforts are complementary and not duplicative. USAID and State DRL have processes in place to facilitate coordination of their programs—for example, State and USAID officials in the field review State DRL project proposals to minimize duplication, and USAID officials regularly participate in interagency meetings with embassy officials to help ensure that their agencies’ democracy-related projects are complementary. However, lacking access to current information about NED’s activities, State and USAID officials are constrained in their efforts to fully coordinate their activities with NED’s in the many countries where they and NED each provide democracy assistance. Although NED is not required to report to State on all of its activities, NED regularly shares useful information with State regarding democracy projects in the former Soviet Union and Eastern Europe, and NED indicated its willingness to also routinely provide information on its projects in other countries.
To enhance coordination of U.S.-funded democracy assistance efforts, and in support of the Department of State’s first Quadrennial Diplomacy and Development Review, we recommend that the Secretary of State and the USAID Administrator, while recognizing NED’s status as a private nonprofit organization, work jointly with NED to establish a mechanism to routinely collect information about NED’s current projects in countries where NED and State or USAID provide democracy assistance. USAID, State, and NED provided written comments regarding a draft of this report, which are reprinted in appendixes V, VI, and VII, respectively. State also provided technical comments separately, which we incorporated as appropriate. In its written comments, USAID agreed with our recommendation, noting that its country missions and Bureau for Democracy, Conflict, and Humanitarian Assistance would benefit from information on current NED projects. USAID also noted that the current coordination mechanism in State’s Europe and Eurasia Bureau appears to be effective and may serve as a model for worldwide efforts. In our report, we highlight the important role of that bureau’s Office of the Coordinator of U.S. Assistance to Europe and Eurasia, which exists expressly to coordinate all foreign assistance in its geographic regions, but note that other geographic State bureaus may not have access to the resources available to this office. USAID’s written comments suggested several additions to our report’s description of the agency’s planning and evaluation processes; we incorporated these suggestions as appropriate. State also concurred with our recommendation. State responded that improved coordination with NED could enhance the effectiveness of U.S. democracy assistance and agreed to work with USAID and NED to assess how to develop a cost-effective and sustainable process for doing so. 
State also noted that coordination and information sharing have improved in recent years as a result of foreign assistance reform efforts and that State DRL includes relevant U.S. agencies in its planning and program solicitation process. NED concurred with our recommendation as well, noting that sharing information about its programs with other providers of democracy assistance helps avoid duplication of effort and also helps providers develop their program-related strategies. NED stated that a mechanism for collecting information on its current projects should be designed to minimize additional administrative burden and avoid straining staff resources on all sides. In addition, NED highlighted the monitoring and evaluation efforts it undertakes and referred to its March 2006 report to Congress, Evaluating Democracy Promotion Programs, which we also cite in our report’s discussion of challenges associated with assessing the impact of democracy assistance. We are sending copies of this report to interested congressional committees, the Secretary of State, the Acting Administrator of USAID, and other interested parties. In addition, this report is available on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact David Gootnick at (202) 512-3149 or gootnickd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Individuals who made key contributions to this report are listed in appendix VIII. Our objectives were to (1) describe democracy assistance funding provided by the U.S. 
Agency for International Development (USAID), the Department of State’s Bureau of Democracy, Human Rights, and Labor (State DRL), and the National Endowment for Democracy (NED) in fiscal year 2008; (2) examine USAID, State DRL, and NED efforts to coordinate their democracy assistance activities to ensure complementary programming; and (3) describe USAID efforts to assess results and evaluate the impact of its democracy assistance activities. To accomplish our objectives, we analyzed funding, planning, and programmatic documents describing U.S. democracy assistance activities provided by USAID, State DRL, and NED in fiscal years 2006 through 2008. We conducted audit work in Washington, D.C., and in three countries: Indonesia, Jordan, and Russia. We also collected information on democracy programs in the following seven additional countries: China, Democratic Republic of the Congo, Haiti, Kosovo, Lebanon, Nicaragua, and Pakistan. In total, we collected detailed information on U.S. democracy programs in 10 countries. We selected these 10 countries to reflect geographic diversity and provide examples of countries with significant levels of U.S. funding for the strategic objective Governing Justly and Democratically (GJD) and that have multiple U.S. or U.S.-funded entities providing democracy assistance, such as USAID, State DRL, and NED. However, this sample of 10 countries is not intended to be representative of all countries receiving U.S. democracy assistance. Moreover, we did not include Iraq and Afghanistan in our sample, despite the very large levels of U.S. democracy assistance funding provided there, because of the unique circumstances in these two countries. In the three countries we visited, we met with USAID officials responsible for democracy assistance programs, selected non-governmental organizations receiving USAID, State, and NED grants or contracts to provide democracy assistance, and country government officials in Indonesia and Jordan.
For all 10 countries in our sample, we interviewed the USAID Democracy and Governance directors at the USAID missions (either in person or by telephone) and surveyed USAID technical officers with responsibility for managing active democracy and governance grants in these countries. We also interviewed State DRL policy and program officers responsible for managing the bureau’s democracy grants in the 10 countries. To obtain the views of USAID mission officials in our 10 sample countries regarding interagency coordination and project monitoring and evaluation, we conducted an e-mail survey, from April to June 2009, of all 35 USAID technical officers with responsibility for managing active democracy and governance grants in these countries, receiving 31 responses (a response rate of 89 percent). Our survey included questions on collaboration with other U.S. government agencies, overlap of USAID programs with those of other agencies, cooperation with implementing partners, site visit activities, and monitoring and evaluation practices. We pretested our survey with seven technical officers in Indonesia, Jordan, and Russia. In collecting and analyzing the survey data, we took steps to minimize errors that might occur during these stages. To describe the funding levels for U.S. democracy assistance for each entity involved in these activities, we collected funding allocation data. From State/F we collected and analyzed data on GJD funding allocations to each operating unit from fiscal years 2006 through 2008, which were generated using the FACTS Info database. Because State/F data systems do not include GJD funding by implementing agency, State/F and USAID compiled data at our request on GJD funding allocated to USAID for each country operating unit for fiscal years 2006 through 2008. We also obtained funding allocation data by country for fiscal years 2006 through 2008 directly from State DRL and NED.
We also collected funding data on all democracy-related Millennium Challenge Corporation (MCC) threshold grants directly from MCC and available funding information on democracy-related assistance provided by State’s Middle East Partnership Initiative (MEPI) and the Bureau of International Narcotics and Law Enforcement Affairs (State INL). To obtain information on active democracy programs in our 10 sample countries, we contacted the USAID mission in each country to obtain a list of all projects active during January 2009 and the corresponding funding obligations for each project. In addition, we contacted State DRL and NED to obtain lists and respective funding levels for all active projects in those 10 countries. To compare these projects with varying duration and funding levels, we annualized the funding of each project and portfolio. Specifically, we based the annualized funding of active projects on the average monthly cost of each project (total project funding divided by the length of the project in months), multiplied by 12; and we summed the annualized funding for each project to obtain the annual value of the USAID, State DRL, and NED portfolios. To assess the reliability of the global funding information on U.S. government democracy assistance from the F database, we checked that the congressionally appropriated amount for GJD in fiscal years 2006 through 2008 matched the amounts provided to us by State/F. To assess the reliability of the country-level data provided by State/F on GJD allocations to USAID at country missions in fiscal years 2006-2008, we compared these data to the information USAID missions provided to us directly for our 10 sample countries. We also discussed with State/F how it conducted this data call, as well as data reliability issues. Regarding the State DRL data we use in this report, State DRL officials noted that the data provided on funding levels for each country are based on individual grant awards.
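The annualization calculation described above can be sketched as follows (the function names and data layout are illustrative, not part of GAO's or USAID's actual tooling):

```python
def annualized_funding(total_funding, duration_months):
    """Annualize a project's funding: average monthly cost (total project
    funding divided by project length in months) multiplied by 12."""
    return total_funding * 12 / duration_months

def portfolio_annual_value(projects):
    """Sum annualized funding over a portfolio given as
    (total_funding, duration_months) pairs."""
    return sum(annualized_funding(f, m) for f, m in projects)

# Example: a $2.4 million, 36-month project annualizes to $800,000 per year.
print(annualized_funding(2_400_000, 36))  # 800000.0
```

Annualizing in this way puts projects of differing durations and funding levels on a common per-year footing before comparing the USAID, State DRL, and NED portfolios.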
Correspondingly, to verify both the country-level and project-level data, we compared State DRL’s data to information in copies of grant agreements of all active State DRL projects in the three countries we visited (Jordan, Russia, and Indonesia). To verify the reliability of the USAID data on individual active democracy programs we received from USAID missions for our 10 sample countries, we compared the dollar totals of projects contained in the lists they provided us against data on a set of 47 projects detailed by the 31 technical officers we surveyed. To assess the reliability of the NED project-level data for the 10 sample countries, we compared them to project-level data contained on the NED Web site. We found that all data used in this report are sufficiently reliable to present the general levels of democracy funding globally and in individual countries and to present the relative size of project portfolios between USAID, State DRL, and NED. To assess coordination between USAID, State DRL, and NED, we interviewed responsible officials from these three entities and selected grantees and contractors during our field work in Indonesia, Jordan, and Russia to obtain their views on the coordination mechanisms to ensure complementary programming and avoid duplication. For the broader sample of 10 countries, including the 3 countries we visited, we reviewed project descriptions for all active democracy grants and contracts funded by USAID, State DRL, and NED. We also included questions on interagency coordination and examples of duplication in our survey of USAID technical officers as well as interviews of USAID mission and State DRL officials. In assessing U.S. reporting and evaluation efforts, we focused our analysis on USAID efforts and projects since they typically represented the majority of U.S.-funded assistance. 
We interviewed agency and organization officials, as well as selected implementing partners during our field work in Indonesia, Jordan, and Russia to obtain their views on reporting and evaluation efforts. In our survey of technical officers, we included questions on reporting and evaluation practices. We reviewed selected quarterly and final performance reports of USAID-funded democracy projects in the 10 countries, which are required of USAID’s implementing partners. We also reviewed democracy and governance assessments for the 10 countries, which are conducted as part of USAID missions’ strategy development and project planning efforts. We also discussed the use of performance indicators with USAID, including standard indicators required by State and custom project-specific indicators developed by USAID missions and their implementing partners. In addition, we reviewed USAID assessments to determine the extent to which these assessments provide program results. Moreover, we reviewed independent evaluations from our 10 sample countries completed in fiscal years 2006 through 2008. We did not review State DRL and NED practices for assessing results and evaluating impact, because their programs are small and short term relative to USAID’s and because they generally do not conduct independent evaluations of their activities’ impact. According to State DRL officials, State DRL recommends that its grantees conduct independent external evaluations as part of individual grant awards but has not undertaken standard independent evaluations of democracy assistance at the country or thematic level. NED commissions periodic independent evaluations of clusters of programs but does not evaluate every grant. In addition, we reviewed recent studies that discuss the challenges associated with measuring impact of democracy assistance. 
In particular, we complemented our findings from interviews and document reviews with findings from the National Research Council study of USAID evaluation capacity. We did not assess the quality or comprehensiveness of this study; we also did not assess USAID’s actions since June 2009 in implementing recommendations from this study, because these actions are preliminary. We conducted this performance audit from September 2008 to September 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Table 6 shows the USAID, State DRL, and NED democracy funding allocated to each country from fiscal years 2006 through 2008. This table demonstrates that USAID democracy funding is substantially larger than State DRL and NED funding in most countries. Not including Iraq, Afghanistan, and Pakistan, USAID has the majority of funding in 93 percent of countries where USAID has an active portfolio. However, State DRL or NED provides democracy assistance in over 20 countries where USAID funding is not provided. In addition, State DRL democracy funding tends to be larger in countries with lower USAID funding, such as in China and Iran, or where USAID funding for democracy assistance is not provided, such as North Korea or Syria, consistent with State DRL’s focus on filling in the gaps in USAID democracy funding. In fiscal years 2006 through 2008, almost 30 percent of all GJD funds were allocated for democracy activities in Iraq, which is the largest portion of democracy assistance funds allocated to any country over this period. A large and increasing portion of GJD funds are allocated to democracy programs in Afghanistan as well. 
The percentage of GJD funds allocated to Afghanistan rose from 6 percent in fiscal year 2006 to 14 percent in fiscal year 2007 and to 24 percent in fiscal year 2008. In fact, in fiscal year 2008, more GJD funds were allocated to democracy programs in Afghanistan than to any other country. Together, GJD funds allocated to Iraq and Afghanistan comprised over 40 percent of all GJD funds in fiscal years 2006 through 2008. In fiscal years 2006 through 2008, total democracy assistance funding increased by 29 percent. However, excluding Iraq and Afghanistan, which account for nearly half of all democracy spending, democracy funding rose only 20 percent. In addition, not including funding for Iraq and Afghanistan, the 10 countries with the highest GJD funding from fiscal years 2006 to 2008 accounted for almost half of the remainder of GJD funding allocated to individual countries over that time period (see table 7). The Department of State’s Middle East Partnership Initiative (MEPI) and Bureau of International Narcotics and Law Enforcement Affairs (State INL) and the Millennium Challenge Corporation (MCC) provide democracy assistance in a much narrower set of countries than USAID, State DRL, or NED programs. MEPI, part of State’s Near Eastern Affairs Bureau, was launched in December 2002 as a presidential initiative to promote reform, foster democracy in the Middle East and North Africa, and serve as a tool to address violent extremism. MEPI programs are focused in 17 countries and are managed from MEPI’s office in Washington, D.C., as well as from regional offices in Abu Dhabi and Tunis. MEPI programs are generally organized into four areas, two of which—political participation and women’s empowerment—are characterized as GJD assistance; MEPI funding for these areas in fiscal years 2006 through 2008 totaled about $110 million.
Unlike USAID and State DRL programs, which are generally focused on individual countries, MEPI programs are often cross-cutting regional programs that cover a number of different countries. Consequently, it is not possible to identify MEPI funding by country. In addition to providing larger grants in response to specific solicitations, MEPI provides a number of local grants each year directly to organizations working at the community level. For instance, MEPI’s local grants program in Jordan provides funds to less experienced NGOs to increase the NGOs’ capacity and help them become eligible for future funding from larger donors such as USAID. Grant officers in the MEPI office in Washington, D.C., monitor projects through reviews of grantee quarterly reports and rely on staff in the regional offices and embassy-based MEPI coordinators to conduct site visits and coordinate with related USAID assistance programs. State INL’s programs within the GJD framework focus on institution building in the criminal justice sector. State’s FACTS database does not break out State INL’s funding for GJD programs in every country; however, according to a State INL official, the bureau managed $290 million in GJD funding worldwide in fiscal year 2008, directing the majority of these funds to Afghanistan, Colombia, and Iraq. State INL’s programs support reforms such as reform of criminal procedures codes and promotion of adversarial and evidentiary trial principles; training and technical assistance for judges, prosecutors, and defense attorneys; and anticorruption programs. A wide variety of U.S. law enforcement and regulatory agencies, international organizations, NGOs, and international assistance agencies implement State INL’s programs. For example, State INL funds training of prosecutors through the Department of Justice’s Office of Overseas Prosecutorial Development, Assistance and Training. 
Embassy Law Enforcement Sections oversee State INL programs implemented in the field and coordinate democracy assistance with USAID through embassy-based interagency working groups. MCC is a U.S. government corporation that provides assistance through multiyear compact agreements with countries that demonstrate commitment to reducing poverty and stimulating economic growth, in part by strengthening their democratic institutions and processes. MCC also funds “threshold programs,” intended to help countries that do not qualify for compact assistance to achieve eligibility. During 2008, MCC had programs providing democracy-related assistance, such as support for anticorruption and local governance, in 16 countries. Although these threshold grants fit within State’s definition of GJD, State does not track these activities or funding. USAID has primary responsibility for overseeing the implementation of MCC’s threshold programs. USAID monitors MCC threshold programs similarly to its own democracy and governance programs, through quarterly and end-of-project reporting by implementing partners and site visits by technical officers based in USAID missions in the field. In addition, USAID submits quarterly reports on threshold projects to MCC. According to USAID officials we met with in Indonesia and Jordan, management of the MCC threshold projects by USAID mission-based staff—former or current USAID democracy and governance technical officers—facilitated effective coordination with USAID’s democracy and governance programs. In selected countries, MCC’s democracy-related threshold projects involve substantial funding (see table 8). For example, in Indonesia, MCC funded a 2-year, $35 million threshold project, which represents a large amount of funding when compared to annual funding of $28 million for the USAID democracy and governance portfolio in Indonesia, $1.1 million for State DRL’s grant program, and $1.6 million for the National Endowment for Democracy.
The following are GAO’s comments on USAID’s letter dated September 17, 2009.
1. We have incorporated information provided in USAID’s letter regarding its democracy strategic planning efforts into our report as appropriate.
2. As we state in our discussion of scope and methodology, we did not review State DRL’s and NED’s evaluation efforts because their programs are small and short-term relative to USAID’s and because they generally do not conduct independent evaluations of their activities’ impact.
3. We have incorporated evaluation information provided in USAID’s letter into our report as appropriate.
In addition to the contact named above, Leslie Holen, Assistant Director; Diana Blumenfeld; Howard Cott; David Dornisch; Reid Lowe; Grace Lui; and Marisela Perez made key contributions to this report. Etana Finkler provided technical support.
In fiscal years 2006-2008, the U.S. Agency for International Development (USAID), which has primary responsibility for promoting democracy abroad, implemented democracy assistance projects in about 90 countries. The Department of State's Bureau of Democracy, Human Rights, and Labor (State DRL) and the private, nonprofit National Endowment for Democracy (NED) also fund democracy programs in many of these countries. Partly to lessen the risk of duplicative programs, State recently initiated efforts to reform and consolidate State and USAID foreign assistance processes. GAO reviewed (1) democracy assistance funding provided by USAID, State DRL, and NED in fiscal year 2008; (2) USAID, State DRL, and NED efforts to coordinate their democracy assistance; and (3) USAID efforts to assess results and evaluate the impact of its democracy assistance. GAO analyzed U.S. funding and evaluation documents, interviewed USAID, State, and NED officials in the United States and abroad, and reviewed specific democracy projects in 10 countries. Data available from State show total democracy assistance allocations of about $2.25 billion for fiscal year 2008. More than $1.95 billion, or about 85 percent of the total allocation, was provided to field-based operating units, primarily country missions. Although complete data on USAID funding per country were not available, USAID mission data, compiled by State and USAID at GAO's request, show that in a sample of 10 countries, most democracy funds are programmed by USAID. In the 10 countries, annual funding per project averaged more than $2 million for USAID, $350,000 for State DRL, and $100,000 for NED. In fiscal year 2008, more than half of State funding for democracy assistance went to Iraq, followed by China, Cuba, Iran, and North Korea, and NED funding for democracy programs was highest for China, Iraq, Russia, Burma, and Pakistan.
USAID and State DRL coordinate to help ensure complementary assistance but are often not aware of NED grants. To prevent duplicative programs, State DRL obtains feedback from USAID missions and embassies on project proposals before awarding democracy assistance grants. State DRL officials generally do not participate in USAID missions' planning efforts; some State and USAID officials told GAO that geographic distances between State DRL's centrally managed program and USAID's country mission-based programs would make such participation difficult. Several USAID and State DRL officials responsible for planning and managing democracy assistance told GAO that they lacked information on NED's current projects, which they believed would help inform their own programming decisions. Although NED is not required to report on all of its democracy assistance efforts to State and there currently is no mechanism for regular information sharing, NED told GAO that it has shared information with State and USAID and would routinely provide them with information on current projects if asked. USAID uses standard and custom indicators to assess and report on immediate program results; USAID also conducts some, but relatively infrequent, independent evaluations of longer-term programs. The standard indicators, developed by State, generally focus on numbers of activities or immediate results of a program, while custom indicators measure additional program results. USAID commissions a limited number of independent evaluations of program impact. USAID mission officials told GAO that they did not conduct many independent evaluations of democracy assistance because of the resources involved in the undertaking and the difficulty of measuring impact in the area of democracy assistance. 
In response to a 2008 National Research Council report on USAID's democracy evaluation capacity, USAID has reported initiating several steps--for example, designing impact evaluations for six missions as part of a pilot program.
DOD space systems provide a wide range of capabilities to a large number of users, including the military services, the intelligence community, civil agencies, and others. These capabilities include positioning, navigation, and timing; meteorology; missile warning; and secure communications, among others. Space systems can take a long time to develop and often consist of multiple components, including satellites, ground control stations, terminals, and user equipment. DOD satellite systems are also expensive to acquire. Unit costs for current DOD satellites can range from $500 million to over $3 billion, and ground systems can cost as much as $3.5 billion. The cost to launch just one satellite can climb to well over $100 million. Most major space programs have experienced significant cost and schedule increases. For instance, program costs for the Advanced Extremely High Frequency (AEHF) satellite program, a protected satellite communications system, had grown 116 percent as of our latest review, and its first satellite was launched over 3.5 years late. For the Space Based Infrared System High (SBIRS High), a missile warning satellite program, costs grew nearly 300 percent and the launch of the first satellite was delayed roughly 9 years. Last year, we reported that contract costs for the Global Positioning System (GPS) ground system, designed to control on-orbit GPS satellites, had more than doubled and the program had experienced a 4-year delay. The delivery of that ground system is now estimated to be delayed another 2 years, for a cumulative 6-year delay. Some DOD officials say even that is an optimistic timeline. Table 1 below provides more details on the current status of DOD’s major space programs.
Cost and schedule growth in DOD’s space programs is sometimes driven by the inherent risks associated with developing complex space technology; however, for at least the past 7 years we have identified a number of other management and oversight problems that can worsen the situation. These include overly optimistic cost and schedule estimating, pushing programs forward without sufficient knowledge about technology and design, and problems in overseeing and managing contractors, among others. Some of DOD’s programs in operation were also exceedingly ambitious, which in turn increased technology, design, and engineering risks. While satellite programs have provided users with important and useful capabilities, their cost growth has significantly limited DOD’s buying power—at a time when more resources may be needed to protect space systems and to recapitalize the space portfolio. Since 2013, I have testified that DOD has implemented actions to address space acquisition problems, and most of its major space programs have transitioned into the production phase where fewer problems tend to occur. These range from improvements to cost estimating practices and development testing to improvements in oversight and leadership, such as the addition of the Defense Space Council, designed to bring together senior leaders on important issues facing space. DOD has also started fewer new programs and even those are less ambitious than prior efforts, which helps to reduce the risk of cost and schedule growth. Given the problems we have identified in the GPS program, however, it is clear that more needs to be done to improve the management of space acquisitions. Our past work has recommended numerous actions that can be taken to address the problems we typically see in space programs. 
Generally, we have recommended that DOD separate the process of technology discovery from acquisition, follow an incremental path toward meeting user needs, match resources and requirements at program start, and use quantifiable data and demonstrable knowledge to move programs forward to next phases. We also have identified practices related to cost estimating, program manager tenure, quality assurance, technology transition, and an array of other aspects of acquisition program management that could benefit space programs. Right now, DOD is at a crossroads for space. Fiscal constraints and increasing threats—both environmental and adversarial—to space systems have led DOD to consider alternatives for acquiring and launching space-based capabilities. For satellites, our reports since 2013 have described efforts such as disaggregating large satellites into multiple, smaller satellites or payloads; relying on commercial satellites to host government payloads; and procuring certain capabilities, such as bandwidth and ground control, as services instead of developing and deploying government-owned networks or spacecraft. For space launch, this includes continuing to introduce competition into acquisitions as well as eliminating reliance on Russian-built rocket engines. In some cases, such as space launch, changes are being implemented. For example, as we reported in April 2015, DOD has introduced competition into acquisitions. In other areas, such as space-based environmental (or weather) monitoring, decisions have just recently been made. In still others, such as protected satellite communications and overhead persistent infrared sensing, decisions on the way forward, including satellite architectures, have not yet been made, though alternatives have been assessed. Figure 1 describes some of the changes DOD is considering for space. 
In multiple reports since our last testimony on this subject in April 2015, our work has touched on these and other potential changes. Our reports have specifically covered issues associated with protecting space assets, transforming launch acquisitions, and improving purchases of commercial satellite bandwidth, as well as the development of the GPS ground control system and user equipment. We are also currently examining the analysis used to support decisions on future weather system acquisitions as well as space leadership. All of this work is summarized below. Together, these reports highlight several major challenges facing DOD as it undertakes efforts to change its approaches to space acquisitions. First, though DOD is conducting analyses of alternatives to support decisions about the future of various programs, our preliminary work suggests there are gaps in cost and other data needed to weigh the pros and cons of changes to space systems. Second, most changes being considered today will affect ground systems and user equipment, but these systems continue to be plagued by cost and schedule overruns. Third, leadership for space acquisitions is still fragmented, which may hamper the implementation of changes, especially those that stretch across satellites, ground systems, and user equipment. Space Situational Awareness Costs. According to Air Force Space Command, U.S. space systems face intentional and unintentional threats, which have increased rapidly over the past 20 years. These include radio frequency interference (including jamming), laser dazzling and blinding, kinetic intercept vehicles, and ground system attacks. Additionally, the hazards of the already-harsh space environment (for example, extreme temperature fluctuations and radiation) have grown, including increasing numbers of active and inactive satellites, spent rocket bodies, and other fragments and debris. 
In response, recent government-wide and DOD-specific strategic and policy guidance have stressed the need for U.S. space systems to be survivable or resilient against such threats. The government relies primarily on DOD and the intelligence community to provide Space Situational Awareness (SSA)—the current and predictive knowledge and characterization of space objects and the operational environment upon which space operations depend—to provide critical data for planning, operating, and protecting space assets and to inform government and military operations. In October 2015, as mandated by the Senate Armed Services Committee, we reported on estimated costs of SSA efforts over the next 5 years. Specifically, we reported that the government’s planned unclassified budget for SSA core efforts—DOD, the National Aeronautics and Space Administration (NASA), and the National Oceanic and Atmospheric Administration (NOAA) operations of sensors, upgrades, and new developments—averages about $1 billion per year for fiscal years 2015 through 2020. Operations and payroll account for about 63 percent of the core budget during fiscal years 2015 through 2020, while investments for new sensors and systems, as well as upgrades for existing ones, account for the rest. Moreover, we could not report total costs because SSA is not the primary mission for many of the sensors that perform it. This is partly because DOD leverages systems that perform other missions to conduct SSA. This is a good practice since it reduces duplication and overlap, but it makes accounting for SSA costs difficult. For example, missile defense sensors also perform SSA missions. The Missile Defense Agency has not determined what percentage of its budget for operating its missile defense sensors, which averages about $538 million per year over the next several years, would be allocated to the SSA mission. 
Moreover, these sensors would be procured by the Missile Defense Agency even if they were not involved in the SSA mission. Responsive Launch. In light of DOD’s dramatically increased demand for and dependence on space capabilities, and the potential for operationally responsive, low-cost launch to help address such needs, DOD was required to report to the Congress on “responsive launch,” which generally means the ability to launch space assets to their intended orbits as the need arises, possibly to augment or reconstitute existing space capabilities. In October 2015, we reported that DOD did not yet have a consolidated plan for developing a responsive launch capability since there were no formal requirements for such a capability. DOD and contractor officials we spoke with also highlighted several potential challenges DOD faces as it pursues operationally responsive launch capabilities. For example, DOD officials told us that existing national security space program architectures (including payloads, ground systems, user equipment, and launch systems) may need to be modified to improve responsiveness, which could present challenges. That is, modifying one program could have repercussions for another, including changes to infrastructure and command and control elements. Further, while smaller, simpler satellites may require less time and effort to develop, build, and launch, a larger number of satellites may be needed to provide the same level of capability, and the transition from existing system designs could increase costs. DOD plans to validate future responsive launch requirements as it gains knowledge about emerging threats. Once this is done, having a single focal point for prioritizing and developing its responsive launch capabilities will be important, especially since different components of DOD already have ongoing efforts in place to develop responsive launch capabilities. Competitive Launch Acquisition. 
The Air Force is working to introduce competition into the Evolved Expendable Launch Vehicle (EELV) program. For almost 10 years, the EELV program had only one company capable of providing launches. In working to introduce competition into launch contracts, the Air Force is changing its acquisition approach for launch services, including the amount of cost and performance data that it plans to obtain under future launch contracts. Given these expected changes, the National Defense Authorization Act for Fiscal Year 2015 included a provision for us to examine the advisability of requiring that launch providers establish or maintain business systems that comply with the data requirements and cost accounting standards of the Department of Defense. The United Launch Alliance (ULA)—EELV’s incumbent provider—currently provides national security space launch services under a contract with cost-reimbursable provisions awarded using negotiated procedures. Under this type of contract, the Air Force is able to obtain from ULA cost and performance data from contractor business systems. The Air Force uses this business data for a variety of purposes, including monitoring contractor performance and identifying risks that could affect the program’s cost, schedule, or performance. However, for at least the first phase of future launches, the Air Force chose to change its acquisition approach to procure launch services as a commercial item using a firm-fixed-price contract, which will prevent the service from collecting business data at the same level of detail. As a result, the Air Force will have significantly less insight into program costs and performance than what it has under the current contract with ULA, though according to the Air Force the level of information gathered is sufficient for monitoring launch costs in a competitive, fixed-price environment. 
In August 2015, we reported that the acquisition approach chosen for the first competitive launches offers some benefits to the government, including increased competition, but it could limit program oversight and scheduling flexibility. The Air Force asserts that the use of full and open competitive procedures in a commercial item acquisition will increase the potential to keep more than one launch company viable. The Air Force’s use of commercial item contracts eliminates the need for contractors to develop the business systems associated with a cost-reimbursement contract and generally places greater responsibility upon the contractor for cost control. However, the Air Force has struggled with EELV program management and lack of oversight in the past, and removing the requirement for cost and performance data could leave it vulnerable to similar problems in the future in an uncertain commercial market. Also, the first competitive contracts may limit the Air Force’s flexibility in modifying its launch schedule, and schedule changes resulting from satellite production delays may result in added costs. Satellite delays have historically been an issue for the program, and the Air Force’s ability to modify the launch schedule is an important component of the current acquisition approach with ULA. We also reported that the Air Force is at risk of making decisions about future EELV acquisitions without sufficient knowledge. The Air Force plans to develop an acquisition strategy for the next phase of competitive launches before it has any actionable data from the first competitive launches. In addition, the Air Force views competition as crucial to the success of its new acquisition strategy, yet the viability of a competitive launch industry is uncertain. The launch industry is undergoing changes, and the ability of the domestic industry to sustain two or more providers in the long-term, while desirable, is unclear. 
Presently, there is only one company certified to compete with ULA for national security launches, and there are no other potential competitors in the near future. To adequately plan for future competitions and ensure informed decision making before committing to a strategy, it will be important for the Air Force to obtain knowledge about its new acquisition approach and about the launch industry. The Air Force concurred with our recommendation to ensure the next phases incorporate lessons learned. Purchases of Commercial Satellite Bandwidth. DOD depends on commercial satellite communications (SATCOM) to support a variety of critical mission needs, from unmanned aerial vehicles and intelligence to voice and data for military personnel. Data from fiscal year 2011, the most recent information available, show that DOD spent over $1 billion leasing commercial SATCOM. In prior work, we found that some major DOD users of commercial SATCOM were dissatisfied with the Defense Information Systems Agency’s (DISA) acquisition process, seeing it as too costly and lengthy. These users also indicated that the contracts used were too inflexible. The Senate Armed Services Committee’s report accompanying a bill for the National Defense Authorization Act for Fiscal Year 2014 included a provision for DOD to provide a report detailing a 5-, 10-, and 25-year strategy for using a mix of DOD and commercial satellite bandwidth, and for us to review the acquisition strategy in that report, which DOD issued in August 2014. In July 2015, we reported that DOD’s procurement of SATCOM is fragmented and inefficient. DOD policy requires all of its components to procure commercial SATCOM through DISA, but we found that some components were independently procuring SATCOM to meet their individual needs. 
DOD’s most recent SATCOM usage report estimates that over 30 percent of commercial SATCOM is bought independently by DOD components, even though DOD found that the average cost of commercial SATCOM bought through DISA is about 16 percent lower than that of independently bought commercial SATCOM. Fragmentation such as this limits opportunities for DOD to bundle purchases, share services, and streamline its procurement of commercial SATCOM. DOD is taking steps to improve its SATCOM procurement and address challenges through “pathfinder” efforts aimed at identifying short- and long-term options. For example, DOD intends to study the potential benefits of using innovative contracting approaches as it procures military and commercial SATCOM, and to refine its understanding of DOD’s global SATCOM requirements. However, it may be several years before DOD is able to evaluate the results of its pathfinder efforts. For example, all 10 of the pathfinders planned or already underway are expected to be completed in or beyond fiscal year 2017. DOD’s efforts to improve its procurement of military and commercial SATCOM will also be hampered by two long-standing challenges—lack of knowledge of what DOD is spending on commercial SATCOM and resistance to centralized management of SATCOM procurement. We reported on and made recommendations to address both in 2003. Specifically, we recommended that DOD strengthen its capacity to provide accurate and complete analyses of commercial bandwidth spending and implement a strategic management framework for improving the acquisition of commercial bandwidth. DOD generally concurred with our 2003 recommendations and developed a plan to address them, but none of DOD’s corrective actions were carried out as intended. 
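To put the bandwidth figures above in perspective, the following is a rough, back-of-the-envelope sketch of what consolidating independent purchases through DISA might save. The dollar amounts are the approximate figures cited in this statement (roughly $1 billion in annual commercial SATCOM leasing, over 30 percent bought independently, and about a 16 percent DISA price advantage); the resulting savings number is illustrative only, not a GAO estimate.

```python
# Illustrative savings sketch using the approximate figures cited above.
# These inputs are rounded; the output is a ballpark, not an official estimate.

total_spend = 1_000_000_000   # annual commercial SATCOM leasing, dollars (~FY2011)
independent_share = 0.30      # fraction bought independently of DISA
disa_discount = 0.16          # DISA-brokered price ~16% below independent price

independent_spend = total_spend * independent_share
potential_savings = independent_spend * disa_discount  # if routed through DISA

print(f"Independently bought SATCOM: ${independent_spend:,.0f} per year")
print(f"Illustrative savings:        ${potential_savings:,.0f} per year")
```

Under these assumptions the sketch suggests savings on the order of $48 million per year, consistent with the report's broader point that fragmented purchasing is costly.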
These challenges are commonly faced by organizations seeking to strategically source procurements of services, but our work has shown they can be overcome by employing best practices, including conducting detailed spend analyses and centralizing management of service procurements to identify procurement inefficiencies and opportunities. GPS Ground System and User Equipment. In 2009, we reported that development of space systems was not optimally aligned, and we recently noted that development of satellites often outpaces that of ground systems and user terminals (such as those on airplanes, ground vehicles, and ships), leading to underutilized on-orbit satellites and delays in getting new capabilities to end users. In some cases, gaps in delivery can add up to years, meaning that a satellite is launched but not effectively used for years until ground systems become available. The reasons for the gaps in the delivery of space system segments include funding instability and poor acquisition management (requirements instability, underestimation of technical complexity, and poor contractor oversight). Our September 2015 report on GPS showed that these problems still persist. Specifically, we reported that the Air Force awarded the contract to begin GPS Next Generation Operational Control System (OCX) development—the command and control system for GPS III satellites—without following key acquisition practices, such as completing a preliminary design review before development start, as called for by best practices and generally required by statute. In addition, key requirements, particularly for cybersecurity, were not well understood by the Air Force and contractor at the time of contract award. The contractor, Raytheon, experienced significant software development challenges from the outset, but the Air Force consistently presented optimistic assessments of OCX progress to acquisition overseers. 
Further, the Air Force complicated matters by accelerating OCX development to better synchronize it with the projected completion time lines of the GPS III satellite program, but this resulted in disruptions to the OCX development effort. As Raytheon continued to struggle with OCX development, the program office paused development in late 2013 to fix what it believed were the root causes of the development issues, and it significantly increased the program’s cost and schedule estimates. However, progress reports to DOD acquisition leadership continued to be overly optimistic relative to the reality of OCX problems. OCX issues appear to be persistent and systemic, raising doubts about whether all root causes have been adequately identified, let alone addressed, and whether realistic cost and schedule estimates have been developed. Furthermore, since we reported in September 2015, the Under Secretary of Defense for Acquisition, Technology and Logistics has directed the OCX program to add 24 months to its delivery schedule, increasing the delay to roughly 6 years from what was estimated at contract award. And some DOD officials believe the program could realistically need another 2 years beyond that before the first increment of the OCX ground system is delivered. We also reported that the Air Force revised the Military GPS User Equipment (MGUE) acquisition strategy several times in attempts to develop military-code (or M-code) capability—which can help users operate in jamming environments. Even so, the military services were unlikely to have sufficient knowledge about MGUE design and performance to make informed procurement decisions starting in fiscal year 2018 because it was uncertain whether an important design review would be conducted before then and because operational testing would still be under way. Again, GPS is not the only program where we have seen these types of problems. 
AEHF and the Mobile User Objective System have encountered significant delays with the delivery of user equipment, and the SBIRS High ground system was not fully completed when satellites were launched. Moreover, we have reported that these challenges could intensify with the potentially larger numbers and novel configurations of satellites, payloads, and other components of a disaggregated approach. Analysis of Alternatives for Weather Systems. DOD has been conducting analyses of alternatives (AOA) to assist in deciding what space assets should be acquired for its missile warning, protected communications, and environmental monitoring (weather) missions. AOAs provide insight into the technical feasibility and costs of alternatives and can carry significant weight in the decision-making process, in part because they involve participation and oversight by a diverse mix of military, civilian, and contractor personnel. We testified last year that the time frames for making decisions about the way forward are narrowing, and if decisions are not made in time, DOD may be forced to continue with existing approaches for its next systems. As of today, only the weather AOA has been completed and approved by DOD. We were required by the National Defense Authorization Act for Fiscal Year 2015 to review this particular AOA. We are currently in the process of completing this review and expect to issue our final report in mid-March 2016. Our preliminary findings are that the AOA provided thorough analysis of some of the 12 capabilities identified for the assessment, but ineffective coordination with NOAA, among other issues, imposed limitations on the analysis of the two highest-priority capabilities—cloud characterization and theater weather imagery. 
Specifically, DOD did not employ a formal collaboration mechanism that identified roles and responsibilities for DOD and NOAA in conducting the AOA, which contributed to DOD making an incorrect assumption about the continued availability of critical data from European partner satellites. As a result, the two capabilities were not as thoroughly analyzed for potential solutions, and they are now being reassessed outside of the AOA process as near-term gaps approach. We plan to recommend that DOD ensure the leads of future planning efforts establish formal mechanisms for coordination and collaboration with NOAA that specify roles and responsibilities to ensure accountability for both agencies. DOD concurred with this recommendation in its review of our draft report. A positive aspect of the weather AOA was that DOD took a relatively new approach to analyzing alternatives with cost-efficiency in mind, including considering which capabilities DOD needed to provide and which could be provided by leveraging other sources of data. This should help DOD find cost-effective ways to meet some of its needs. Space Leadership. DOD’s space acquisition portfolio has numerous stakeholders, including each of the military services; intelligence community organizations; research agencies; multiple DOD headquarters offices; civil government agencies; and the Executive Office of the President. For more than 15 years, we have noted—along with congressional committees and various commissions and reviews—concern about the fragmented nature of DOD’s space system acquisition processes and acquisition oversight. 
In September 2015, we began a review, based on language in the Senate Report accompanying a bill for the National Defense Authorization Act for Fiscal Year 2016, that examines (1) how DOD’s management and oversight of space system acquisitions are structured; (2) whether past recommendations for improving this structure have been implemented; and (3) what challenges, if any, result from the current structure. Our preliminary findings indicate that the structure of space system acquisitions and oversight continues to be complicated. It involves a large number of stakeholders, and there is no single individual, office, or entity in place that provides oversight for the overall space program acquisition structure. A number of commissions and study groups have recommended substantive changes to the way the government plans for, acquires, and manages space systems, including centralizing planning and decision-making authority for space systems and establishing oversight authority outside the Air Force. Additionally, various DOD officials and experts we spoke with noted other problems with the process of acquiring and managing space systems, including long acquisition timelines and extensive review processes, decision-making authority residing at too high a level, and little long-term planning or system architecture work. DOD points to a recent change in its organizational structure for space programs that attempts to mitigate these problems. The Deputy Secretary of Defense designated the Secretary of the Air Force as the Principal DOD Space Advisor, with responsibility for overseeing all defense space policies, strategies, and plans, and for serving as an independent advisor on all space matters to the Secretary of Defense and other DOD leadership. 
This is a new position, and its responsibilities are still being established, according to DOD officials; at this point, however, it is too early to tell whether the position will have sufficient enforcement authority and the extent to which it will address the leadership problems raised in the past. Our reviews in recent years have made a number of recommendations aimed at putting DOD on a better footing as it considers and implements significant changes for space programs. For example, we recommended that, when planning for the next phase of competition for launches, the Air Force use an incremental approach to the next acquisition strategy to ensure that it does not commit itself to a strategy until data are available to make an informed decision. For purchases of commercial bandwidth, we recommended that DOD conduct a spend analysis identifying procurement inefficiencies and opportunities, and assess whether further centralization of commercial SATCOM procurement could be beneficial. DOD concurred. It is too early to determine the extent to which DOD will implement these and other recommendations made this year, but we have seen considerable efforts to address recommendations from other reports. For instance, in 2013, we recommended that future DOD satellite acquisition programs be directed to determine a business case for proceeding with either a dedicated or shared network for each program’s satellite control operations, and that DOD develop a department-wide, long-term plan for modernizing its Air Force Satellite Control Network and any future shared networks and for implementing commercial practices to improve DOD satellite control networks. DOD has taken initial steps toward making a significant transformation in its satellite control operations. We look forward to assessing its plans in the near future in response to a mandate from this Committee. 
As noted earlier, we have also made numerous recommendations related to acquisition management, and our ongoing review of space leadership will highlight which past recommendations may still be worth addressing. Overall, it is exceedingly important that DOD address acquisition governance and management problems in the near future. Work is already underway on recapitalizing the space portfolio, yet fiscal constraints and past problems have limited the resources available for new programs. Moreover, protecting space assets will likely require more investments as well as more effective coordination. Chairman Sessions and Ranking Member Donnelly, this concludes my statement for the record.

For further information about this statement, please contact Cristina Chaplain at (202) 512-4841 or chaplainc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this statement include Rich Horiuchi, Assistant Director; Claire Buck; Maricela Cherveny; Alyssa Weir; Emily Bond; and Oziel Trevino. Key contributors for the previous work on which this statement is based are listed in the products cited. Key contributors to related ongoing work include Raj Chitikila; Laura Hook; Andrea Evans; Brenna Guarneros; Krista Mantsch; and James Tallon.

Space Acquisitions: GAO Assessment of DOD Responsive Launch Report. GAO-16-156R. Washington, D.C.: October 29, 2015.
Space Situational Awareness: Status of Efforts and Planned Budgets. GAO-16-6R. Washington, D.C.: October 8, 2015.
GPS: Actions Needed to Address Ground System Development Problems and User Equipment Production Readiness. GAO-15-657. Washington, D.C.: September 9, 2015.
Evolved Expendable Launch Vehicle: The Air Force Needs to Adopt an Incremental Approach to Future Acquisition Planning to Enable Incorporation of Lessons Learned. GAO-15-623. Washington, D.C.: August 11, 2015.
Defense Satellite Communications: DOD Needs Additional Information to Improve Procurements. GAO-15-459. Washington, D.C.: July 17, 2015.
Space Acquisitions: Some Programs Have Overcome Past Problems, but Challenges and Uncertainty Remain for the Future. GAO-15-492T. Washington, D.C.: April 29, 2015.
Space Acquisitions: Space Based Infrared System Could Benefit from Technology Insertion Planning. GAO-15-366. Washington, D.C.: April 2, 2015.
Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-15-342SP. Washington, D.C.: March 12, 2015.
Defense Major Automated Information Systems: Cost and Schedule Commitments Need to Be Established Earlier. GAO-15-282. Washington, D.C.: February 26, 2015.
DOD Space Systems: Additional Knowledge Would Better Support Decisions about Disaggregating Large Satellites. GAO-15-7. Washington, D.C.: October 30, 2014.
Space Acquisitions: Acquisition Management Continues to Improve but Challenges Persist for Current and Future Programs. GAO-14-382T. Washington, D.C.: March 12, 2014.
U.S. Launch Enterprise: Acquisition Best Practices Can Benefit Future Efforts. GAO-14-776T. Washington, D.C.: July 16, 2014.
Evolved Expendable Launch Vehicle: Introducing Competition into National Security Space Launch Acquisitions. GAO-14-259T. Washington, D.C.: March 5, 2014.
The Air Force’s Evolved Expendable Launch Vehicle Competitive Procurement. GAO-14-377R. Washington, D.C.: March 4, 2014.
2014 Annual Report: Additional Opportunities to Reduce Fragmentation, Overlap, and Duplication and Achieve Other Financial Benefits. GAO-14-343SP. Washington, D.C.: April 8, 2014.
Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-14-340SP. Washington, D.C.: March 31, 2014.
Space Acquisitions: Assessment of Overhead Persistent Infrared Technology Report. GAO-14-287R. Washington, D.C.: January 13, 2014.
Space: Defense and Civilian Agencies Request Significant Funding for Launch-Related Activities. GAO-13-802R. Washington, D.C.: September 9, 2013.
Global Positioning System: A Comprehensive Assessment of Potential Options and Related Costs is Needed. GAO-13-729. Washington, D.C.: September 9, 2013.
Space Acquisitions: DOD Is Overcoming Long-Standing Problems, but Faces Challenges to Ensuring Its Investments are Optimized. GAO-13-508T. Washington, D.C.: April 24, 2013.
Launch Services New Entrant Certification Guide. GAO-13-317R. Washington, D.C.: February 7, 2013.
Satellite Control: Long-Term Planning and Adoption of Commercial Practices Could Improve DOD’s Operations. GAO-13-315. Washington, D.C.: April 18, 2013.
Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-13-294SP. Washington, D.C.: March 28, 2013.
Evolved Expendable Launch Vehicle: DOD Is Addressing Knowledge Gaps in Its New Acquisition Strategy. GAO-12-822. Washington, D.C.: July 26, 2012.
Space Acquisitions: DOD Faces Challenges in Fully Realizing Benefits of Satellite Acquisition Improvements. GAO-12-563T. Washington, D.C.: March 21, 2012.
Space Acquisitions: DOD Delivering New Generations of Satellites, but Space System Acquisition Challenges Remain. GAO-11-590T. Washington, D.C.: May 11, 2011.
Space Acquisitions: Development and Oversight Challenges in Delivering Improved Space Situational Awareness Capabilities. GAO-11-545. Washington, D.C.: May 27, 2011.
Space and Missile Defense Acquisitions: Periodic Assessment Needed to Correct Parts Quality Problems in Major Programs. GAO-11-404. Washington, D.C.: June 24, 2011.
Global Positioning System: Challenges in Sustaining and Upgrading Capabilities Persist. GAO-10-636. Washington, D.C.: September 15, 2010.
Defense Acquisitions: Challenges in Aligning Space System Components. GAO-10-55. Washington, D.C.: October 29, 2009.
Satellite Communications: Strategic Approach Needed for DOD’s Procurement of Commercial Satellite Bandwidth. GAO-04-206. Washington, D.C.: December 10, 2003.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
DOD is shifting its traditional approach to space acquisitions, bolstering its protection of space systems, and engaging with more commercial providers. Given the time and resource demands of DOD's space systems and today's budget environment, challenges that hinder these transitions must be addressed. This statement focuses on (1) the current status and cost of major DOD space system acquisitions, and (2) challenges and barriers DOD faces in addressing future space-based mission needs. This statement highlights the results of GAO's work on space acquisitions over the past year and presents preliminary observations from ongoing work. We obtained comments from DOD on a draft of preliminary findings contained in this statement. Most major space programs have experienced significant cost and schedule increases. For instance, program costs for the Advanced Extremely High Frequency satellite program, a protected satellite communications system, have grown 116 percent as of our latest review, and its first satellite was launched more than 3 years late. For the Space Based Infrared System High, a missile warning satellite program, costs grew almost 300 percent and its first satellite was launched roughly 9 years late. Last year, we reported that contract costs for the Global Positioning System (GPS) ground system, designed to control on-orbit GPS satellites, had more than doubled and the program had experienced a 4-year delay. The delivery of that ground system is now estimated to be delayed another 2 years, for a cumulative 6-year delay. Some DOD officials say even that is an optimistic timeline. Though steps have been taken to improve acquisition management in space, problems with GPS show that much more work is needed, especially since DOD is considering going in new directions for space programs. Right now, DOD is at a crossroads for space. 
Fiscal constraints and increasing threats—both environmental and adversarial—to space systems have led DOD to consider alternatives for acquiring and launching space-based capabilities, such as: disaggregating large satellites into multiple, smaller satellites or payloads; relying on commercial satellites to host government payloads; and procuring certain capabilities, such as bandwidth and ground control, as services instead of developing and deploying government-owned networks or spacecraft. This year, GAO's work on space acquisitions continued to show that DOD faces several major challenges as it undertakes efforts to change its approaches to space acquisitions. Our work assessed a range of issues including DOD's analysis supporting its decisions on future weather satellites, space leadership, and the introduction of competition into space launch acquisitions. These and other studies surfaced several challenges: First, though DOD is conducting analyses of alternatives to support decisions about the future of space programs, there are gaps in cost and other data needed to weigh the pros and cons of changes to space systems. Second, most changes being considered today will impact ground systems and user equipment, but these systems continue to be troubled by management and development issues. Third, leadership for space acquisitions is still fragmented, which will likely hamper the implementation of new acquisition approaches, especially those that stretch across satellites, ground systems and user equipment. Past GAO reports have generally recommended that DOD adopt best practices. DOD has generally agreed and taken actions to address these recommendations. Consequently, GAO is not making any recommendations in this statement.
Wildland fires are both natural and inevitable and play an important ecological role on the nation’s landscapes. These fires have long shaped the composition of forests and grasslands, periodically reduced vegetation densities, and stimulated seedling regeneration and growth in some species. Wildland fires can be ignited by lightning or by humans either accidentally or intentionally. As we have described in previous reports, however, various land use and management practices over the past century—including fire suppression, grazing, and timber harvesting—have reduced the normal frequency of fires in many forest and rangeland ecosystems. These practices contributed to abnormally dense, continuous accumulations of vegetation, which in turn can fuel uncharacteristically severe wildland fires in certain ecosystems. According to scientific reports, several other factors have contributed to overall changes to ecosystems and the landscapes on which they depend, altering natural fire regimes and contributing to an increased frequency or intensity of wildland fire in some areas. For example, the introduction and spread of highly flammable invasive nonnative grasses, such as cheatgrass, along with the expanded range of certain flammable native species, such as western juniper, in the Great Basin region of the western United States—including portions of California, Idaho, Nevada, Oregon, and Utah— have increased the frequency and intensity of fire in the sagebrush steppe ecosystem. Changing climate conditions, including drier conditions in certain parts of the country, have increased the length and severity of wildfire seasons, according to many scientists and researchers. For example, in the western United States, the average number of days in the fire season has increased from approximately 200 in 1980 to approximately 300 in 2013, according to the 2014 Quadrennial Fire Review. 
In Texas and Oklahoma this increase was even greater, with the average fire season increasing from fewer than 100 days to more than 300 during this time. According to the U.S. Global Change Research Program’s 2014 National Climate Assessment, projected climate changes suggest that western forests in the United States will be increasingly affected by large and intense fires that occur more frequently. Figure 1 shows the wildfire hazard potential across the country as of 2014. In addition, development in the wildland-urban interface (WUI) has continued to increase over the last several decades, increasing wildland fire’s risk to life and property. According to the 2014 Quadrennial Fire Review, 60 percent of new homes built in the United States since 1990 were built in the WUI, and the WUI includes 46 million single-family homes and an estimated population of more than 120 million. In addition to increased residential development, other types of infrastructure are located in the WUI, including power lines, campgrounds and other recreational facilities, communication towers, oil and gas wells, and roads. Some states, such as New Mexico and Wyoming, have experienced significant increases in oil and gas development over the past decade, adding to the infrastructure agencies may need to protect. Under the National Forest Management Act and the Federal Land Policy and Management Act of 1976, respectively, the Forest Service and BLM manage their lands for multiple uses such as protection of fish and wildlife habitat, forage for livestock, recreation, timber harvesting, and energy production. FWS and NPS manage federal lands under legislation that primarily calls for conservation; management for activities such as harvesting timber for commercial use is generally precluded. BIA is responsible for the administration and management of lands held in trust by the United States for Indian tribes, individuals, and Alaska Natives. 
These five agencies manage about 700 million surface acres of land in the United States, including national forests and grasslands, national wildlife refuges, national parks, and Indian reservations. The Forest Service and BLM manage the majority of these lands. The Forest Service manages about 190 million acres; BLM manages about 250 million acres; and BIA, FWS, and NPS manage 55, 89, and 80 million acres, respectively. Figure 2 shows the lands managed by each of these five agencies. Severe wildland fires and the vegetation that fuels them may cross the administrative boundaries of the individual federal land management agencies or the boundaries between federal and nonfederal lands. State forestry agencies and other entities—including tribal, county, city, and rural fire departments—share responsibility for protecting homes and other private structures and have primary responsibility for managing wildland fires on nonfederal lands. Most of the increased development in the WUI occurs on nonfederal lands, and approximately 70,000 communities nationwide are considered to be at high risk from wildland fire. Some of these communities have attempted to reduce their wildland fire risk through programs, such as the Firewise Communities program, aimed at improving awareness of fire risk and promoting risk-reduction steps. Wildland fire management consists of three primary components: preparedness, suppression, and fuel reduction. Preparedness. To prepare for a wildland fire season, the five land management agencies acquire firefighting assets—including firefighters, fire engines, aircraft, and other equipment—and station them either at individual federal land management units or at centralized dispatch locations in advance of expected wildland fire activity. The primary purpose of acquiring these assets is to respond to fires before they become large—a response referred to as initial attack.
The agencies fund the assets used for initial attack primarily from their wildland fire preparedness accounts. Suppression. When a fire starts, interagency policy calls for the agencies to consider land management objectives—identified in land and fire management plans developed by each land management unit—and the structures and resources at risk when determining whether or how to suppress the fire. A wide spectrum of strategies is available to choose from, and the land manager at the affected local unit is responsible for determining which strategy to use—from conducting all-out suppression efforts to monitoring fires within predetermined areas in order to provide natural resource benefits. When a fire is reported, the agencies are to follow a principle of closest available resource, meaning that, regardless of jurisdiction, the closest available firefighting equipment and personnel respond. In instances when fires escape initial attack and grow large, the agencies respond using an interagency system that mobilizes additional firefighting assets from federal, state, and local agencies, as well as private contractors, regardless of which agency or agencies have jurisdiction over the burning lands. The agencies use an incident management system under which specialized teams are mobilized to respond to wildland fires, with the size and composition of the team determined by the complexity of the fire. Federal agencies typically fund the costs of these activities from their wildland fire suppression accounts. Fuel reduction. Fuel reduction refers to agencies’ efforts to reduce potentially hazardous vegetation that can fuel fires, such as brush and “ladder fuels” (i.e., small trees and other vegetation that can carry fire vertically to taller vegetation such as large trees), in an effort to reduce the potential for severe wildland fires, lessen the damage caused by fires, limit the spread of flammable invasive species, and restore and maintain healthy ecosystems. 
The agencies use multiple approaches for reducing this vegetation, including setting fires under controlled conditions (prescribed burns), mechanical thinning, herbicides, certain grazing methods, or combinations of these and other approaches. The agencies typically fund these activities from their fuel reduction accounts. Risk is an inherent element of wildland fire management. Federal agencies acknowledge this risk, and agency policies emphasize the importance of managing their programs accordingly. For example, Forest Service guidance states that “the wildland fire management environment is complex and possesses inherent hazards that can—even with reasonable mitigation—result in harm.” According to a 2013 Forest Service report on decision making for wildfires, risk management is to be applied at all levels of wildfire decision making, from the individual firefighter on the ground facing changing environmental conditions to national leaders of the fire management agencies weighing limited budgets against increasingly active fire seasons. For example, the report explains that, during individual wildland fires, risk can be defined as “a function of values, hazards, and probability.” Congress, the Office of Management and Budget, federal agency officials, and others have raised questions about the growing cost of federal wildland fire management. According to a 2015 report by Forest Service researchers, for example, the amount the Forest Service spends on wildland fire management has increased from 17 percent of the agency’s total funds in 1995 to 51 percent of funds in 2014. The report noted that this has come at the cost of other land management programs within the agency, such as vegetation and watershed management, some of which support activities intended to reduce future wildfire damage. 
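The 2013 Forest Service report's definition of risk during individual wildland fires, as "a function of values, hazards, and probability," is commonly written in the wildfire risk literature as an expected net value change. The following is an illustrative sketch of that formulation under assumed notation, not an equation taken from the report itself:

```latex
% Illustrative expected net value change formulation of wildfire risk.
% Assumed symbols (not the Forest Service's own notation):
%   p(f_i)        -- probability that a fire reaches a location and burns at
%                    intensity level i (the "probability" and "hazard" components)
%   \Delta V(f_i) -- resulting change in the values at risk, such as homes,
%                    habitat, or timber (the "values" component)
\[
  \mathrm{Risk} \;=\; \sum_{i} p(f_i)\,\Delta V(f_i)
\]
```

Under such a formulation, the same hazardous fuel conditions can imply very different levels of risk depending on the likelihood of fire and the values exposed, which is consistent with the report's point that risk management applies at every level, from individual firefighters on the ground to national budget decisions.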
From fiscal years 2004 through 2014, the Forest Service and Interior agencies obligated $14.9 billion for suppression, $13.4 billion for preparedness, and $5.7 billion for fuel reduction. Figure 3 shows the agencies’ total obligations for these three components of wildland fire management for fiscal years 2004 through 2014. After receiving its annual appropriation, the Forest Service allocates preparedness and fuel reduction funds to its nine regional offices, and those offices in turn allocate funds to individual field units (national forests and grasslands). Interior’s Office of Wildland Fire, upon receiving its annual appropriation, allocates preparedness and fuel reduction funds to BIA, BLM, FWS, and NPS. These agencies then allocate funds to their regional or state offices, which in turn allocate funds to individual field units (e.g., national parks or national wildlife refuges). The Forest Service and Interior agencies do not allocate suppression funding to their regions. These funds are managed at the national level. Federal wildland fire management policy has evolved over the past century in response to changing landscape conditions and greater recognition of fire’s role in maintaining resilient and healthy ecosystems. According to wildland fire historians, in the late 1800s and early 1900s, the nation experienced a series of large and devastating fires that burned millions of acres, including highly valued timber stands. In May 1908, federal legislation authorized the Forest Service to use any of its appropriations to fight fires. During the following decades, the Forest Service and Interior agencies generally took the view that fires were damaging and should be suppressed quickly, with policies and practices evolving gradually. For example, in 1935, the Forest Service issued the “10 a.m. policy,” which stated that whenever possible, every fire should be contained by 10 a.m. on the day after it was reported.
In more remote areas, suppression policies had minimal effect until fire towers, lookout systems, and roads in the 1930s facilitated fire detection and firefighter deployment. The use of aircraft to drop fire retardants—that is, chemicals designed to slow fire growth—began in the 1950s, according to agency documents. Subsequent to the introduction of the 10 a.m. policy, some changes to agency policies lessened the emphasis on suppressing all fires, as some federal land managers took note of the unintended consequences of suppression and took steps to address those effects. In 1943, for example, the Chief of the Forest Service permitted national forests to use prescribed fire to reduce fuels on a case-by-case basis. In 1968, NPS revised its fire policy, shifting its approach from suppressing all fires to managing fire by using prescribed burning and allowing fires started by lightning to burn in an effort to accomplish approved management objectives. In 1978, the Forest Service revised its policy to allow naturally ignited fires to burn in some cases, and formally abandoned the 10 a.m. policy. Two particularly significant fire events—the Yellowstone Fires of 1988, in which approximately 1.3 million acres burned, and the South Canyon Fire of 1994, in which 14 firefighters lost their lives—led the agencies to fundamentally reassess their approach to wildland fire management and develop the Federal Wildland Fire Management Policy of 1995. Under the 1995 policy, the agencies continued to move away from their emphasis on suppressing every wildland fire, seeking instead to (1) make communities and resources less susceptible to being damaged by wildland fire and (2) respond to fires so as to protect communities and important resources at risk while considering both the cost and long-term effects of that response. The policy was reaffirmed and updated in 2001, and guidance for its implementation was issued in 2003 and 2009.
In 2000, after one of the worst wildland fire seasons in 50 years, the President asked the Secretaries of Agriculture and the Interior to submit a report on managing the impact of wildland fires on communities and the environment. The report, along with congressional approval of increased appropriations for wildland fire management for fiscal year 2001, as well as other related activities, formed the basis of what is known as the National Fire Plan. The National Fire Plan emphasized the importance of reducing the buildup of hazardous vegetation that fuels severe fires, stating that unless hazardous fuels are reduced, the number of severe wildland fires and the costs associated with suppressing them would continue to increase. In 2003, Congress passed the Healthy Forests Restoration Act, with the stated purpose of, among other things, reducing wildland fire risk to communities, municipal water supplies, and other at-risk federal land through a collaborative process of planning, setting priorities for, and implementing fuel reduction projects. Along with the development of policies governing their responses to fire, the agencies developed a basic operational framework within which they manage wildland fire incidents. For example, to respond to wildland fires affecting both federal and nonfederal jurisdictions, firefighting entities in the United States have, since the 1970s, used an interagency incident management system. This system provides an organizational structure that expands to meet a fire’s complexity and demands, and allows entities to share firefighting personnel, aircraft, and equipment. Incident commanders who manage the response to each wildland fire may order firefighting assets through a three-tiered system of local, regional, and national dispatch centers. Federal, tribal, state, and local entities and private contractors supply the firefighting personnel, aircraft, equipment, and supplies which are dispatched through these centers.
The agencies continue to use this framework as part of their approach to wildland fire management. Since 2009, the five federal agencies have made several changes in their approach to wildland fire management. The agencies have issued fire management guidance which, among other things, gave their managers greater flexibility in responding to wildland fires by providing for responses other than full suppression of fires. In collaboration with nonfederal partners such as tribal and state governments, they have also developed a strategy aimed at coordinating federal and nonfederal wildland fire management activities around common goals, such as managing landscapes for resilience to fire-related disturbances. In addition, Interior, and BLM in particular, have placed a greater emphasis on wildland fire management efforts in the sagebrush steppe ecosystem by issuing guidance and developing strategies aimed at improving the condition of this landscape. The agencies have also taken steps to change other aspects of wildland fire management, including changes related to improving fire management technology, line officer training, and firefighter safety. Agency officials told us the agencies are moving toward a more risk-based approach to wildland fire management. The extent to which the agencies’ actions have resulted in on-the-ground changes varied across agencies and regions, however, and officials identified factors, such as proximity to populated areas, that may limit their implementation of some of these actions. The agencies have increased their emphasis on using wildland fire to provide natural resource benefits rather than seeking to suppress all fires, in particular through issuing the 2009 Guidance for Implementation of Federal Wildland Fire Management Policy. 
Compared with interagency guidance issued in 2003, the 2009 guidance provided greater flexibility to managers in responding to wildland fire to achieve natural resource benefits for forests and grasslands, such as reducing vegetation densities and stimulating regeneration and growth in some species. The 2003 guidance stated that only one “management objective” could be applied to a single wildland fire—meaning that wildland fires could either be managed to meet suppression objectives or managed for continued burning to provide natural resource benefits, but not both. The 2003 guidance also restricted a manager’s ability to switch between full suppression and management for natural resource benefits, even when fire conditions changed. In contrast, under the 2009 interagency guidance, managers may manage individual fires for multiple objectives, and may change the management objectives on a fire as it spreads across the landscape. For example, managers may simultaneously attempt to suppress part of a fire that is threatening infrastructure or valuable resources while allowing other parts of the same fire to burn to achieve desired natural resource benefits. According to agency documents, the 2009 guidance was intended to reduce barriers to risk-informed decision making, allowing the response to be more commensurate with the risk posed by the fire, the resources to be protected, and the agencies’ land management objectives. However, agency officials varied in their opinions about the extent to which this guidance changed their management practices, with some telling us it marked a departure from their past practices, and others telling us it did not significantly change the way they managed wildland fire. Several headquarters and regional agency officials told us the guidance improved managers’ ability to address natural resource needs when managing a fire, rather than simply suppressing all fires.
For example, BIA officials told us that the flexibility provided through the guidance allowed managers on the San Carlos Apache Reservation in southeastern Arizona to use a variety of management strategies to manage the 2014 Skunk Fire. According to a BIA fire ecologist, managers were able to maximize firefighter safety while fostering desirable ecological benefits, including helping to restore the historical fire regime to the area. In addition, Forest Service officials from several regions, including the Rocky Mountain and Intermountain Regions, told us they have used the full range of management options in the guidance more frequently over the last 5 years, and they credited the 2009 guidance for giving them the ability to manage fires and their associated risks. For example, during the 2011 Duckett Fire on the Pike-San Isabel National Forests in Colorado, managers attempted to contain part of the fire to protect a subdivision while allowing the portion of the fire uphill from the subdivision to burn into wilderness. Officials told us that, prior to the 2009 guidance, they would likely have responded to this fire by attempting full suppression, which could have put firefighters at risk at the upper part of the fire because of the steep and rugged terrain. In contrast, other officials told us the effect of the guidance was minimal because certain factors—including proximity to populated areas, size of the land management unit, and concerns about resources necessary to monitor fires—limit their ability to manage wildland fire incidents for anything other than suppression. For example, Forest Service officials from the Eastern Region told us that they try to use fire to provide natural resource benefits where possible, but they have fewer opportunities for doing so because of the smaller size of Forest Service land units in this region, which makes it more likely the fires will cross into nonfederal land, and their proximity to many areas of WUI. 
Similarly, Forest Service officials from the Pacific Southwest Region told us they are limited in using the added flexibility provided through the 2009 interagency guidance in Southern California, in part because the forests there are so close to major cities. However, in other more remote areas of California, these officials said they have managed wildland fires concurrently for one or more objectives, and objectives can change as the fire spreads across the landscape. Officials from BLM’s Utah State Office also told us that their changed landscape is a limiting factor in responding to wildland fire. Specifically, cheatgrass, a nonnative, highly flammable grass, has replaced much of the native vegetation of the sagebrush steppe ecosystem that used to exist on the lands they manage in western Utah. As a result, introducing fire into this area could be detrimental rather than helpful because cheatgrass’s flammability makes fires difficult to control. Several officials also told us that managing wildland fires for objectives beyond full suppression, as provided for in the 2009 guidance, is highly dependent on circumstance. Officials told us that allowing fires to burn requires the agencies to devote assets to monitoring the fires to prevent them from escaping, which—especially for long-duration fires—can reduce the assets available to respond to other fires that may occur. For example, in 2012, in response to what it predicted to be an expensive and above-normal fire season, the Forest Service issued guidance to its regions limiting the use of any strategy other than full suppression (i.e., any strategy that involved allowing fires to burn for natural resource benefits) for the remainder of that year. The Forest Service noted that it was issuing this guidance because of concerns about committing the assets necessary to monitor long-duration fires that were allowed to burn in order to provide natural resource benefits. 
In 2015, during the Thunder Creek fire in North Cascades National Park, concerns about the resources needed to monitor the fire if it were allowed to burn to provide natural resource benefits led NPS managers instead to order full suppression efforts to help ensure that the resources would be available for other fires. In a press release about the fire, NPS noted that experts anticipated a very high potential for wildfire in 2015, leading to agency concerns that significant fire activity throughout the west could leave few available firefighting resources later in the season. Another change since 2009 was the completion in 2014 of the National Cohesive Wildland Fire Management Strategy (Cohesive Strategy), developed in collaboration with partners from multiple jurisdictions (i.e., tribal, state, and local governments, nongovernmental partners, and public stakeholders) and aimed at coordinating wildland fire management activities around common wildland fire management goals. The agencies have a long history of collaboration with nonfederal partners in various aspects of wildland fire management, including mobilizing firefighting resources during wildland fire incidents and conducting fuel reduction projects across jurisdictions. The Cohesive Strategy is intended to set broad, strategic, nationwide direction for such collaboration. 
Specifically, the Cohesive Strategy provides a nationwide framework designed to more fully integrate fire management efforts across jurisdictions, manage risks, and protect firefighters, property, and landscapes by setting “broad, strategic, and national-level direction as a foundation for implementing actions and activities across the nation.” The vision of the Cohesive Strategy is “to safely and effectively extinguish fire, when needed; use fire where allowable; manage our natural resources; and as a nation, live with wildland fire.” The Cohesive Strategy identified three goals: (1) landscapes across all jurisdictions are resilient to fire-related disturbances in accordance with management objectives; (2) human populations and infrastructure can withstand wildfire without loss of life or property; and (3) all jurisdictions participate in developing and implementing safe, effective, and efficient risk-based wildfire management decisions. According to a senior Forest Service official, the Wildland Fire Leadership Council is responsible for providing a national, intergovernmental platform for implementing the strategy. In September 2014, an interim National Cohesive Strategy Implementation Task Group completed an implementation framework that included potential roles, responsibilities, and membership for a “national strategic committee” that is intended to provide oversight and leadership on implementing the strategy. Agency officials differed in the extent to which they viewed the Cohesive Strategy as having a significant effect on their wildland fire management activities. On the one hand, several headquarters and regional agency officials told us the Cohesive Strategy has improved wildland fire management. 
For example, Forest Service officials from the Southern Region told us the Cohesive Strategy has reinforced existing work that better enabled them to collaborate on new projects, which they told us is important because nearly 85 percent of the land base in the region is privately owned, and little could be achieved without collaboration. Forest Service officials cited one instance in which they signed a regional level agreement that will cover several state chapters of The Nature Conservancy to exchange resources for fuel reduction treatment and to promote public understanding of its benefits—an action they said was supported by the Cohesive Strategy. Similarly, Forest Service officials from the Intermountain Region told us about several efforts that have been implemented across their region that they attribute to the Cohesive Strategy. For example, in 2014, the Forest Service, the state of Utah, and other stakeholders collaborated on the implementation of Utah’s Catastrophic Wildfire Reduction Strategy, which aims to identify where fuel treatment across the state would be most beneficial. In contrast, many officials told us they have collaborated with partners for years and did not find the additional direction provided through the Cohesive Strategy to be much different than how they already operated. For example, several regional BLM, FWS, and NPS officials told us they have long worked with nonfederal partners on issues related to wildland fire management and that the Cohesive Strategy did not change those relationships. However, implementation of collaborative actions stemming from the Cohesive Strategy may be limited by such factors as differences in laws and policies among federal, tribal, state, and local agencies. 
For example, while the 2009 federal interagency guidance provided federal managers with additional flexibility in managing a single fire for multiple purposes, laws and regulations at the state and local levels typically require full suppression of all fires, according to the 2014 Quadrennial Fire Review. According to California state law, for instance, state forest officials are “charged with the duty of preventing and extinguishing forest fires.” Since 2009, Interior and BLM have placed a greater emphasis on wildland fire management, restoration, and protection related to the sagebrush steppe ecosystem—particularly with respect to habitat for the greater sage-grouse. Several changes, including urbanization and increased infrastructure built in support of various activities (e.g., roads and power lines associated with oil, gas, or renewable energy projects), have altered the sagebrush steppe ecosystem in the Great Basin region of the western United States. In addition, the introduction and spread of highly flammable invasive nonnative grasses such as cheatgrass have altered this ecosystem by increasing the frequency and intensity of fire. As of July 2015, FWS was evaluating whether to list the greater sage-grouse, a species reliant on the sagebrush steppe ecosystem, as a threatened or endangered species under the Endangered Species Act. FWS has noted the importance of fire and fuel management activities in reducing the threat to sage-grouse habitat. Beginning in 2011, BLM issued guidance to its state offices emphasizing the importance of sage-grouse habitat in fire operations and the need for fuel reduction activities to address concerns about the habitat, more than half of which is located on BLM-managed lands. In 2014, the agency issued guidance reiterating this importance and stating that it would make changes in funding to allow field units to place greater focus on reducing fire’s threats in sage-grouse habitat areas.
In January 2015, the Secretary of the Interior issued a Secretarial Order to enhance policies and strategies “for preventing and suppressing rangeland fire and for restoring sagebrush landscapes impacted by fire across the West.” The order established the Rangeland Fire Task Force and directed it to, among other things, complete a report on activities to be implemented ahead of the 2016 Western fire season. Under the order, the task force also was to address longer term actions to implement the policy and strategy set forth by the order. In a report issued in May 2015, An Integrated Rangeland Fire Management Strategy, the task force called for prepositioning firefighting assets where priority sage-grouse habitat exists, including moving assets from other parts of the country as available. The goal is to improve preparedness and suppression capability during initial stages of a wildfire to increase the chances of keeping fires small and reduce the loss of sage-grouse habitat. The report also identified actions aimed at improving the targeting of fuel reduction activities, including identifying priority landscapes and fuel management priorities within those landscapes. These actions are to be completed by the end of September 2015 and continuously improved upon in subsequent years. According to BLM state officials, the increased emphasis on sage-grouse habitat will significantly change how they manage their fuel reduction programs. BLM officials from states that include sage-grouse habitat said they expect a large increase in fuel reduction treatment funding and increased project approvals. In contrast, BLM officials from states without this habitat told us they expect significant funding decreases, limiting their capacity to address other resource issues important for nonsagebrush ecosystems. 
Since 2009, the agencies also have taken steps to change other areas of wildland fire management, including technology for wildland fire planning and response, line-officer training, and firefighter safety. Since 2009, the agencies have applied new technologies to improve wildland fire management planning and response. Prominent among them is the Wildland Fire Decision Support System (WFDSS), a Web-based decision-support tool that assists fire managers and analysts in making strategic and tactical decisions for fire incidents. WFDSS replaced older tools, some of which had been used for more than 30 years and were not meeting current fire management needs, according to the system’s website. According to this site, WFDSS has several advantages over the older systems, such as enabling spatial data layering, increasing use of map displays, preloading information about field units’ management objectives, and allowing for use in both single and multiple fire situations. Officials from several agencies told us that using WFDSS improved their ability to manage fires by allowing information from fire management plans to be loaded into WFDSS and providing substantial real-time fire information on which to make decisions. For example, one Forest Service official told us that, at one point in a recent particularly active fire season in the Pacific Northwest Region, the system processed information on approximately 20 concurrent fires that managers could monitor in real time. As a result, they were able to make strategic and risk-informed decisions about the resource allocations needed for each fire, including decisions to let some fires burn to meet natural resource benefit objectives. According to Forest Service reviews of several fires that occurred in 2012, however, some managers said WFDSS did not provide effective decision support for firefighters because the system underestimated fire behavior or did not have current information. 
According to officials from several agencies, another example of updated wildland fire technology has been the replacement of traditional paper-based fire management plans with electronic geospatial-based plans. Federal wildland fire management policy directs each agency to develop a fire management plan for all areas it manages with burnable vegetation. A fire management plan, among other things, identifies fire management goals for different parts of a field unit. According to an interagency document describing geospatial-based plans, agency officials expect such plans to increase efficiency because the plans can more easily be updated to account for changes in the landscape resulting from fires, fuel reduction treatments, and other management activities. In addition, the electronic format is designed to allow plans to more easily be shared across multiple users, including personnel responding to wildland fires. Agency officials mentioned other technological improvements, such as the development of an “Enterprise Geospatial Portal” providing wildland fire data in geospatial form using a Web-based platform, although many officials also told us that additional improvements are needed in wildland fire technology overall. In addition to specific technologies, in 2012 the Forest Service and Interior issued a report titled “Wildland Fire Information and Technology: Strategy, Governance, and Investments,” representing the agencies’ efforts to develop a common wildland fire information and technology vision and strategy. The agencies signed a Memorandum of Understanding later that same year intended to establish a common management approach for information and technology services. 
Nevertheless, the 2014 Quadrennial Fire Review concluded that the wildland fire management community does not have an agenda for innovation and technology adoption or a list of priorities, stating that the wildland fire community “sometimes struggles to define common technology priorities and implement integrated, enterprise-level solutions” and noting that there are more than 400 information technology systems in use by the wildland fire community. The report provides recommendations on actions the agencies could consider for improvement; however, because it was issued in May 2015, it is too early to determine what, if any, actions the agencies have taken. In commenting on a draft of this report, Interior stated that the agencies are completing an investment strategy for wildland fire applications and supporting infrastructure, but did not provide an expected date for its completion. Officials from several agencies told us that, since 2009, the agencies have increased training efforts, particularly those aimed at improving line officers’ knowledge about, and response to, wildland fires. Line officers are land unit managers such as national forest supervisors, BLM district managers, and national park superintendents. During a wildland fire, staff from “incident management teams” with specific wildland firefighting and management training manage the response, and line officers associated with the land unit where the fire is occurring must approve major decisions that incident management teams make during the response. Officials at BLM’s Oregon/Washington State Office, for example, told us they provide line officers with day-long simulation exercises, as well as shadowing opportunities that give line officers experience on actual wildland fires. 
Beginning in 2007, the Forest Service initiated a Line Officer Certification Program and began a coaching and mentoring program to provide on-the-ground experience for preparing line officers to act as agency administrators during wildland fires or other critical incidents. This program is aimed at providing officials who do not have wildland fire experience the opportunity to work under the advisement of a coach with wildland fire experience. According to Forest Service documents, this program has evolved substantially, in part to address the increased demand for skills necessary to manage increasingly complex wildland fires. In May 2015, the Forest Service issued guidance for the program and called for each Forest Service regional office to administer it within the regions. Officials told us that, since 2009, the agencies have, in some cases, changed firefighting tactics to better protect firefighters, including making greater use of natural barriers to contain fire instead of attacking fires directly. The agencies have also issued additional guidance aimed at emphasizing the primacy of firefighter safety. In 2010, the agencies developed and issued the “Dutch Creek Protocol” (named after a wildland fire where a firefighter died), which provided a standard set of protocols for wildland firefighting teams to follow during an emergency medical response or when removing and transporting personnel from a location on a fire. Both the Forest Service and Interior have also issued agency direction stating that firefighter safety should be the priority of every fire manager. The agencies assess the effectiveness of their wildland fire management programs in several ways, including through performance measures, efforts to assess specific activities, and reviews of specific wildland fire incidents. 
Both the Forest Service and Interior are developing new performance measures and evaluations, in part to help better assess the results of their current emphasis on risk-based management, according to agency officials. In addition, the agencies have undertaken multiple efforts, such as studies, to assess the effectiveness of activities including fuel reduction treatments and aerial firefighting. The agencies also conduct reviews of their responses to wildland fires. However, they have not consistently followed agency policy in doing so or used specific criteria for selecting the fires they have reviewed, limiting their ability to help ensure that their fire reviews provide useful information and meaningful results. Both the Forest Service and Interior use various performance measures, such as the number of WUI acres treated to reduce fuels and the percentage of wildland fires contained during initial attack, to assess their wildland fire management effectiveness. These measures are reported in, among other things, the agencies’ annual congressional budget justifications. Officials from both the Forest Service and Interior told us their performance measures need improvement to more appropriately reflect their approach to wildland fire management and, in June 2015, officials from both agencies told us that they were working to improve them. For example, several performance measures for both agencies use a “stratified cost index” to help analyze suppression costs on wildfires. The index is based on a model that compares the suppression costs of fires that have similar characteristics, such as fire size, fuel types, and proximity to communities, and identifies the percentage of fires with suppression costs that exceeded the index. 
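As a rough illustration of how such an index-based comparison works, the logic can be sketched as follows. This is a toy sketch only: the fire records and the cost model below are invented for illustration and are not the agencies' actual stratified cost index model or data.

```python
# Rough sketch of a stratified-cost-index-style comparison.
# The fire records and cost model below are invented for illustration;
# they are not the agencies' actual model or data.
fires = [
    {"name": "A", "acres": 12000, "near_community": True,  "actual_cost": 9.5e6},
    {"name": "B", "acres": 12000, "near_community": True,  "actual_cost": 4.0e6},
    {"name": "C", "acres": 3000,  "near_community": False, "actual_cost": 2.5e6},
]

def expected_cost(fire):
    """Toy stand-in for a model that predicts suppression cost from
    fire characteristics such as size and proximity to communities."""
    base = 500 * fire["acres"]  # illustrative dollars per acre
    return base * (1.5 if fire["near_community"] else 1.0)

# Identify fires whose actual suppression cost exceeded the modeled
# expectation, and the share of all fires they represent.
exceeded = [f["name"] for f in fires if f["actual_cost"] > expected_cost(f)]
pct_exceeding = 100 * len(exceeded) / len(fires)
```

The key design idea is the same as described above: rather than comparing raw costs across dissimilar fires, each fire is compared against an expectation derived from fires with similar characteristics.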
We found in a June 2007 report, however, that the index was not entirely reliable and that using the index as the basis for comparison may not allow the agencies to accurately identify fires where more, or more-expensive, resources than needed were used. The agencies continue to use the index, but have acknowledged its shortcomings. The Forest Service reported in its fiscal year 2016 budget justification to Congress that improvements were forthcoming. In April 2015, Forest Service officials told us they have incorporated detailed geospatial information into the model on which the index is based to help yield more accurate predictions of suppression expenditures and have submitted the model for peer review. Once that is complete, the agencies plan to begin to implement the updated model, but officials did not provide a time frame for doing so. Both agencies have also made efforts to improve their performance measures to better reflect their emphasis on a risk-based approach to wildland fire management. In fiscal year 2014, Interior began using a new performance measure intended to better reflect a variety of strategies in addition to full suppression: “Percent of wildfires on DOI-managed landscapes where the initial strategy (ies) fully succeeded during the initial response phase.” The same year, the Forest Service began developing a performance measure intended to reflect that, in some cases, allowing naturally-ignited fires to burn can provide natural resource benefits at a lower cost and lower risk to personnel than fully suppressing the fire as quickly as possible: “Percent of acres burned by natural ignition with resource benefits.” Forest Service officials told us they are working with field units to evaluate whether this measure will effectively assess their efforts to implement a risk-based approach to fire management and that they will adjust it as needed. The officials told us they plan to finalize the measure and use it in 2017. 
Also, in fiscal year 2014, the Forest Service began developing a performance measure that would assess the risk that wildland fire presents to highly valued resources such as communities and watersheds. This measure is known as the “National Forest System wildfire risk index.” According to the agency’s fiscal year 2016 budget justification, it would create an index of relative fire risk based on the likelihood of a large fire affecting these highly valued resources. It may also incorporate factors measuring the relative importance of these resources and the expected effects that might occur from fire. The Forest Service plans to establish a national baseline measure for this index in 2015 and then periodically remeasure it, likely every 2 years, to determine if overall risk has been reduced, according to Forest Service officials. Changes that could affect the index include those resulting from fuel reduction treatments, wildland fire, forest management activities, vegetative growth, and increased WUI development, among others, according to the agency’s 2016 budget justification. As with the performance measure described above, agency officials told us they will evaluate whether the measure meets their needs before adopting it; if it meets their needs, they plan to finalize the measure and use it in 2017. The agencies have also undertaken multiple efforts to assess the effectiveness of particular activities, such as fuel reduction and aerial firefighting. Regarding fuel reduction activities, we found in September 2007 and September 2009 that demonstrating the effectiveness of fuel reduction treatments is inherently complex and that the agencies did not have sufficient information to evaluate fuel treatment effectiveness, such as the extent to which treatments changed fire behavior. 
Without such information, we concluded that the agencies could not ensure that fuel reduction funds were directed to the areas where they can best minimize risk to communities and natural and cultural resources. Accordingly, we recommended that the agencies take actions to develop additional information on fuel treatment effectiveness. While the agencies took steps to address this recommendation, they are continuing efforts to improve their understanding of fuel treatment effectiveness. For example, the Forest Service and Interior agencies use a system called Fuel Treatment Effectiveness Monitoring to document and assess fuel reduction treatment effectiveness. The Forest Service began requiring such assessments in 2011 and Interior requested such assessments be completed starting in 2012. Under this approach, the agencies are to complete a monitoring report whenever a wildfire interacts with a fuel treatment and enter the information into the system. Officials told us that additional efforts are under way to help understand other aspects of fuel treatment effectiveness. For example, in February 2015, the Joint Fire Science Program completed its strategy to implement the 2014 Fuel Treatment Science Plan. It includes as one of its goals the “development of measures/metrics of effectiveness that incorporate ecological, social, resilience, and resource management objectives at the regional and national level.” The Forest Service and Interior are also implementing an effort known as the Aerial Firefighting Use and Effectiveness Study, begun in 2012 to address concerns about limited performance information regarding the use of firefighting aircraft. As part of this effort, the agencies are collecting information on how aerial retardant and suppressant delivery affects fire behavior and plan to use this and other collected information to track the performance of specific aircraft types, according to the study website. 
This will help the agencies identify ways to improve their current fleet of aircraft and inform future aerial firefighting operations and aviation strategic planning, according to the website. Agency officials told us the study is not a one-time activity, but is an ongoing effort to continually provide information to help improve their use of firefighting resources. The Forest Service and the Interior agencies have conducted reviews to assess their effectiveness in responding to wildland fires but have not consistently followed agency policy in doing so and did not always use specific criteria for selecting the fires they have reviewed. Officials from both the Forest Service and Interior told us that current agency policy regarding fire reviews overly emphasizes the cost of wildland fire suppression rather than the effectiveness of their response to fire. However, the agencies have neither updated their policies to better reflect their emphasis on effectiveness nor established specific criteria for selecting fires for review and conducting the reviews. By developing such criteria, the agencies may enhance their ability to obtain useful, comparable information about their effectiveness in responding to wildland fires, which, in turn, may help them identify needed improvements in their wildland fire approach. Congressional reports and agency policy have generally called for the agencies to review their responses to wildland fires involving federal expenditures of $10 million or more. For fiscal years 2003 through 2010, congressional committee reports directed the Forest Service and Interior to conduct reviews of large fire incidents, generally for the purpose of understanding how to better contain suppression costs; beginning in fiscal year 2006, these reports included a cost threshold, specifying that such reviews be conducted for fires involving federal expenditures of $10 million or more. 
The agencies, in turn, have each developed their own policies that generally direct them to review each fire that exceeds the $10 million threshold. The agencies, however, have not consistently conducted reviews of fire incidents meeting the $10 million threshold, in part because, according to officials, the cost-based threshold in current agency policy does not reflect the agencies’ focus on assessing the effectiveness of their response to fire. At the same time, the agencies have not developed specific criteria for selecting fire incidents for review. Forest Service officials told us that, rather than selecting all fires with federal expenditures of $10 million or more, they changed their approach to selecting fires to review. These officials told us that focusing exclusively on suppression costs when selecting fires limits the agency in choosing those fires where it can obtain important information and best assess management actions and ensure they are appropriate, risk-based, and effective. Forest Service officials told us the agency judgmentally selects incidents to review based on a range of broad criteria, such as complexity and national significance, taking into account political, social, natural resource, or policy concerns. Using these broad selection criteria, the Forest Service reviewed 5 wildland fires that occurred in 2012 and 10 that occurred in 2013. However, with these broad criteria it is not clear why the Forest Service selected those particular fires and not others. For example, the 2013 Rim Fire, which cost over $100 million to suppress—by far the costliest fire to suppress that year—and burned over 250,000 acres of land, was not among the 2013 fires selected for review. Moreover, the reviews completed for each of those years did not use consistent or specific criteria for conducting the reviews. 
As of July 2015, the agency had not selected the fires it would review from the 2014 wildland fire season and, when asked, agency officials did not indicate a time frame for doing so. Forest Service officials told us they believe it is appropriate to judgmentally select fires to provide them flexibility in identifying which fires to review and which elements of the fire response to analyze. Nevertheless, Forest Service officials also acknowledged the need to develop more specific criteria for selecting fires to review and conducting the reviews and, in July 2015, told us they are working to update their criteria for doing so. They provided us a draft update of the Forest Service policy manual, but this draft did not contain specific criteria for selecting fires for review or conducting the reviews. Moreover, officials did not provide a time frame for completing their update. Within Interior, BLM officials told us BLM completed its last fire review based on significant cost (i.e., federal expenditures of $10 million or more) in 2013. These officials told us that BLM, similar to the Forest Service, plans to shift the emphasis of its fire reviews to evaluate management actions rather than focusing on cost, and that officials are working to determine criteria for selecting fires for review. Interior headquarters officials told us that FWS and NPS have continued to follow the direction provided through their policies regarding reviews of fires that met the $10 million threshold. Interior headquarters officials, however, acknowledged the need to improve Interior’s approach to selecting fires for review to focus more on information about decision making rather than fire costs. In July 2015, the officials told us they plan to develop criteria other than cost for use by all Interior agencies in selecting fires to review, and that they plan to develop standard criteria for implementing the reviews. 
They stated that they expect this department-wide effort to be completed by the end of calendar year 2015 but did not provide information about how they planned to develop such criteria or the factors they would consider. Agency reports have likewise cited the need to improve both the processes for selecting fires for review and the implementation of the reviews. A 2010 report, for example, noted the importance of improving the selection of fires to review and stated that the agencies would benefit from a more productive review strategy. The report said the agencies’ existing approach to conducting reviews tended to produce isolated efforts and unrelated recommendations rather than establishing a consistent foundation for continuous improvement. A 2013 report assessing the usefulness of the Forest Service’s five reviews of 2012 fires noted shortcomings in consistency across the reviews, including unclear criteria for selecting fires and conducting reviews, as well as limitations in the specificity of the resulting reports and recommendations. As noted, both agencies have acknowledged the need to improve their criteria for selecting fires to review and conducting the reviews. By developing specific criteria in agency policies for selecting fires for review and conducting the reviews, the agencies may enhance their ability to help ensure that their fire reviews provide useful information and meaningful results. This is consistent with our previous body of work on performance management, which has shown that it is important for agencies to collect performance information to inform key management decisions, such as how to identify problems and take corrective actions and how to identify and share effective approaches. By collecting such performance information, the agencies may be better positioned to identify needed improvements in their wildland fire approach and thereby use their limited resources more effectively. 
The Forest Service and Interior determine the distribution of fire management resources in part on the basis of historical amounts but are developing new methods intended to better reflect current conditions. For suppression, the Forest Service and Interior manage funding as needed for units to respond to individual wildland fires. For preparedness, the Forest Service and Interior distribute resources based, in part, on historical funding levels generated by an obsolete system. The agencies are working to replace the system and develop new tools to help them distribute resources to reflect current landscape conditions, values at risk, and the probability of wildland fire. For fuel reduction, until recently, the Forest Service and Interior both distributed funds using the same system. In 2014, the Forest Service began using a new system to help it distribute fuel reduction funding in ways that better reflect current conditions. Interior is working to develop a system that likewise reflects current conditions. The agencies manage funding for suppression at the national level as needed for field units to respond to individual wildland fires. The overall amount of suppression funding the agencies obligate is determined by the complexity and number of wildland fire responses over the course of the fiscal year and can vary considerably from year to year. For example, federal agencies obligated approximately $1.7 billion for suppression in fiscal year 2006, $809 million in fiscal year 2010, and $1.9 billion in fiscal year 2012. (See app. II for more detailed information about suppression obligations by the Forest Service and the Interior agencies for fiscal years 2004 through 2014.) Each year, the agencies estimate the expected level of funding for suppression activities using the average of the previous 10 years of suppression obligations. 
The estimated amount, however, has often been less than the agencies’ actual suppression obligations, particularly for the Forest Service. In all but 2 years since 2000, Forest Service suppression obligations have exceeded the 10-year average that forms the basis of the agency’s annual appropriation. To pay for wildfire suppression activities when obligations are greater than the amount appropriated for suppression, the Forest Service and Interior may transfer funds from other programs within their respective agencies as permitted by law. As we found in a prior report, these transfers can affect the agencies’ ability to carry out other important land management functions that are key to meeting their missions, such as restoration of forest lands and other improvements. For example, according to a Forest Service report, funding transfers led to a canceled fuel reduction project on the Santa Fe National Forest and the deferral of critical habitat acquisition on the Cibola National Forest, both located in New Mexico. In their annual budget justifications for fiscal years 2015 and 2016, the agencies proposed an alternative mechanism to fund suppression activities. Under that proposal, the agencies would receive 70 percent of the standard 10-year average of suppression obligations as their appropriation for wildland fire suppression, which reflects the amount the agencies spend to suppress approximately 99 percent of wildland fires. If suppression obligations exceed this amount, additional funds would be made available from a disaster funding account. Forest Service and Interior officials told us this proposal would allow them to better account for the variable nature of wildland fire seasons and reduce or eliminate the need to transfer funds from other accounts to pay for suppression. In addition, legislation pending in Congress would change how certain wildland fire suppression operations are funded. 
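The arithmetic behind the current 10-year-average practice and the proposed 70-percent alternative can be sketched as follows. The obligation figures here are hypothetical, chosen only to make the calculation concrete; they are not actual agency data.

```python
# Sketch of the suppression funding mechanisms described above.
# The obligation figures are hypothetical, not actual agency data.
past_obligations = [1.7e9, 1.2e9, 0.8e9, 1.5e9, 0.9e9,
                    1.9e9, 1.1e9, 1.4e9, 1.0e9, 1.6e9]  # prior 10 fiscal years

# Current practice: estimate the coming year's suppression funding as the
# average of the previous 10 years of obligations.
ten_year_average = sum(past_obligations) / len(past_obligations)

# Proposed alternative: appropriate 70 percent of the 10-year average
# (roughly the amount spent suppressing about 99 percent of fires), with
# obligations beyond that drawn from a disaster funding account.
regular_appropriation = 0.70 * ten_year_average

def disaster_draw(actual_obligations):
    """Amount that would come from the disaster account in a given year."""
    return max(0.0, actual_obligations - regular_appropriation)
```

In a mild fire year the disaster draw is zero, while in a severe year the overage comes from the disaster account rather than from transfers out of other land management programs.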
The Forest Service and Interior distribute preparedness funding to their regions and agencies, respectively, based in part on information generated from a system that is now obsolete. The agencies attempted to develop a new system to distribute preparedness funding, but ended that effort in 2014 and are now working to develop different tools and systems. In distributing preparedness funds to individual forests, some Forest Service regions have developed additional tools to help them distribute funds; similarly, three of the four Interior agencies have developed additional tools to help them distribute preparedness funds to their regions. Overall preparedness obligations in 2014 totaled about $1.0 billion for the Forest Service and about $274 million for the Interior agencies. (See app. II for detailed information on each of the agencies’ obligations for preparedness for fiscal years 2004 through 2014.) To determine the distribution of preparedness funds from Forest Service headquarters to its regions, and from Interior to the department’s four agencies with wildland fire management responsibilities, the Forest Service and Interior rely primarily on amounts that are based on results from a budgeting system known as the National Fire Management Analysis System (NFMAS). That system, however, was terminated in the early 2000s, according to agency officials. Relying on the results from the last year NFMAS was used, and making only incremental changes from year to year, the Forest Service and Interior have not made significant shifts in the funding distribution across their respective regions and agencies over time, and they have generally maintained the same number and configuration of firefighting assets (e.g., fire engines and crews) in the same geographic areas from year to year. 
Several agency officials, however, told us that these amounts no longer reflect current conditions, in part because of changes to the landscape resulting from increased human development, climate change, and changes to land management policies that consider natural resource values differently than they did when NFMAS was in use. Beginning in 2002, the agencies attempted to replace NFMAS with an interagency system designed to help them determine the optimal mix and location of firefighting assets and distribute funds accordingly. In developing this system, known as the Fire Program Analysis system, the agencies’ goal was to develop “a comprehensive interagency process for fire planning and budget analysis identifying cost-effective programs to achieve the full range of fire management goals and objectives.” According to agency documents, this effort proved problematic because of the difficulty in modeling various aspects of wildland fire management. In addition, agency officials told us it is difficult to design a system that could account for multiple agencies’ different needs and varying missions. After more than a decade of work, and investment that Forest Service officials estimated at approximately $50 million, the agencies terminated the system’s development in September 2014. At that time, they stated that it “only delivered inconsistent and unacceptable results.” Since the termination of the Fire Program Analysis system, the agencies have continued to rely on results based on the terminated NFMAS, but have begun working on new tools to help them distribute funding and assets based on current conditions and updated information. Forest Service headquarters officials told us the agency is developing a new tool called the Wildland Fire Investment Portfolio System. 
According to these officials, this proposed system is intended to model scenarios such as large shifts in firefighting assets, various potential dispatch procedures, and changes in fire behavior due to climate change, which will allow managers, both at the national and individual unit level, to conduct resource trade-off analyses and assess whether assets are being used effectively. Forest Service officials told us that the agency is in the early stages of developing this proposed system and anticipates using it for planning and analysis purposes in fiscal year 2016. Interior documents state that Interior is developing a system called the Risk-Based Wildland Fire Management model, which Interior will use to help support funding distribution decisions to the four Interior agencies for both preparedness and fuel reduction. The proposed system will assess the probability and likely intensity of wildland fire, values at risk, and the expected value of acres likely to burn. A key element of this system will be the development of strategic business plans by each of the four Interior agencies, detailing how each agency intends to distribute its preparedness and fuel reduction funding to reduce the risks from wildland fire on its lands. Interior officials said that, once the agencies provide these business plans, Interior will assess them in making funding distribution decisions among the agencies. According to several Interior agency officials, identifying priority values at risk across Interior’s four agencies may be challenging given the variation in agency missions and the types of lands they manage. For example, a threatened species located primarily on BLM lands may be among BLM’s highest priorities, but a forested area relied upon by an Indian tribe for its livelihood may be among BIA’s highest priorities. 
Interior officials told us that they expect to identify the prioritized values and issue guidance on the proposed system by the end of calendar year 2015, and then use its results to inform their fiscal year 2016 funding distributions to the four agencies. Once the Forest Service distributes preparedness funding to regions, it gives regions discretion to determine how to subsequently distribute funding to individual national forests, as long as those determinations are consistent with policy and annual budget program direction. Forest Service headquarters officials told us they do not plan to direct regions to use any specific system to help inform distributions to national forests, so that regions can have flexibility in distributing their funds and take into account local conditions and priorities. According to agency officials, most regions distribute funding to individual national forests based on historical amounts resulting from NFMAS. However, two regions have changed the way they determine funding distribution to individual national forests to better reflect current landscape conditions. The Rocky Mountain Region uses a new system that ranks each of its forests according to a “risk priority score.” According to regional officials, use of the system has resulted in shifts in funding across forests in the region; for example, the officials told us they have provided additional resources to forests along Colorado’s Front Range because of increased development in the WUI. The Pacific Northwest Region also uses its own funding distribution tool, which considers elements such as fire occurrence and the number of available assets to develop a weighted value for each forest in the region. The region distributes the funding proportionally based on the values calculated for each forest. After obtaining preparedness funds from Interior, each agency—which, as noted, has its own land management responsibilities and mission—distributes these funds to its units.
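The kind of weighted, proportional distribution described for the Pacific Northwest Region can be sketched as follows. The forests, factors, and weights below are hypothetical illustrations, not the region’s actual data or formula; the sketch only shows the mechanics of combining factors into a weighted value and splitting a budget proportionally.

```python
# Hypothetical factor weights (e.g., normalized fire occurrence and
# available-asset scores); not the region's actual weighting scheme.
FACTOR_WEIGHTS = {"fire_occurrence": 0.6, "available_assets": 0.4}

# Hypothetical per-forest factor scores on a 0-1 scale.
forests = {
    "Forest A": {"fire_occurrence": 0.9, "available_assets": 0.5},
    "Forest B": {"fire_occurrence": 0.4, "available_assets": 0.8},
    "Forest C": {"fire_occurrence": 0.7, "available_assets": 0.3},
}

def weighted_value(factors):
    """Combine a forest's factor scores into a single weighted value."""
    return sum(FACTOR_WEIGHTS[name] * score for name, score in factors.items())

def distribute(budget, forests):
    """Split the budget in proportion to each forest's weighted value."""
    values = {name: weighted_value(f) for name, f in forests.items()}
    total = sum(values.values())
    return {name: budget * v / total for name, v in values.items()}

allocations = distribute(10_000_000, forests)
```

Under this scheme, a forest’s share rises with its weighted value but the regional total is fixed, so any increase to one forest necessarily comes at the expense of the others.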
Three of these agencies—BLM, FWS, and NPS—use newer systems and current information, such as updated fuel characterization and fire occurrence data, to distribute funding to their regional offices. The fourth agency, BIA, generally uses historical-based amounts (i.e., NFMAS results), but has made some changes to reflect updated priorities. The regions subsequently distribute funding to individual land units, typically using the same systems. The four agencies’ approaches are described below. BLM. BLM officials told us that, since 2010, they have used results from the Fire Program Decision Support System to help determine funding distributions to state offices. The system analyzes BLM’s fire workload and complexity using four components (fire suppression workload, fuel types, human risk, and additional fire resources) and assigns scores to state offices accordingly. Based on the resulting analyses, BLM has shifted funding across state offices to help better reflect current conditions. BLM officials told us that most states use the new system to help inform the distribution of funding to their units. BLM is also developing an additional component of the Fire Program Decision Support System to help offices determine the appropriate number of firefighting assets needed in each area. Officials expect to apply the new component with their overall system in the fall of 2015. FWS. In 2014, FWS began distributing its preparedness funding to regions using the Preparedness Allocation Tool. Officials told us that the tool uses historical wildland fire occurrence, proximity to WUI areas, and other information to inform preparedness funding distributions to regions. Agency officials told us that results from this tool did not generally identify the need for large funding shifts across units, but rather helped identify some smaller shifts to better reflect current landscape conditions.
Officials with one FWS region told us that the tool has helped the agency provide better assurance that funding amounts are risk-based and transparent. NPS. In 2013, primarily in response to reductions in its overall wildland fire management program funding, NPS began using a system called the Planning Data System to determine what level of firefighting workforce the agency could afford under different budget distribution scenarios. The system generates personnel requirements for each NPS unit by establishing a minimum number of people for any unit that meets certain criteria. Those results are rolled up to provide regional workforce requirements as well. The results generated from this system showed that some NPS regions, as well as individual park units, had existing wildland fire organizations that they could no longer adequately support in light of reduced budgets. BIA. BIA relies primarily on historical funding amounts derived from a system similar to NFMAS. However, BIA officials told us they have made adjustments to the historical amounts using professional judgment. BIA officials told us that the regions also still primarily use historical-based amounts to distribute funding to their units. The officials told us they will wait until Interior finalizes its Risk-Based Wildland Fire Management model before they develop a new funding distribution tool. Beginning in 2009, the Forest Service and Interior both used systems collectively known as the Hazardous Fuels Prioritization and Allocation System (HFPAS) to distribute fuel reduction funds. Officials told us these systems, based on similar concepts and approaches, were developed by the agencies to provide an interagency process for distributing fuel reduction funding to the highest-priority projects. Starting in 2014, the Forest Service instead began using a new system, which, according to officials, allows the agency to more effectively distribute fuel reduction funds.
Interior continues to distribute fuel reduction funding to the four agencies based on funding amounts derived from HFPAS, but it plans to develop a new system for distributing funds to reflect more current conditions and risks. Overall fuel reduction obligations in 2014 totaled about $302 million for the Forest Service and about $147 million for the Interior agencies. (See app. II for detailed information on the agencies’ fuel reduction obligations for fiscal years 2004 through 2014.) Forest Service officials told us their new system identifies locations where the highest probability of wildland fire intersects with important resources, such as residential areas and watersheds critical to municipal water supplies. These officials told us the new system allows the agency to invest its fuel reduction funds in areas where there are both a high probability of wildland fires and important resources at risk. In contrast, according to officials, HFPAS in some cases prioritized funding for areas where important resources, such as extensive WUI, existed but where the potential for wildland fires was low. The new system has identified locations for funding adjustments to Forest Service regions. For example, in 2015 the agency’s Eastern and Southern Regions received a smaller proportion of fuel reduction funding than they had previously received, and some western regions saw increases, because results from the system showed that the western regions had more areas with both important resources and high wildland fire potential. The Forest Service directs its regions to distribute fuel reduction funding to national forests using methods consistent with national information, as well as with specific local data. A senior Forest Service official told us that, as a result, most regions distribute funding to individual national forests based on information generated using HFPAS, augmented with local data. One region has developed a more up-to-date distribution approach.
Specifically, in 2012, the Rocky Mountain Region, in conjunction with the Rocky Mountain Research Station and Forest Service headquarters, developed a fuel reduction funding distribution tool that generates a risk priority score for each forest in the region. The risk priority score is based on fire probability, resources at risk from fire, potential fire intensity, and historical fire occurrence. Each forest’s risk priority score is used to inform the region’s distribution of funding to the national forests. Interior currently distributes fuel reduction funding to its agencies based on the funding amounts derived from HFPAS results that were last generated in 2013. Interior officials also told us they plan to stop using HFPAS results and instead use the new system they are developing, the Risk-Based Wildland Fire Management model, to reflect current information on conditions and risks in distributing fuel reduction funds. Within Interior, officials from the four agencies told us they have developed, or are in the process of developing, funding distribution systems and tools while they wait for Interior to complete the Risk-Based Wildland Fire Management model. BLM, for example, uses a fuel reduction funding distribution tool that maps values at risk, including WUI, critical infrastructure, sagebrush habitat, and invasive species data. BLM combines this information with data on wildland fire probability to create a spatial illustration of the values at risk relative to potential fire occurrence. BLM then uses the results of this analysis to fund its state offices. BIA uses its own tool to distribute fuel reduction funding to its regions based on wildland fire potential data generated by the Forest Service. That information is then combined with fire occurrence history and workload capacity to generate a model that shows potential fire risk and capacity across BIA units.
FWS officials told us they are developing a fuel reduction funding distribution tool, expected to be used for fiscal year 2016, which considers fire risks associated with each FWS unit. FWS officials told us this tool will identify risk reduction over longer periods of time, contain an accountability function to monitor results, and share many attributes with FWS’s preparedness allocation tool. NPS officials told us the agency will continue to rely on historical amounts, based largely on HFPAS. Similar to the previous Interior distribution approach, NPS distributes funding for specific projects identified at the headquarters level. However, if a unit is not able to implement an identified project, the unit can substitute other projects, as necessary. Faced with the challenge of working to protect people and resources from the unwanted effects of wildland fire while also recognizing that fire is an inevitable part of the landscape, the federal wildland fire agencies have taken steps aimed at improving their approaches to wildland fire management. Their 2009 update to interagency guidance, for example, was designed to continue moving away from the agencies’ decades-long emphasis on suppressing all fires, by giving fire managers more flexibility in responding to fires. In addition, the agencies are working to develop more up-to-date systems for distributing wildland fire resources. A central test of such changes, however, is the extent to which they help ensure appropriate and effective agency responses to fires when they occur. The agencies have acknowledged the importance of reviewing their responses to individual wildland fires to understand their effectiveness and identify possible improvements. However, the agencies have not systematically followed agency policy regarding such fire reviews and, in the reviews they have conducted, they have not used specific criteria in selecting fires and conducting the reviews.
Officials from both the Forest Service and Interior told us cost alone should not be the basis for such reviews and have acknowledged the need to improve their criteria for selecting fires and conducting reviews. Draft guidance provided by the Forest Service did not contain specific criteria for such reviews, however, and Interior officials did not provide information about how they planned to develop criteria or the factors they would consider. By developing specific criteria for selecting fires to review and conducting the reviews, and making commensurate changes to agency policies to help ensure the criteria are consistently applied, the agencies may enhance their ability to ensure that their fire reviews provide useful information and meaningful results. This, in turn, could better position them to identify improvements in their approach to wildland fire management and thereby use their limited resources more effectively. To better ensure that the agencies have sufficient information to understand the effectiveness of their approach to wildland fires, and to better position them to develop appropriate and effective strategies for wildland fire management, we recommend that the Secretaries of Agriculture and the Interior direct the Chief of the Forest Service and the Director of the Office of Wildland Fire to take the following two actions:

- Develop specific criteria for selecting wildland fires for review and for conducting the reviews as part of their efforts to improve their approach to reviewing fires, and
- once such criteria are established, revise agency policies to align with the specific criteria developed by the agencies.

We provided a draft of this report for review and comment to the Departments of Agriculture and the Interior. The Forest Service (responding on behalf of the Department of Agriculture) and Interior generally agreed with our findings and recommendations, and their written comments are reproduced in appendixes IV and V respectively.
Both agencies stated that they are developing criteria for selecting fires to review and conducting reviews. Both agencies also provided technical comments which we incorporated into our report as appropriate. Interior also provided additional information about wildland fire technology, which we likewise incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretaries of Agriculture and the Interior, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions regarding this report, please contact me at (202) 512-3841 or fennella@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to the report are listed in appendix VI. This report examines (1) key changes the federal wildland fire agencies have made in their approach to wildland fire management since 2009, (2) how the agencies assess the effectiveness of their wildland fire management programs, and (3) how the agencies determine the distribution of their wildland fire management resources. To perform this work, we reviewed laws, policies, guidance, academic literature, and reviews related to federal wildland fire management. These included the 1995 Federal Wildland Fire Management Policy and subsequent implementation guidance, the Interagency Standards for Fire and Fire Aviation Operations, and the 2009 and 2014 Quadrennial Fire Reviews. 
We also interviewed headquarters officials from each of the five federal land management agencies responsible for wildland fire management—the Forest Service in the Department of Agriculture and the Bureau of Indian Affairs (BIA), Bureau of Land Management (BLM), Fish and Wildlife Service (FWS), and National Park Service (NPS) in the Department of the Interior—as well as Interior’s Office of Wildland Fire. We also conducted semistructured interviews of regional officials in each of the agencies to obtain information about issues specific to particular regions and understand differences across regions. We interviewed wildland fire management program officials from each of the 9 Forest Service regional offices, 11 of BLM’s 12 state offices, and 2 regional offices each for BIA, FWS, and NPS. We focused these regional interviews primarily on the Forest Service and BLM because those agencies receive the greatest percentage of appropriated federal wildland fire funding. For BIA, FWS, and NPS, we selected the two regions from each agency that received the most funds in those agencies—BIA’s Northwest and Western Regions, FWS’s Southwest and Southeast Regions, and NPS’s Pacific West and Intermountain Regions. We conducted a total of 25 semistructured interviews of regional offices. During these semistructured interviews we asked about (1) significant changes to the agencies’ approach to wildland fire management, including regional efforts to implement the policy areas identified in the 2009 interagency Guidance for Implementation of Federal Wildland Fire Management Policy, (2) agency efforts to assess the effectiveness of their wildland fire management activities, and (3) agency processes for determining the distribution of fire management resources. We focused our review on three primary components of wildland fire management—suppression, preparedness, and fuel reduction—because they account for the highest spending amounts among wildland fire management activities.
To address our first objective, we reviewed agency documents, such as policy and guidance, as well as other documents such as agency budget justifications, to identify changes the agencies have made to their approach to managing wildland fire since 2009, efforts the agencies have undertaken to address wildland fire management challenges, agency-identified improvements resulting from those changes, and challenges associated with implementing them. Our review focuses on changes since 2009 because we last completed a comprehensive review of wildland fire management in that year, and because the agencies’ last significant change to interagency wildland fire management guidance for implementing the Federal Wildland Fire Management Policy also occurred that year. To further our understanding of these issues, we also asked about these changes in our interviews with agency headquarters officials. In particular, we asked about the extent to which changes to the agencies’ wildland fire management approaches have occurred or are planned, the effects of these changes, and associated challenges. In addition, we relied on the semistructured interviews of regional officials described above to understand how the regions implemented national direction and policy. We analyzed the responses provided to us during the interviews to identify common themes about prominent changes since 2009, and challenges associated with implementing those changes. The information we report represents themes that occurred frequently in our interviews with both regional and headquarters officials. We did not report on changes described during our interviews that were not directly related to wildland fire management, such as changes to general workforce management policies. To address our second objective, we reviewed agency strategic plans and budget justifications describing performance measures, as well as other documents associated with agency efforts to assess their programs, including fire reviews.
We also reviewed legislative and agency direction related to fire reviews, including agency policies and the Interagency Standards for Fire and Fire Aviation Operations, and reviewed reports resulting from fire reviews conducted by the agencies since 2009. We compared agency practices for conducting fire reviews to direction contained in relevant agency policy. We also interviewed headquarters officials to identify the agencies’ key performance measures and the extent to which those measures reflect changing approaches to wildland fire management. In our interviews with headquarters and regional officials, we also inquired about other mechanisms the agencies use to determine the effectiveness of their wildland fire management programs, as well as any changes they are making in this area. To obtain additional insight into the use of performance information on the part of federal agencies, we also reviewed our previous reports related to agencies’ use of performance information. To address our third objective, we reviewed relevant agency budget documentation, including annual budget justifications and documentation of agency obligations, as well as information about the tools and systems the agencies use to distribute funds and resources. We did not assess the design or use of any of the agencies’ tools or systems for distributing funds. We interviewed agency officials at the headquarters and regional levels to identify the processes they use for budget formulation and resource distribution. We asked about the extent to which these processes have changed in recent years at the headquarters and regional levels for each of the five agencies and the extent to which they have changed funding and resource amounts. 
We also obtained data from the Forest Service and from Interior’s Office of Wildland Fire on obligations for each of the three primary wildland fire management components—suppression, preparedness, and fuel reduction—from fiscal years 2004 through 2014, analyzing the data in both nominal (actual) and constant (adjusted for inflation) terms. Adjusting nominal dollars to constant dollars allows the comparison of purchasing power across fiscal years. To adjust for inflation, we used the gross domestic product price index with 2014 as the base year. We reviewed budget documents and obligation data provided by the agencies and interviewed agency officials knowledgeable about the data; we found the data sufficiently reliable for the purposes of this report. We conducted this performance audit from August 2014 to September 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This appendix provides information on preparedness, fuel reduction, and suppression obligations by the Forest Service and the Department of the Interior’s four wildland fire agencies—the Bureau of Indian Affairs, Bureau of Land Management, Fish and Wildlife Service, and National Park Service—for fiscal years 2004 through 2014. Figures 4, 5, and 6 show overall agency obligations for preparedness, fuel reduction, and suppression for fiscal years 2004 through 2014. Individual agencies’ obligations for each of the three programs are described later in this appendix. Table 1 and figure 7 show annual Forest Service wildland fire management obligations for fiscal years 2004 through 2014.
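The two calculations used throughout this appendix can be sketched as follows: converting a nominal amount to constant base-year dollars with a price index, and computing an average annual (geometric) rate of change between two fiscal years. The index values and dollar amounts below are illustrative placeholders, not the actual gross domestic product price index or agency obligation data.

```python
def to_constant_dollars(nominal, index_year, index_base_year):
    """Restate a nominal amount in base-year dollars using price index values."""
    return nominal * index_base_year / index_year

def avg_annual_change(start, end, years):
    """Geometric average annual rate of change over the given span."""
    return (end / start) ** (1.0 / years) - 1.0

# Illustrative: $760 million in fiscal year 2004 dollars, with a hypothetical
# price index of 85.0 in 2004 versus 100.0 in the 2014 base year.
constant_2014 = to_constant_dollars(760.0, index_year=85.0, index_base_year=100.0)

# Illustrative: nominal growth from $760 million to $1,040 million over 10 years
# works out to an average annual increase of roughly 3.2 percent.
rate = avg_annual_change(760.0, 1040.0, 10)
```

Because the rate is geometric rather than a simple average of year-over-year changes, it is the single constant rate that, compounded over the span, reproduces the ending amount.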
Preparedness obligations increased from nearly $760 million in fiscal year 2004 to about $1.0 billion in fiscal year 2014, an average annual increase of 3.2 percent, or 1.2 percent after adjusting for inflation. Fuel reduction obligations increased from about $284 million in fiscal year 2004 to about $302 million in fiscal year 2014, an average annual increase of 0.6 percent, or a 1.4 percent decrease after adjusting for inflation. Suppression obligations fluctuated from year to year, with a high of about $1.4 billion in fiscal year 2012 and a low of about $525 million in fiscal year 2005. Table 2 and figure 8 show annual Bureau of Indian Affairs wildland fire management obligations for fiscal years 2004 through 2014. Preparedness obligations decreased from nearly $58 million in fiscal year 2004 to about $51 million in fiscal year 2014, an average annual decrease of 1.3 percent, or 3.2 percent after adjusting for inflation. Fuel reduction obligations decreased from about $39 million in fiscal year 2004 to about $30 million in fiscal year 2014, an average annual decrease of 2.6 percent, or 4.5 percent after adjusting for inflation. Suppression obligations fluctuated from year to year, with a high of about $105 million in fiscal year 2012 and a low of about $43 million in fiscal year 2010. Table 3 and figure 9 show annual Bureau of Land Management wildland fire management obligations from fiscal years 2004 through 2014. Preparedness obligations increased from nearly $152 million in fiscal year 2004 to about $160 million in fiscal year 2014, an average annual increase of 0.6 percent, or a 1.4 percent decrease after adjusting for inflation. Fuel reduction obligations decreased from about $98 million in fiscal year 2004 to about $75 million in fiscal year 2014, an average annual decrease of 2.6 percent, or 4.6 percent after adjusting for inflation.
Suppression obligations fluctuated from year to year, with a high of about $299 million in fiscal year 2007 and a low of about $130 million in fiscal year 2009. Table 4 and figure 10 show annual Fish and Wildlife Service wildland fire management obligations for fiscal years 2004 through 2014. Preparedness obligations decreased from about $33 million in fiscal year 2004 to about $27 million in fiscal year 2014, an average annual decrease of 2.1 percent, or 4.1 percent after adjusting for inflation. Fuel reduction obligations decreased from about $24 million in fiscal year 2004 to about $21 million in fiscal year 2014, an average annual decrease of 1.5 percent, or 3.5 percent after adjusting for inflation. Suppression obligations fluctuated from year to year, with a high of about $41 million in fiscal year 2011 and a low of about $4 million in fiscal year 2010. Table 5 and figure 11 show annual National Park Service wildland fire management obligations for fiscal years 2004 through 2014. Obligations for preparedness increased from about $35 million in fiscal year 2004 to about $36 million in fiscal year 2014, an average annual increase of 0.5 percent, or a 1.5 percent decrease after adjusting for inflation. Fuel reduction obligations decreased from about $31 million in fiscal year 2004 to about $21 million in fiscal year 2014, an average annual decrease of 3.7 percent, or 5.6 percent after adjusting for inflation. Suppression obligations fluctuated from year to year, with a high of about $58 million in fiscal year 2006 and a low of about $22 million in fiscal year 2009. The Forest Service and the Department of the Interior use different approaches for paying the base salaries of their staff during wildland fire incidents.
For periods when firefighters are dispatched to fight fires, the Forest Service generally pays its firefighters’ base salaries using suppression funds, whereas Interior pays its firefighters’ base salaries primarily using preparedness funds. Forest Service officials told us that under this approach, regional offices, which are responsible for hiring firefighters in advance of the fire season, routinely hire more firefighters than their preparedness budgets will support, assuming they can rely on suppression funds to pay the difference. Forest Service officials told us that their funding approach helps the agency maintain its firefighting capability over longer periods of time during a season and accurately track the overall costs of fires. Interior officials told us they choose to use preparedness funds to pay their firefighters’ base salaries during a wildland fire because it constitutes a good business practice. According to a Wildland Fire Leadership Council document, in 2003, the council agreed that the agencies would use a single, unified approach and pay firefighters’ base salary using Interior’s method of using preparedness funds. However, the council subsequently noted that in 2004 the Office of Management and Budget directed the Forest Service to continue using suppression funds to pay firefighters’ base salaries. The agencies have used separate approaches since 2004. In addition to the individual named above, Steve Gaty (Assistant Director), Ulana M. Bihun, Richard P. Johnson, Lesley Rinner, and Kyle M. Stetler made key contributions to this report. Important contributions were also made by Cheryl Arvidson, Mark Braza, William Carrigg, Carol Henn, Benjamin T. Licht, Armetha Liles, and Kiki Theodoropoulos. Wildland Fire Management: Improvements Needed in Information, Collaboration, and Planning to Enhance Federal Fire Aviation Program Success. GAO-13-684. Washington, D.C.: August 20, 2013. 
Station Fire: Forest Service’s Response Offers Potential Lessons for Future Wildland Fire Management. GAO-12-155. Washington, D.C.: December 16, 2011. Arizona Border Region: Federal Agencies Could Better Utilize Law Enforcement Resources in Support of Wildland Fire Management Activities. GAO-12-73. Washington, D.C.: November 8, 2011. Wildland Fire Management: Federal Agencies Have Taken Important Steps Forward, but Additional Action Is Needed to Address Remaining Challenges. GAO-09-906T. Washington, D.C.: July 21, 2009. Wildland Fire Management: Federal Agencies Have Taken Important Steps Forward, but Additional, Strategic Action Is Needed to Capitalize on Those Steps. GAO-09-877. Washington, D.C.: September 9, 2009. Wildland Fire Management: Actions by Federal Agencies and Congress Could Mitigate Rising Fire Costs and Their Effects on Other Agency Programs. GAO-09-444T. Washington, D.C.: April 1, 2009. Forest Service: Emerging Issues Highlight the Need to Address Persistent Management Challenges. GAO-09-443T. Washington, D.C.: March 11, 2009. Wildland Fire Management: Interagency Budget Tool Needs Further Development to Fully Meet Key Objectives. GAO-09-68. Washington, D.C.: November 24, 2008. Wildland Fire Management: Federal Agencies Lack Key Long- and Short-Term Management Strategies for Using Program Funds Effectively. GAO-08-433T. Washington, D.C.: February 12, 2008. Forest Service: Better Planning, Guidance, and Data Are Needed to Improve Management of the Competitive Sourcing Program. GAO-08-195. Washington, D.C.: January 22, 2008. Wildland Fire Management: Better Information and a Systematic Process Could Improve Agencies’ Approach to Allocating Fuel Reduction Funds and Selecting Projects. GAO-07-1168. Washington, D.C.: September 28, 2007. Natural Hazard Mitigation: Various Mitigation Efforts Exist, but Federal Efforts Do Not Provide a Comprehensive Strategic Framework. GAO-07-403. Washington, D.C.: August 22, 2007. 
Wildland Fire: Management Improvements Could Enhance Federal Agencies’ Efforts to Contain the Costs of Fighting Fires. GAO-07-922T. Washington, D.C.: June 26, 2007. Wildland Fire Management: A Cohesive Strategy and Clear Cost-Containment Goals Are Needed for Federal Agencies to Manage Wildland Fire Activities Effectively. GAO-07-1017T. Washington, D.C.: June 19, 2007. Wildland Fire Management: Lack of Clear Goals or a Strategy Hinders Federal Agencies’ Efforts to Contain the Costs of Fighting Fires. GAO-07-655. Washington, D.C.: June 1, 2007. Department of the Interior: Major Management Challenges. GAO-07-502T. Washington, D.C.: February 16, 2007. Wildland Fire Management: Lack of a Cohesive Strategy Hinders Agencies’ Cost-Containment Efforts. GAO-07-427T. Washington, D.C.: January 30, 2007. Biscuit Fire Recovery Project: Analysis of Project Development, Salvage Sales, and Other Activities. GAO-06-967. Washington, D.C.: September 18, 2006. Wildland Fire Rehabilitation and Restoration: Forest Service and BLM Could Benefit from Improved Information on Status of Needed Work. GAO-06-670. Washington, D.C.: June 30, 2006. Wildland Fire Suppression: Better Guidance Needed to Clarify Sharing of Costs between Federal and Nonfederal Entities. GAO-06-896T. Washington, D.C.: June 21, 2006. Wildland Fire Suppression: Lack of Clear Guidance Raises Concerns about Cost Sharing between Federal and Nonfederal Entities. GAO-06-570. Washington, D.C.: May 30, 2006. Wildland Fire Management: Update on Federal Agency Efforts to Develop a Cohesive Strategy to Address Wildland Fire Threats. GAO-06-671R. Washington, D.C.: May 1, 2006. Natural Resources: Woody Biomass Users’ Experiences Provide Insights for Ongoing Government Efforts to Promote Its Use. GAO-06-694T. Washington, D.C.: April 27, 2006. Natural Resources: Woody Biomass Users’ Experiences Offer Insights for Government Efforts Aimed at Promoting Its Use. GAO-06-336. Washington, D.C.: March 22, 2006.
Wildland Fire Management: Timely Identification of Long-Term Options and Funding Needs Is Critical. GAO-05-923T. Washington, D.C.: July 14, 2005. Natural Resources: Federal Agencies Are Engaged in Numerous Woody Biomass Utilization Activities, but Significant Obstacles May Impede Their Efforts. GAO-05-741T. Washington, D.C.: May 24, 2005. Natural Resources: Federal Agencies Are Engaged in Various Efforts to Promote the Utilization of Woody Biomass, but Significant Obstacles to Its Use Remain. GAO-05-373. Washington, D.C.: May 13, 2005. Technology Assessment: Protecting Structures and Improving Communications during Wildland Fires. GAO-05-380. Washington, D.C.: April 26, 2005. Wildland Fire Management: Progress and Future Challenges, Protecting Structures, and Improving Communications. GAO-05-627T. Washington, D.C.: April 26, 2005. Wildland Fire Management: Forest Service and Interior Need to Specify Steps and a Schedule for Identifying Long-Term Options and Their Costs. GAO-05-353T. Washington, D.C.: February 17, 2005. Wildland Fire Management: Important Progress Has Been Made, but Challenges Remain to Completing a Cohesive Strategy. GAO-05-147. Washington, D.C.: January 14, 2005. Wildland Fires: Forest Service and BLM Need Better Information and a Systematic Approach for Assessing the Risks of Environmental Effects. GAO-04-705. Washington, D.C.: June 24, 2004. Federal Land Management: Additional Guidance on Community Involvement Could Enhance Effectiveness of Stewardship Contracting. GAO-04-652. Washington, D.C.: June 14, 2004. Wildfire Suppression: Funding Transfers Cause Project Cancellations and Delays, Strained Relationships, and Management Disruptions. GAO-04-612. Washington, D.C.: June 2, 2004. Biscuit Fire: Analysis of Fire Response, Resource Availability, and Personnel Certification Standards. GAO-04-426. Washington, D.C.: April 12, 2004. Forest Service: Information on Appeals and Litigation Involving Fuel Reduction Activities. GAO-04-52. 
Washington, D.C.: October 24, 2003. Geospatial Information: Technologies Hold Promise for Wildland Fire Management, but Challenges Remain. GAO-03-1047. Washington, D.C.: September 23, 2003. Geospatial Information: Technologies Hold Promise for Wildland Fire Management, but Challenges Remain. GAO-03-1114T. Washington, D.C.: August 28, 2003. Wildland Fire Management: Additional Actions Required to Better Identify and Prioritize Lands Needing Fuels Reduction. GAO-03-805. Washington, D.C.: August 15, 2003. Wildland Fires: Forest Service’s Removal of Timber Burned by Wildland Fires. GAO-03-808R. Washington, D.C.: July 10, 2003. Forest Service: Information on Decisions Involving Fuels Reduction Activities. GAO-03-689R. Washington, D.C.: May 14, 2003. Wildland Fires: Better Information Needed on Effectiveness of Emergency Stabilization and Rehabilitation Treatments. GAO-03-430. Washington, D.C.: April 4, 2003. Major Management Challenges and Program Risks: Department of the Interior. GAO-03-104. Washington, D.C.: January 1, 2003. Results-Oriented Management: Agency Crosscutting Actions and Plans in Border Control, Flood Mitigation and Insurance, Wetlands, and Wildland Fire Management. GAO-03-321. Washington, D.C.: December 20, 2002. Wildland Fire Management: Reducing the Threat of Wildland Fires Requires Sustained and Coordinated Effort. GAO-02-843T. Washington, D.C.: June 13, 2002. Wildland Fire Management: Improved Planning Will Help Agencies Better Identify Fire-Fighting Preparedness Needs. GAO-02-158. Washington, D.C.: March 29, 2002. Severe Wildland Fires: Leadership and Accountability Needed to Reduce Risks to Communities and Resources. GAO-02-259. Washington, D.C.: January 31, 2002. Forest Service: Appeals and Litigation of Fuel Reduction Projects. GAO-01-1114R. Washington, D.C.: August 31, 2001. The National Fire Plan: Federal Agencies Are Not Organized to Effectively and Efficiently Implement the Plan. GAO-01-1022T. Washington, D.C.: July 31, 2001. 
Reducing Wildfire Threats: Funds Should Be Targeted to the Highest Risk Areas. GAO/T-RCED-00-296. Washington, D.C.: September 13, 2000. Fire Management: Lessons Learned From the Cerro Grande (Los Alamos) Fire. GAO/T-RCED-00-257. Washington, D.C.: August 14, 2000. Fire Management: Lessons Learned From the Cerro Grande (Los Alamos) Fire and Actions Needed to Reduce Fire Risks. GAO/T-RCED-00-273. Washington, D.C.: August 14, 2000.
Wildland fire plays an important ecological role in maintaining healthy ecosystems. Over the past century, however, various land management practices, including fire suppression, have disrupted the normal frequency of fires and have contributed to larger and more severe wildland fires. Wildland fires cost billions to fight each year, result in loss of life, and cause damage to homes and infrastructure. In fiscal years 2009 through 2014, the five federal wildland fire agencies obligated a total of $8.3 billion to suppress wildland fires. GAO was asked to review multiple aspects of federal wildland fire management across the five federal wildland fire management agencies. This report examines (1) key changes the federal wildland fire agencies have made in their approach to wildland fire management since 2009, (2) how the agencies assess the effectiveness of their wildland fire management programs, and (3) how the agencies determine the distribution of their wildland fire management resources. GAO reviewed laws, policies, and guidance related to wildland fire management; reviewed agency performance measures; analyzed obligation data for fiscal years 2004 through 2014; and interviewed officials from the five agencies, as well as Interior's Office of Wildland Fire. Since 2009, the five federal agencies responsible for wildland fire management—the Forest Service within the Department of Agriculture and the Bureau of Indian Affairs, Bureau of Land Management, Fish and Wildlife Service, and National Park Service in the Department of the Interior—have made several key changes in their approach to wildland fire management. One key change was the issuance of agency guidance in 2009 that provided managers with more flexibility in responding to wildland fires. This change allowed managers to consider different options for response given land management objectives and the risk posed by the fire. 
The agencies also worked with nonfederal partners to develop a strategy aimed at coordinating wildland fire management activities around common goals. The extent to which the agencies' steps have resulted in on-the-ground changes varied across agencies and regions, however, and officials identified factors, such as proximity to populated areas, that may limit their implementation of some changes. The agencies assess the effectiveness of their wildland fire management programs in several ways, including through performance measures and reviews of specific wildland fires. The agencies are developing new performance measures, in part to help better assess the results of their current emphasis on risk-based management, according to agency officials. However, the agencies have not consistently followed agency policy regarding fire reviews, which calls for reviews of all fires resulting in federal suppression expenditures of $10 million or more, nor have they used specific criteria for the reviews they have conducted. GAO has previously found that it is important for agencies to collect performance information to inform key management decisions and to identify problems and take corrective actions. Forest Service and Interior officials said focusing only on suppression costs does not allow them to identify the most useful fires for review, and they told GAO they are working to improve their criteria for selecting fires to review and conducting these reviews. Forest Service officials did not indicate a time frame for their efforts, and while they provided a draft update of their policy manual, it did not contain specific criteria. Interior officials told GAO they expect to develop criteria by the end of 2015, but did not provide information about how they planned to develop such criteria or the factors they would consider. 
By developing specific criteria for selecting fires to review and conducting reviews, and making commensurate changes to agency policies, the agencies may enhance their ability to help ensure that their fire reviews provide useful information about the effectiveness of their wildland fire activities. The Forest Service and Interior determine the distribution of fire management resources for their three primary wildland fire activities (suppression, preparedness, and fuel reduction) in part on the basis of historical funding amounts. For suppression, the Forest Service and Interior manage suppression funding as needed for responding to wildland fires, estimating required resources using the average of the previous 10 years of suppression obligations. For preparedness and fuel reduction, the Forest Service and Interior distribute resources based primarily on historical amounts. Both are working to distribute resources in ways that better reflect current conditions, including by developing new systems that they said they plan to begin using in fiscal year 2016. GAO recommends that the agencies develop specific criteria for selecting wildland fires for review and conducting the reviews, and revise agency policies accordingly. The agencies generally agreed with GAO's findings and recommendations.
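The 10-year averaging approach described above for estimating suppression funding can be sketched in a few lines. The function name and the figures below are illustrative assumptions, not actual agency data or an agency tool.

```python
def ten_year_average(obligations, budget_year):
    """Estimate the suppression funding need for budget_year as the
    average of the previous 10 fiscal years' suppression obligations,
    the approach described for the Forest Service and Interior.
    `obligations` maps fiscal year -> obligations (in dollars or any
    consistent unit)."""
    window = [obligations[fy] for fy in range(budget_year - 10, budget_year)]
    return sum(window) / len(window)

# Hypothetical obligations, in billions of dollars, for FY2005-FY2014.
history = {fy: 1.0 + 0.1 * (fy - 2005) for fy in range(2005, 2015)}
estimate = ten_year_average(history, 2015)
```

One property of this method worth noting: because it looks backward, a run of increasingly severe fire seasons raises the estimate only gradually, which is consistent with the report's observation that the agencies are working to distribute resources in ways that better reflect current conditions.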
During the three decades in which uranium was used in the government’s nuclear weapons and energy programs, for every ounce of uranium that was extracted from ore, 99 ounces of waste were produced in the form of mill tailings—a finely ground, sand-like material. By the time the government’s need for uranium peaked in the late 1960s, tons of mill tailings had been produced at the processing sites. After fulfilling their government contracts, many companies closed down their uranium mills and left large piles of tailings at the mill sites. Because the tailings were not disposed of properly, they were spread by wind, water, and human intervention, thus contaminating properties beyond the mill sites. In some communities, the tailings were used as building materials for homes, schools, office buildings, and roads because at the time the health risks were not commonly known. The tailings and waste liquids from uranium ore processing also contaminated the groundwater. Tailings from the ore processing resulted in radioactive contamination at about 50 sites (located mostly in the southwestern United States) and at 5,276 nearby properties. The most hazardous constituent of uranium mill tailings is radium. Radium produces radon, a radioactive gas whose decay products can cause lung cancer. The amount of radon released from a pile of tailings remains constant for about 80,000 years. Tailings also emit gamma radiation, which can increase the incidence of cancer and genetic risks. Other potentially hazardous substances in the tailings include arsenic, molybdenum, and selenium. DOE’s cleanup authority was established by the Uranium Mill Tailings Radiation Control Act of 1978. Title I of the act governs the cleanup of uranium ore processing sites that were already inactive at the time the legislation was passed. These 24 sites are referred to as Title I sites. Under the act, DOE is to clean up the Title I sites, as well as nearby properties that were contaminated. 
In doing so, DOE works closely with the affected states and Indian tribes. DOE pays for most of this cleanup, but the affected states contribute 10 percent of the costs for remedial actions. Title II of the act covers the cleanup of sites that were still active when the act was passed. These 26 sites are referred to as Title II sites. Title II sites are cleaned up mostly at the expense of the private companies that own and operate them. They are then turned over to the federal government for long-term custody. Before a Title II site is turned over to the government, NRC works with the site’s owners/operators to make sure that sufficient funds will be available to cover the costs of long-term monitoring and maintenance. The cleanup of surface contamination consists of four key steps: (1) identifying the type and extent of contamination; (2) obtaining a disposal site; (3) developing an action plan, which describes the cleanup method and specifies the design requirements; and (4) carrying out the cleanup using the selected method. Generally, the primary cleanup method consists of enclosing the tailings in a disposal cell—a containment area that is covered with compacted clay to prevent the release of radon and then topped with rocks or vegetation. Similarly, the cleanup of groundwater contamination consists of identifying the type and extent of contamination, developing an action plan, and carrying out the cleanup using the selected method. According to DOE, depending on the type and extent of contamination, and the possible health risks, the appropriate method may be (1) leaving the groundwater as it is, (2) allowing it to cleanse itself over time (called natural flushing), or (3) using an active cleanup technique such as pumping the water out of the ground and treating it. Mr. 
Chairman, we now return to the topics discussed in our report: the status and cost of DOE’s surface and groundwater cleanup and the factors that could affect the federal government’s costs in the future. Since our report was issued on December 15, 1995, DOE has made additional progress in cleaning up and licensing Title I sites. As of February 1996, DOE’s surface cleanup was complete at 16 of the 24 Title I sites, under way at 6 additional sites, and on hold at the remaining 2 sites. Of the 16 sites where DOE has completed the cleanup, 4 have been licensed by NRC as meeting the standards of the Environmental Protection Agency (EPA). Ten of the other 12 sites are working on obtaining such a license, and the remaining 2 sites do not require licensing because the tailings were relocated to other sites. Additionally, DOE has completed the surface cleanup at about 97 percent of the 5,276 nearby properties that were also contaminated. Although DOE expects to complete the surface cleanup of the Title I sites by the beginning of 1997, it does not expect all NRC licensing activities to be completed until the end of 1998. As for the cleanup of groundwater at the Title I sites, DOE began this task in 1991 and currently estimates that it will be completed around 2014. Since its inception in 1979, DOE’s project for cleaning up the Title I sites has grown in size and in cost. In 1982, DOE estimated that the cleanups would be completed in 7 years and that only one pile of tailings would need to be relocated. By 1992, however, the Department was estimating that the surface cleanup would be completed in 1998 and that 13 piles of tailings would need to be relocated. 
The project’s expansion was caused by several factors, including the development of EPA’s new groundwater protection standards; the establishment or revision of other federal standards addressing such things as the transport of the tailings and the safety of workers; and the unexpected discovery of additional tailings, both at the processing sites and at newly identified, affected properties nearby. In addition, DOE made changes in its cleanup strategies to respond to state and local concerns. For example, at the Grand Junction, Colorado, site, the county’s concern about safety led to the construction of railroad transfer facilities and the use of both rail cars and trucks to transport contaminated materials. The cheaper method of simply trucking the materials would have routed extensive truck traffic through heavily populated areas. Along with the project’s expansion came cost increases. In the early 1980s, DOE estimated that the total cleanup cost—for both the surface and groundwater—would be about $1.7 billion. By November 1995, this estimate had grown to $2.4 billion. DOE spent $2 billion on surface cleanup activities through fiscal year 1994 and expects to spend about $300 million more through 1998. As for groundwater, DOE has not started any cleanup. By June 1995, the Department had spent about $16.7 million on site characterization and various planning activities. To make the cleanup as cost-effective as possible, DOE is proposing to leave the groundwater as it is at 13 sites, to allow the groundwater to cleanse itself over time at another 9 sites, and to use an active cleanup method at 2 locations in Monument Valley and Tuba City, Arizona. The final selection of cleanup strategies depends largely on DOE’s reaching agreement with the affected states and tribes. At this point, however, DOE has yet to finalize agreements on any of the groundwater cleanup strategies it is proposing. 
At the time we issued our report, the cleanups were projected to cost at least another $130 million using the proposed strategies, and perhaps as much as $202 million. More recently, a DOE groundwater official has indicated that the Department could reduce these costs by shifting some of the larger costs to earlier years, reducing the amounts built into the strategies for contingencies, and using newer, performance-based contracting methods. Once all of the sites have been cleaned up, the federal government’s responsibilities, and the costs associated with them, will continue far into the future. What these future costs will amount to is currently unknown and will depend largely on how three issues are resolved. First, because the effort to clean up the groundwater is in its infancy, its final scope and cost will depend largely on the remediation methods chosen and the financial participation of the affected states. It is too early to know whether the affected states or tribes will ultimately persuade DOE to implement more costly remedies than those the Department has proposed or whether any of the technical assumptions underlying DOE’s proposed strategies will prove to be invalid. If either occurs, DOE may have to implement more costly cleanup strategies than it has proposed, increasing the final cost of the groundwater cleanup. DOE has already identified five sites where it believes it may have to implement more expensive alternatives than the ones it initially proposed. In addition, the final cost of the groundwater cleanup depends on the affected states’ ability and willingness to pay their share of the cleanup costs. According to a DOE official, Pennsylvania, Oregon, and Utah may not have funding for the groundwater cleanup program. DOE believes that it is prohibited from cleaning up the contamination if the states do not pay their share. 
Accordingly, as we noted in our report, we believe that the Congress may want to consider whether and under what circumstances DOE can complete the cleanup of the sites if the states do not provide financial support. Second, DOE may incur further costs to dispose of uranium mill tailings that are unearthed in the future in the Grand Junction, Colorado, area. DOE has already cleaned up the Grand Junction processing site and over 4,000 nearby properties, at a cost of about $700 million. Nevertheless, in the past, about a million cubic yards of tailings were used in burying utility lines and constructing roads in the area and remain today under the utility corridors and road surfaces. In future years, utility and road repairs will likely unearth these tailings, resulting in a potential public health hazard if the tailings are mishandled. In response to this problem, DOE is working with NRC and Colorado officials to develop a plan for temporarily storing the tailings as they are unearthed and periodically transporting them to a nearby disposal cell—referred to as the Cheney cell, located near the city of Grand Junction—for permanent disposal. Under this plan, the city or county would be responsible for hauling the tailings to the disposal cell, and DOE would be responsible for the cost of placing the tailings in the cell. The plan envisions that a portion of the Cheney disposal cell would remain open, at an annual cost of several hundred thousand dollars. When the cell is full, or after a period of 20 to 25 years, it would be closed. However, DOE does not currently have the authority to implement this plan because the law requires that all disposal cells be closed upon the completion of the surface cleanup. Accordingly, we suggested in our report that the Congress might want to consider whether DOE should be authorized to keep a portion of the Cheney disposal cell open to dispose of tailings that are unearthed in the future in this area. 
Finally, DOE’s costs for long-term care are still somewhat uncertain. DOE will ultimately be responsible for long-term custody (that is, the surveillance and maintenance) of both Title I and Title II sites, but the Department bears the financial responsibility for these activities only at Title I sites. For Title II sites, the owners/operators are responsible for funding the long-term surveillance and maintenance. Although NRC’s minimum one-time charge to site owners/operators is supposed to be sufficient to cover the cost of long-term custody so that the owners/operators, not the federal government, bear these costs in full, NRC has not reviewed its estimate of basic surveillance costs since 1980, and DOE is currently estimating that basic monitoring will cost about 3 times more than NRC’s estimate. Moreover, while DOE maintains that ongoing routine maintenance will be needed at all sites, NRC’s charge does not provide any amount for ongoing maintenance. In light of the consequent potential shortfall in maintenance funds, our report recommended that NRC and DOE work together to update the charge for basic surveillance and determine whether routine maintenance will be required at each site. On the basis of our recommendations, NRC officials agreed to reexamine the charge and determine the need for routine maintenance at each site. They also said that they are working with DOE to clarify the Department’s role in determining the funding requirements for long-term custody. Mr. Chairman, this concludes our prepared statement. We will be pleased to answer any questions that you or Members of the Subcommittee may have. The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. 
Orders by mail: U.S. General Accounting Office, P.O. Box 6015, Gaithersburg, MD 20884-6015. Or visit: Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW), U.S. General Accounting Office, Washington, DC. Orders may also be placed by calling (202) 512-6000, by using fax number (301) 258-4066, or by TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
GAO discussed the status and cost of the Department of Energy's (DOE) uranium mill tailings cleanup program and the factors that could affect future costs. GAO noted that: (1) surface contamination cleanup has been completed at two-thirds of the identified sites and is underway at most of the others; (2) if DOE completes its surface cleanup program in 1998, it will have cost $2.3 billion, taken 8 years longer than expected, and run $261 million over budget; (3) DOE cleanup costs increased because there were more contaminated sites than anticipated, some sites were more contaminated than others, and changes were needed to respond to state and local concerns; (4) the future cost of the uranium mill tailings cleanup will largely depend on the future DOE role in the program, the remediation methods used, and the willingness of states to share final cleanup costs; and (5) the Nuclear Regulatory Commission needs to ensure that enough funds are collected from the responsible parties to protect U.S. taxpayers from future cleanup costs.
The military’s legacy disability evaluation process begins at a military treatment facility when a physician identifies a condition that may interfere with a servicemember’s ability to perform his or her duties. On the basis of medical examinations and the servicemember’s medical records, a medical evaluation board (MEB) identifies and documents any conditions that may limit a servicemember’s ability to serve in the military. The servicemember’s case is then evaluated by a physical evaluation board (PEB) to make a determination of fitness or unfitness for duty. Each of the services conducts this process for its servicemembers. The Army has three PEBs, which are located at Fort Sam Houston, Texas; Walter Reed Army Medical Center in Washington, D.C.; and Fort Lewis, Washington. The Navy and Air Force each have one PEB: the Navy’s is located at the Washington Navy Yard in Washington, D.C., and the Air Force’s is located in San Antonio, Texas. The PEB process begins with an “informal” PEB— an administrative review of the case file by PEB adjudicators without the presence of the servicemember. If the servicemember is found to be unfit due to medical conditions incurred in the line of duty, the informal PEB assigns the servicemember a combined percentage rating for those unfit conditions, and the servicemember is discharged from duty. Disability ratings range from 0 (least severe) to 100 percent (most severe) in increments of 10 percent. Depending on the overall disability rating and number of years of active duty or equivalent service, the servicemember found unfit with compensable conditions is entitled to either monthly disability retirement benefits or lump sum disability severance pay. Servicemembers have opportunities to appeal the results of their disability evaluations. If servicemembers are dissatisfied with the informal PEB’s decisions, they may request a hearing with a “formal” PEB. 
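The combined percentage rating assigned by the informal PEB is not a simple sum of the individual ratings. A minimal sketch of the standard "whole person" combination method published in the VA rating schedule (38 CFR 4.25), which military disability ratings also draw on, follows; the function is illustrative, not the services' actual procedure, which uses the published combined-ratings table and rounds intermediate values.

```python
def combine_ratings(ratings):
    """Combine individual disability ratings (percentages) using the
    'whole person' method: each successive rating applies only to the
    remaining non-disabled portion, and the final result is rounded to
    the nearest 10 percent. A sketch of the method in 38 CFR 4.25, not
    an official implementation."""
    combined = 0.0
    for r in sorted(ratings, reverse=True):
        combined += (100.0 - combined) * r / 100.0
    # Round to the nearest 10 percent; exact .5 boundaries round up.
    return int(combined / 10.0 + 0.5) * 10

# Example: ratings of 50 and 30 combine to 65, which rounds to 70.
print(combine_ratings([50, 30]))  # 70
```

This is why two 50 percent ratings combine to 80 rather than 100: the second rating applies only to the 50 percent of capacity that remains.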
If they then disagree with the formal PEB’s findings, they can, under certain conditions, appeal to the reviewing authority of the PEB. As servicemembers navigate DOD’s disability evaluation system, they interface with staff who play key roles in supporting them through the process. Military physicians involved in the MEB process play a fundamental role because they are responsible for documenting in the disability evaluation case file the medical conditions that may limit a servicemember’s ability to serve in the military. To prepare this documentation, military physicians may require that servicemembers obtain additional medical evidence from specialty physicians, such as a psychiatrist. Throughout the MEB and PEB processes, board liaisons serve a key role by explaining the process to servicemembers and constructing the case files. The liaisons inform servicemembers of board results and of deadlines at key decision points in the process. The military also provides legal counsel to advise and represent servicemembers going through the disability evaluation process, although servicemembers may retain their own representative at their own expense. In addition to receiving disability benefits from DOD, veterans with service-connected disabilities may receive compensation from VA for lost earnings capacity. In contrast to DOD’s disability evaluation system, which evaluates only medical conditions affecting servicemembers’ fitness for duty, VA evaluates all medical conditions claimed by the veteran, whether or not they were previously evaluated by the military services’ medical evaluation process. Although a servicemember may file a VA claim while still in the military, he or she can only obtain disability compensation from VA as a veteran. VA’s disability compensation claims process starts when a veteran submits a claim to VA’s Veterans Benefits Administration (VBA). The claim lists the medical conditions that the veteran believes are service-connected. 
For each claimed condition, VA must determine if credible evidence is available to support the veteran’s contention of service connection. A service representative assists the veteran in gathering the relevant evidence to evaluate the claim, which may include the veteran’s military service records and treatment records from VA medical facilities and private medical service providers. Also, if necessary for reaching a decision on a claim, VBA arranges for the veteran to receive a medical examination conducted by clinicians (including physicians, nurse practitioners, or physician assistants) certified to perform the exams under VA’s Compensation and Pension program. Once a claim has all of the necessary evidence, a VA rating specialist evaluates the claim and determines whether the claimant is eligible for benefits. If so, the rating specialist assigns a percentage rating. If VA finds that a veteran has one or more service-connected disabilities with a combined rating of at least 10 percent, the agency will pay monthly compensation. The veteran can claim additional benefits over time, for example, if a service-connected disability worsens or surfaces at a later point in time. In November 2007, DOD and VA began piloting the IDES, a joint disability evaluation system to eliminate duplication in their separate systems and to expedite receipt of VA benefits for wounded, ill, and injured servicemembers. The IDES merges DOD and VA processes, so that servicemembers begin their VA disability claim while they undergo their DOD disability evaluation, rather than sequentially, making it possible for them to receive VA disability benefits shortly after leaving military service. Specifically, the IDES merges DOD and VA’s separate exam processes into a single exam process conducted to VA standards. 
This single exam—which may involve more than one medical examination (for example, by different specialists)—in conjunction with the servicemembers’ medical records, is used by military service PEBs to make a determination of servicemembers’ fitness for continued military service, and by VA as evidence of service-connected disabilities. The single exam may be performed by medical staff working for VA, for DOD, or for a private provider contracted with either agency. The IDES also consolidates DOD and VA’s separate rating phases into one VA rating phase. If the informal PEB has determined that a servicemember is unfit for duty, VA rating specialists prepare two ratings—one for the conditions that DOD determined made a servicemember unfit for duty, which DOD uses to provide military disability benefits, and the other for all service-connected disabilities, which VA uses to determine VA disability benefits. Ratings for the IDES are prepared by rating specialists at VA’s Baltimore and Seattle regional offices. In addition, the IDES provides VA case managers to perform outreach and nonclinical case management and to explain VA results and processes to servicemembers. By consolidating DOD and VA’s separate medical exams and ratings, the IDES eliminates several steps from the existing “legacy” systems (see fig. 1). In designing the IDES, DOD and VA established goals to provide VA benefits to active duty servicemembers within 295 days of being referred into the system, and to reserve component members within 305 days. In establishing the 295- and 305-day goals, they also established timeliness goals for the specific steps of the IDES process (see fig. 2). DOD and VA first piloted the IDES at 3 Washington, D.C., area military treatment facilities, beginning in November 2007 (see table 1). They added 18 military facilities to the pilot in fiscal year 2009 and 6 in fiscal year 2010. 
DOD and VA stated that expansion to additional sites was intended to assess the IDES system in a variety of geographic areas and to test the agencies’ capacity to handle additional caseload. According to DOD, the 27 pilot sites represented almost half of the servicemembers in the military services’ disability evaluation systems. In their planning documents for the IDES pilot, DOD and VA stated that they were basing their evaluation of the effectiveness of the IDES pilot on whether it has achieved three key goals relative to the legacy process: increased servicemember satisfaction, improved case-processing time, and a reduction in servicemember appeal rates. In addition, they also examined IDES program costs. To determine whether they have achieved their goals, the agencies surveyed servicemembers in the IDES pilot and legacy systems and are using a data system—called the Veterans Tracking Application (VTA)—that enables them to track case processing time and appeals. They have been monitoring their progress on these goals through weekly reports. In August 2010, DOD and VA officials issued an interim report to Congress summarizing their evaluation results to date. In this report, the agencies concluded that servicemembers who went through the IDES pilot were more satisfied than those who went through the legacy system, and that the IDES process met the agencies’ goals of delivering VA benefits to active duty servicemembers within 295 days and to reserve component servicemembers within 305 days. Specifically, they reported that, as of February 2010, the IDES process took an average of 274 days to complete for active duty servicemembers and 281 days for reserve component members who, according to the interim report, comprise 15 percent of IDES participants. Furthermore, they concluded that the IDES pilot has achieved a faster processing time than the legacy system, which they estimated to be 540 days. 
While overall results were promising, data presented in the report had some limitations, and the report itself did not include certain analyses. For example, DOD officials told us that the 540-day estimate for the legacy process was based upon a review of a small and nonrepresentative sample of legacy cases during the agencies’ “table top” planning exercise in August 2007. In addition, although DOD officials told us that they planned to compare average processing times of pilot cases with a broader sample of legacy cases, and to determine whether fewer servicemembers are appealing the findings of informal PEBs and formal PEBs in the pilot compared with the legacy, the interim report did not include these comparisons. In addition, in their planning documents for the IDES pilot, DOD and VA indicated that they were establishing a goal to deliver VA benefits to 80 percent of members in the IDES pilot within the 295- and 305-day time frames. However, their interim report did not discuss whether this goal was met. Our review of DOD and VA’s data and weekly reports generally confirms DOD and VA’s findings, as of early 2010. However, while the agencies have largely met their overall goal to increase servicemember satisfaction and met their timeliness goal as of February 2010, since that time, case processing times have been steadily increasing as the caseload has increased. In addition, not all of the service branches are achieving the same results. Servicemember satisfaction: Our review of the survey data that DOD used for the interim report (as of February 2010), as well as a recent weekly report, indicates that, on average, servicemembers in the IDES process have had higher satisfaction levels than those who went through the legacy process. In addition, a higher percentage of servicemembers who went through the IDES process felt that the process was fair compared with those who went through the legacy system.
However, servicemembers in the Air Force who went through the IDES pilot indicated less satisfaction with the process than those who went through the legacy system, though Air Force members represented a small proportion of pilot cases—about 7 percent of those enrolled in the pilot. We reviewed the agencies’ survey methodology and generally found their survey design and conclusions to be sound (see app. I for further information on our review). Average case processing times: The agencies have been meeting their 295- day and 305-day timeliness goals for much of the past 2 years, but more recent weekly reports indicate case processing time has been increasing and that they are now missing their goal for active duty members. As of August 29, 2010, the agencies missed the goal for active duty servicemembers by 1 day, while still meeting the 305-day goal for reserve component members by 7 days. Processing times have increased as caseload has increased, from about 5,750 active cases in February to about 9,650 cases in August 2010. We reviewed the reliability of the VTA data upon which the agencies based their analyses and generally found these data to be sufficiently reliable for purposes of these analyses. The increases in overall case processing time and caseloads mirror the trends at individual sites. For each pilot site, case processing times have generally increased as workloads have increased. For example, figure 3 shows the case processing times 1 year or more after implementation and in August 2010 for the first seven pilot sites. Of the four military services, only the Army and Navy were achieving the 295- and 305-day goals on average, as of February 2010, and only the Army was achieving these goals as of August 2010. Because the Army comprises a large proportion of cases (approximately 60 percent of IDES pilot cases that have completed the whole process), it has lowered the overall average processing time to near or below the established goals. 
Figure 4 shows the average case processing times for active duty, by service, as of August 2010. (See app. II for reserve component.) As of February 2010, the agencies also had not met the goal of processing 80 percent of all pilot cases within targeted time frames. Specifically, about 60 percent of active duty pilot cases have been completed within 295 days, according to our analysis of the agencies’ case data intended for their interim report. Further, none of the four military services has achieved this goal, although the Army has had the highest rate of cases (66 percent) meeting the goal, while only 42 percent of Air Force cases were processed within the time frame (see fig. 5 for active duty and app. II for reserve component). DOD and VA planned to compare the case processing times of servicemembers in the IDES pilot and servicemembers who, between fiscal years 2005 and 2009, were enrolled in the legacy system at pilot sites prior to pilot implementation, but significant gaps in the legacy case data preclude reliable comparisons. DOD compiled the legacy case data from each of the military services and the VA, but the military services each had slightly different disability evaluation processes, used different data systems, and did not track the same information. As a result, information needed to conduct a comparison is not available for all services. For example, the Navy, Marine Corps, and Air Force legacy data do not have information on when the servicemember was referred into the disability evaluation system and, as a result, case-processing time for the legacy system DOD-wide cannot be determined. DOD officials said they planned to estimate legacy case processing time by approximating the dates that servicemembers in the Navy, Marine Corps, and Air Force were referred into the disability evaluation process, but their methodology was based on a limited number of Army cases (see app. I for further information).
In addition, for legacy cases across all military services, VA was not able to provide data on the date VA benefits were delivered, so total case processing time from referral to delivery of VA benefits cannot be measured. However, while legacy case data are not sufficiently reliable for comparison with the IDES overall, the Army’s legacy data appear to be reliable on some key processing dates, making some limited comparisons possible. Our analysis of Army legacy data suggests that, under the legacy process, active duty Army cases took 369 days to complete the DOD legacy process and reach the VA rating phase—though this figure does not include time to complete the VA rating and provide the benefits to servicemembers—compared with 266 days to deliver VA benefits to servicemembers under the pilot, according to the agencies’ August weekly report. However, Army comparisons cannot be generalized to the other services. The agencies also planned to compare servicemembers’ appeal rates in the pilot and legacy systems, but similar gaps in the legacy data preclude a comparison DOD-wide. For example, the legacy data that DOD compiled did not contain data on appeals of informal PEB decisions to the formal PEB in the Navy and Marines, and consequently the rate of appeals across the military in the legacy system is unknown. While the Army’s appeals data appear to be more reliable, potentially making some limited comparisons possible, the agencies’ method for comparing pilot appeals with legacy has limitations. DOD officials told us they are planning to compare the proportion of informal PEB decisions that were appealed to a formal PEB hearing in the pilot and legacy systems. 
However, this comparison will not take into account that, under the legacy system, a servicemember could appeal the informal PEB’s decision for either of two reasons—dissatisfaction with the fitness decision or with the disability rating the PEB assigned—while in the IDES, servicemembers can appeal the informal PEB decision to a formal PEB only if they are dissatisfied with the fitness decision. Under the IDES, servicemembers who disagree with the disability rating can appeal to VA for a rating reconsideration. By not including appeals to VA for rating reconsiderations, the agencies may overestimate the decrease in appeals in the IDES pilot. For example, our analysis of data as of early 2010 for the Army indicates that Army members in the pilot appealed 7.5 percent of informal PEB decisions. However, when appeals to VA are factored in, 13 percent of Army members in the pilot filed an appeal, which is the same proportion as in the legacy system (see fig. 6). In addition to evaluating the three goals, DOD and VA initially planned a cost-benefit analysis of the IDES program but have only completed an analysis of costs. According to data provided to us in August 2010, DOD projects that costs directly associated with implementing the IDES will be $63 million greater per year when compared with the legacy system, after full expansion of the IDES. In October 2010, VA reported to us total IDES cost estimates of approximately $50 million for fiscal year 2011—about $33 million for VBA, which provides VA case managers and rating staff to the IDES, and $17 million for the Veterans Health Administration (VHA), which provides medical staff to perform the single exams. These analyses did not quantify the value of potential benefits created by the pilot, for example time savings from DOD physicians no longer needing to perform disability examinations, which allows them to perform other duties.
As DOD and VA tested the IDES at different facilities and added caseload to the pilot, they encountered several challenges that led to delays in certain phases of the process. Among these were insufficient staffing, challenges in conducting the single exams, and logistical challenges related to integrating VA staff, as well as housing and managing servicemembers going through the IDES. DOD and VA were able to address some, but not all, of these challenges as they arose. DOD and VA have not provided sufficient numbers of staff in many of the IDES locations, affecting their ability to complete certain phases of the IDES process within the goals they established. Officials at most of the 10 pilot sites we visited said they have experienced staffing shortages to at least some extent, with a few sites—Fort Carson and Fort Stewart, in particular—experiencing severe shortages. VA or contract examiners: At three pilot sites we visited—Fort Carson, Fort Polk, and Fort Stewart—local officials said that a lack of VA or VA contractor staff who could perform the required single medical exams led to bottlenecks in the process. For example, as of August 2010, exams at Fort Carson have taken an average of 140 days to complete for active duty servicemembers, according to the agencies’ data, far from achieving their goal to complete single medical exams within 45 days (see fig. 7; see also app. II for processing times for reserve component members). Across all pilot sites, exams have taken 68 days to complete for active duty servicemembers, on average, with 8 of the 27 pilot sites meeting the 45-day goal. The nature of the shortage of examiners varied by site. For instance, Fort Carson’s IDES process was particularly hampered by a lack of mental health specialists; in contrast, VA officials serving the Fort Polk pilot site said they had enough examiners to perform specialty medical exams but did not have enough examiners to complete general medical exams.
The 8 pilot sites that met the 45-day goal for completing single exams include 2 Air Force sites, 5 Army sites, and 1 Navy site that met the 45-day goal for servicemembers in both the Navy and Marine Corps. One additional site (Camp Pendleton) met the 45-day goal for Navy members but did not meet it for Marine Corps members. The Navy PEB determines fitness decisions for servicemembers in the Marine Corps. VA rating staff: Officials at the Baltimore rating office—one of the two VA offices that conduct disability ratings for the IDES pilot—expressed significant concerns that they were understaffed, in part due to staff turnover. DOD and VA data show that, overall, the VA rating offices are not meeting the agencies’ goal to complete ratings within 15 days, taking 39 days on average for active duty servicemembers and 42 days for reserve component members. We could not determine case processing times at each individual VA rating office, since DOD and VA’s weekly monitoring reports do not provide processing times for the rating phase by office. The weekly reports also do not provide data on caseloads at each office. Although the Baltimore office currently has fewer rating staff than Seattle, VA officials said that it has prepared ratings for the majority of IDES pilot cases, based on the way in which VA has allocated cases between the two offices. The Baltimore office handles cases for the Air Force, Navy, Marines, and 5 of the 15 Army pilot sites, while the Seattle office conducts ratings for the remaining 10 Army pilot sites. VA officials said that to address staffing shortages in Baltimore, they have assigned staff from other VA offices to assist the Baltimore office. VA case managers: DOD and VA have set a target for each VA case manager to handle no more than 30 cases at a time, but two sites we visited—Fort Carson and Fort Stewart—appeared to be far from these targets. 
At Fort Carson, three VA case managers told us they were handling about 900 cases when we visited in April 2010, for a caseload ratio of roughly 1:300. At the time of our visit in June 2010, Fort Stewart had over 750 active cases with two VA case managers, for a caseload ratio of approximately 1:375. Although local officials we spoke with at both sites told us that the numbers of VA case managers were insufficient, an official at VA’s central office told us that VA bases staffing of case managers on the number of new (not pending) cases each month, and the agencies’ data indicates the average number of new cases per VA case manager has been about 25 at each site. The VA official said that the reason local case managers felt understaffed was likely due to other process inefficiencies. In addition, the official told us VA can reassign staff from other VA programs to assist case managers at IDES pilot sites as needed. At some of the other pilot sites we visited, local officials also told us they had concerns at times about the numbers of VA case managers available to handle the site’s caseload, but VA was able to add staff. VA case managers at two Air Force sites we visited—Travis and Vance Air Force Bases—indicated that their caseloads were manageable. We were unable to independently determine the extent to which VA is meeting its caseload target because VA does not collect national data on actual caseloads per case manager. DOD board liaisons: At most of the sites we visited, local officials expressed concerns about insufficient numbers of DOD board liaisons, who serve as servicemembers’ DOD case managers. DOD guidance has been inconsistent on the caseload target for DOD board liaisons. While DOD’s operations manual for the IDES pilot sets a caseload target of at most 30 cases per board liaison, guidance on the general disability evaluation system sets the target at a maximum of 20 cases per liaison. 
DOD and VA’s documents related to planning for IDES expansion indicate that DOD is striving for a 1:20 caseload target in the IDES. However, 19 of the 27 pilot sites did not meet the 1:30 caseload target, and 23 did not meet the 1:20 target (see fig. 10). Local DOD and VA officials attributed staffing shortages to higher than anticipated caseloads and difficulty finding qualified staff in rural areas. At several of the pilot sites we visited, officials said that caseloads were higher than the initial estimates upon which they had based staffing levels. DOD officials said that they had based caseload estimates on a 1-year history of caseload at each site. While some sites have added staff as caseloads increased, others, such as Fort Polk, located in central Louisiana, have had difficulty finding qualified staff, particularly physicians, in this rural area. Two of the pilot sites we visited—Fort Carson and Fort Stewart—were particularly challenged to provide staff in response to surges in caseload, which occurred when Army units were preparing to deploy to combat zones. Through the Army’s predeployment medical assessment process, large numbers of servicemembers were determined to be unable to deploy due to a medical condition and were referred to the IDES within a short period of time, overwhelming the staff. These two sites were unable to quickly increase staffing levels, particularly clinicians performing the single exam. The VA medical center conducting the single medical exams for Fort Carson experienced turnover among its examiners at the same time that the caseload surged, while at Fort Stewart, the contractor performing the single medical exams had difficulties finding qualified physicians in a rural area of Georgia. To address caseload surges, examiners were reassigned from other locations to the pilot sites.
For example, VA officials told us they assigned examiners from other VA medical centers to the Fort Carson IDES and established a contract with a private-sector provider to complete the exams that VA examiners would normally have performed for veterans in the area claiming VA disability compensation. At Fort Stewart, the contractor told us that they had reassigned examiners from their Atlanta clinic to Fort Stewart. Issues related to the completeness and clarity of single exam summaries were an additional cause of delays in the VA rating phase of the IDES process. Officials from VA rating offices said that some exam summaries did not contain information necessary to make a rating or fitness decision, or were unclear as to the examiners’ diagnoses and conclusions. As a result, VA rating office staff must ask the examiner to clarify the summary or add information and, in some cases, redo the exam, adding time to the process. In addition, VA rating staff told us that it is sometimes unclear whom they should contact if they identify insufficiencies in an exam summary, and finding the appropriate person also adds time. However, the extent to which insufficient exam summaries caused delays in the IDES process is unknown because DOD and VA’s VTA system does not track whether an exam summary had to be returned to the examiner or whether it was resolved. Due to these limitations, VA officials told us that VA rating staff have created logs of outstanding insufficient exams and sent them to VA examiners to correct. VA officials attributed the problems with exam summaries to several factors, including the difficulty of conducting exams for IDES pilot cases, which may entail evaluating many complex medical conditions and may involve several physicians and specialists. In addition, VA officials indicated that, at sites with exam backlogs, such as at Fort Carson, it may be difficult for examiners to ensure quality when they are trying to complete exams quickly.
Furthermore, VA staff noted that some errors were common, such as missing information for musculoskeletal conditions and traumatic brain injury, suggesting that some examiners may not be aware of the information required for certain types of medical conditions. Finally, while examiners are supposed to receive the servicemember’s complete medical records prior to the date of the exam, some VA examiners also told us that they did not receive the records in time for the exam in some cases, or the records were not well-organized. As a result, they lacked key information, such as the servicemember’s medical history and results of laboratory tests. According to the agencies’ operations manual for the IDES pilot, the DOD board liaison should compile the complete medical records within 10 days of an active duty servicemember being referred to the IDES, but some DOD officials we spoke with said that it is sometimes difficult to obtain all of the records, particularly when servicemembers have received treatment from private-sector physicians. In addition, while the single exam in the IDES eliminates duplicative exams performed by DOD and VA in the legacy system, it raises the potential for there to be disagreements about diagnoses of servicemembers’ conditions, with implications for their disability ratings, as well as processing times. DOD officials we spoke with in our interviews and site visits also said that their physicians sometimes disagree with VA medical diagnoses, particularly for mental health conditions, and this has extended processing times for some cases. In addition, since medical diagnoses are a basis for VA’s disability ratings, DOD may subsequently disagree with the ratings VA completed for determining DOD disability benefits. The number of cases with disagreements about diagnoses and ratings, and the extent to which they have increased processing time, are unknown because the VTA system does not track when a case has had such disagreements. 
However, officials at 4 of the 10 pilot sites we visited said that military physicians have disagreed with VA diagnoses in at least some cases. In addition, PEB officials in two of the three military services—the Army and the Navy—said that they have sometimes disagreed with the rating VA produced for determining DOD disability benefits. An example can illustrate the implications of differences in diagnoses. Officials at Army pilot sites informed us about cases in which a military physician had treated members for a mental condition, such as anxiety or depressive disorder. However, when the members went to see the VA examiners for their single exam, the examiners diagnosed them with posttraumatic stress disorder (PTSD). When such cases were sent to the PEB, it returned them to the MEB because it was unclear to the PEB which conditions should be the basis of their decision on the servicemembers’ fitness for duty. The cases then languished because the military physicians experienced difficulties resolving the discrepancy with the VA diagnosis. To address such processing delays, the Army issued guidance in February 2010 stating that MEB physicians should review all of the medical records (including the results of the single exam) and determine whether to revise their diagnoses. If after doing so the MEB physician maintains that their original diagnosis is accurate, they should write a memorandum summarizing the basis of their decision, and the PEB should accept the MEB’s diagnosis. Some Army officials we spoke with believe that this guidance has been helpful for enabling cases to move forward when there are differences in diagnoses. The other services do not have written guidance on how to address differences in diagnoses, though Navy officials told us that they have provided verbal guidance to their physicians, and Air Force officials said they have not had cases with significant disagreements about diagnoses. 
In some cases, due to the differences in diagnoses, DOD has also disagreed with the rating that VA prepared for DOD disability benefits, particularly in cases involving servicemembers with mental health conditions. For example, Army and Navy officials told us about cases in which the PEB found the servicemember unfit due to a mental condition, such as major depression, and asked VA to complete a rating for this condition. However, VA returned a rating for occupational and social impairment caused by PTSD, since the examiner had diagnosed the member with PTSD. DOD requires a rating for only the conditions for which the member was found unfit for duty because it can only provide disability benefits for those conditions. However, according to VA regulations for rating mental disorders, VA does not rate each mental health condition individually; rather, VA bases its rating on the degree to which the combination of symptoms of mental disorders cause occupational and social impairment. As such, when rating mental health conditions for IDES cases, VA officials said that rating specialists would consider both the symptoms of mental conditions diagnosed by DOD physicians and those identified by the VA examiner. Both Army and Navy PEB officials said that they generally accept VA ratings in these cases, even though the rating is not for the unfitting conditions alone. However, they noted that, if they feel the VA rating is in error, there is no guidance on how disagreements about servicemembers’ ratings should be resolved. Army and Navy officials said that they may return the case to VA and informally request that VA reconsider the case, though Navy PEB officials said that they are hesitant to do so because it may further delay the case. DOD and VA officials attributed disagreements about diagnoses to several factors.
They noted that VA examiners may not have received or reviewed the servicemembers’ medical records prior to the exam, and therefore may not be aware of the medical conditions for which the members had been previously diagnosed and treated. In addition, DOD and VA identify conditions for different purposes in the disability evaluation system. While DOD identifies conditions that make a servicemember unable to perform their duties, VA identifies all service-connected conditions. As such, VA examiners are likely to identify a broader set of conditions than DOD’s physicians. In addition, local officials we spoke with in some of our site visits said that servicemembers may be more willing to disclose all of their medical conditions to VA than to DOD because VA could potentially compensate them for all of the conditions. Furthermore, VA officials noted that servicemembers’ health conditions may have changed between the time DOD physicians identified the conditions and VA performed the exam. Finally, DOD and VA officials said that differences in opinions about diagnoses are common among physicians, particularly in the mental health field. For example, they noted that it can be difficult to distinguish PTSD from anxiety, depression, and other mental health conditions. DOD and VA officials at several pilot sites said that they experienced some logistical challenges integrating VA staff at the military facilities. At a few sites, it took time for VA staff to receive common access cards needed to access the military facilities and to use the facilities’ computer systems. During the time that VA staff did not have access cards, they were unable to access VA computer systems, such as those for establishing the VA claim, requesting exams, and viewing exam results, via DOD’s network. In addition, DOD and VA staff noted several difficulties using the agencies’ multiple information technology (IT) systems to process cases.
While the agencies both use the VTA system to manage cases, VA also has IT systems for completing certain tasks, and the military services also have their own case tracking systems. As a result, DOD and VA staff must enter the same data multiple times into different IT systems. In addition, some VA staff working on military bases reported that using the military services’ computer systems to access VA systems has significantly slowed down computer processing speeds. Finally, DOD and VA staff cannot directly access each other’s systems, making it more cumbersome for case managers to determine the status of servicemembers’ cases. For example, without access to VA’s system for managing exams, DOD board liaisons cannot readily provide servicemembers with information on when or where their exams are scheduled and must contact VA case managers to obtain the information. A few sites we visited were able to address some IT issues. For example, at Fort Polk, VA officials said they were adding a new telecommunications line to provide faster computer processing speeds for their staff. In addition, VA physicians working at military facilities need to be credentialed by DOD before they can begin working on base, which involves verification of their education, license, and clinical history. Some VA officials said that this process could take 1 month or longer to complete. Although many DOD and VA officials we interviewed at central offices and pilot sites felt that the IDES process expedited the delivery of VA benefits to servicemembers, several also indicated that it may increase the amount of time servicemembers are in the military’s disability evaluation process. Data on legacy cases are not sufficiently reliable to determine whether this is the case military-wide, but Army data appear to be sufficiently reliable to allow for some limited analysis.
Our analysis of Army pilot and legacy data as of early 2010 shows that compared with legacy cases, active duty cases in the pilot took on average 39 more days to reach the end of the PEB phase—the last step of the DOD disability evaluation process before servicemembers begin transitioning from military service or, if found fit, back to duty. For reserve component cases in the Army, IDES pilot cases took on average 17 more days to reach the end of the PEB phase, compared with legacy cases. It was not possible to conduct this analysis for the other military services because their legacy data lacked information on when servicemembers were referred into the disability evaluation system. Some DOD officials noted that the increased time that servicemembers are in the military’s disability evaluation process means that they must be cared for and managed for a longer period. Officials in our site visits and interviews said that some pilot sites have had challenges housing servicemembers in the IDES, in part due to servicemembers being in the process longer. For some servicemembers in the disability evaluation system, the military services may move them to temporary medical units or, for those needing longer-term medical care or complex case management, to special medical units such as a Warrior Transition Unit in the Army or Wounded Warrior Regiment in the Marine Corps. However, these units were full at a few pilot sites we visited, or members in the IDES did not meet the criteria for entering the special medical units. Where servicemembers remain with their units while going through the disability evaluation system, the units cannot replace them with able-bodied members. Officials at Fort Carson said that this created a challenge for combat units. 
Because most servicemembers in the IDES did not meet the criteria for entering Warrior Transition Units, combat units had to find another organizational unit to take charge of members in the IDES so they could replace them with soldiers ready and able to deploy to combat areas. In addition, officials at Naval Medical Center San Diego and Fort Carson said that some members are not gainfully employed by their units and, left idle while waiting to complete their disability evaluation process, are more likely to engage in negative behavior, potentially resulting in their being discharged due to misconduct and a forfeiture of disability benefits. We were unable to assess the extent or cause of this problem because the VTA system that tracks servicemembers in the IDES does not capture sufficient detail on reasons for servicemembers dropping out of the IDES, or which organizational unit(s) the servicemember was assigned to while in the IDES. DOD officials also noted that servicemembers benefit from continuing to receive their salaries and benefits while their case undergoes scrutiny by two agencies, though some also acknowledged that these additional salaries and benefits create costs for DOD. DOD and VA plan to expand the IDES to sites worldwide on an ambitious timetable—to 113 sites during fiscal year 2011, a pace of about 1 site every 3 days. Expansion is scheduled to occur in four stages, beginning with 28 sites in the southeastern and western United States by the end of December 2010. DOD and VA have many efforts under way to prepare for IDES expansion. At each site, local DOD and VA officials are expected to work together to prepare for implementation. This includes completing a site assessment matrix—a checklist of information DOD and VA officials at each site should obtain and preparations they should make. 
While most pilot sites had used a site assessment matrix to prepare for IDES implementation, the agencies completed a significant revision of the matrix in August 2010, and they now request additional information and documentation to address areas where prior IDES sites had experienced challenges. In addition, while during the pilot phase local DOD and VA officials were encouraged to develop written agreements on IDES procedures, the matrix now requests that a written agreement be completed prior to implementing the IDES. Finally, senior-level local DOD and VA officials will be expected to sign the site assessment matrix to certify that a site is ready for IDES implementation. This differs from the pilot phase where, according to DOD and VA officials, some sites implemented the IDES without having been fully prepared. In addition, in September 2010, the military services and VA held preimplementation training conferences for local DOD and VA staff. At the time of our review, the first 28 expansion sites were completing their site assessment matrices. Through the new site assessment matrix and other initiatives under way, DOD and VA are addressing several of the challenges identified in the pilot phase. These include ensuring sufficient exam and case management staff, being prepared to deal with surges in caseloads, addressing exam sufficiency issues, and making adequate logistical arrangements. Ensuring sufficient exam resources: The matrix asks whether a site can complete single exams within the IDES’ 45-day time frame and within DOD’s TRICARE access standards. The matrix asks for detailed information, such as who will conduct the exams (VA, VA contractor, or military providers), where the exams will be conducted, and VA’s anticipated overall volume of disability compensation and pension exams in the area. In addition to the matrix, VA has several initiatives under way to increase resources and expedite exams. 
VA plans to award a new contract under which it can acquire examiners for sites that do not have sufficient staff to perform exams, such as sites located where VA does not have medical facilities or in rural areas where VA has had difficulty hiring staff. VA has also recently changed its exam policy so that exams performed by nurse practitioners or physician assistants certified to perform disability exams no longer have to be cosigned by a physician, which is expected to expedite completion of more exam reports. Ensuring sufficient VA rating staff: VA officials said that they have hired new staff to replace those that recently left the Baltimore rating office and anticipate hiring a small number of additional staff. Based on caseload projections, they expect that, once the additional staff are hired, the Baltimore office will be close to having sufficient rating staff. Although VA officials said that the Baltimore office conducted ratings for a majority of cases during the IDES pilot phase, they have projected that the workload will be divided almost evenly between the Baltimore and Seattle offices once the IDES is fully expanded worldwide. Ensuring sufficient DOD PEB adjudicators: Air Force officials informed us they added adjudicators for the informal PEB and have since eliminated their case backlog. They are currently adding adjudicators for the formal PEB. Navy PEB officials also said that they are adding adjudicators through activation of reserve component personnel for special work and expected that they would be in place by November 2010. Ensuring sufficient case management staffing: The site assessment matrix also asks whether local facilities will have sufficient trained DOD board liaison staff to meet a 1:20 caseload ratio and sufficient VA case managers to meet a 1:30 caseload ratio. In addition, according to DOD officials, each of the military services is increasing its board liaison staffing levels to achieve 1:20 caseload ratios. 
VA officials said that they plan to hire an additional 73 case managers. Coping with caseload surges: The matrix asks sites to provide a longer and more detailed caseload history—a 2-year, month-by-month history— as opposed to the 1-year history that DOD based its caseload projections on during the pilot phase. In addition, the matrix asks sites to anticipate any surges in caseloads, such as those due to seasonal trends. Sites are also expected to provide a written contingency plan for dealing with caseload surges. In addition, the matrix asks sites to develop a system for communicating updates, such as information on expected caseload surges, to stakeholders. VA officials also said that the Army has agreed to keep them better informed of deployments that could result in caseload surges. Further, VA officials noted that they are developing a plan for addressing the additional need for examiners during surges, through which VA offices with lower demand for disability exams would send examiners to an IDES site experiencing a surge in exam workloads. Ensuring the sufficiency of single exams: The site assessment matrix asks sites whether all staff who will conduct exams are trained to VA standards and certified by VA to conduct disability compensation and pension exams. In addition, VA has begun the process of revising its exam templates, to better ensure that examiners include the information needed for a VA disability rating decision and enable them to complete their exam reports in less time. Finally, a VA official stated that VA is examining whether it can add capabilities to the VTA system that would enable staff to identify where problems with exams have occurred and track the progress of their resolution. For sites that choose to have military physicians perform the single exams, VA officials said that they have provided materials to DOD from their national training program, and DOD has made these materials accessible on its Web site. 
To help improve the ability of DOD board liaisons to obtain servicemembers’ medical and personnel records prior to the exam, DOD officials said that they are revising their policies to require reserve component units to provide the records when a reserve member is referred to the IDES. Ensuring adequate logistics at IDES sites: The site assessment matrix asks sites whether they have the logistical arrangements needed to implement the IDES, including necessary facilities, IT, and transportation for servicemembers to exam locations. For example, the matrix asks whether the military treatment facility will address the needs of VA staff for access cards, identification badges, and security clearances, and whether all VA medical providers will be credentialed and privileged to practice at the DOD facility. In terms of IT, the matrix asks whether DOD sites will enable VA staff access to VA information systems needed to perform their duties. The matrix also asks sites to identify IT contacts from both VA and DOD so that they may work together to resolve IT problems. Furthermore, DOD and VA are developing a general memorandum of agreement on IDES information sharing. This agreement is intended to enable DOD and VA staff access to each other’s IT systems, for example, to allow DOD staff to track the status of VA exams. DOD officials also said that they are developing two new IT solutions. According to officials, one system currently being tested would help military treatment facilities better manage their cases. Another IT solution, still at a preliminary stage of development, would integrate the VTA with the services’ case tracking systems so as to reduce multiple data entry. However, in some areas, DOD and VA’s efforts to prepare for IDES expansion do not fully address some challenges or are not yet complete. 
Ensuring sufficient military physician staffing: While DOD and VA are taking steps to address shortages of examiners, case managers, and adjudicators, they do not yet have strategies or plans to address potential shortages of military physicians for completing MEB determinations. For example, the site assessment matrix does not include a question about the sufficiency of military providers to handle expected numbers of MEB cases at the site, or ask sites to identify strategies for ensuring sufficient military physicians if there is a caseload surge or staff turnover. Ensuring sufficient housing and organizational oversight for IDES participants: Although the site assessment matrix asks sites whether they will have sufficient temporary housing available for servicemembers going through the IDES, the matrix requires only a yes or no response and does not ensure that sites will have conducted a thorough review of their housing capacity prior to implementing the IDES. For example, sites are not asked about the capacity of their medical hold units or special units for wounded servicemembers, or to identify other options if their existing units do not have sufficient capacity for their projected IDES caseload. In addition, the site assessment matrix does not address whether sites have plans for ensuring that IDES participants are gainfully employed or sufficiently supported by their organizational units. Addressing differences in diagnoses: According to a DOD official, as part of its revision of its IDES operations manual, DOD is currently developing guidance on how staff should address differences in diagnoses between military physicians and VA examiners, and between military PEBs and VA disability rating staff. DOD anticipated issuing the new guidance in September 2010, but at the time of our review had not yet done so. 
In addition, a VA official stated that VA is developing new procedures for identifying cases with potential for multiple mental health diagnoses and will ask VA examiners to review the servicemembers' medical records and reconcile differing diagnoses. However, since the new guidance and procedures are still being developed, we cannot determine whether they will resolve discrepancies or disagreements. Significantly, DOD and VA do not have a mechanism for tracking disagreements about diagnoses and ratings, and consequently, may not be able to determine whether the guidance sufficiently addresses the discrepancies or whether it requires further revision. As DOD and VA move quickly to implement the IDES worldwide, they have some mechanisms in place to monitor challenges that may arise in the IDES. DOD officials said that they expect to continue holding postimplementation "hotwash" meetings, in which they review individual sites' implementation. In addition, DOD and VA will continue to regularly collect and report data on caseloads, processing times, and servicemember satisfaction. Furthermore, the new site assessment matrix asks sites to develop plans for VA and DOD local staff to meet weekly for the first 60 to 90 days after implementing the IDES, then no less than monthly to address any identified challenges. VA officials also said that they will continue to prepare an annual report on challenges in the IDES. To prepare this report, they will obtain input and data from local DOD and VA officials. However, DOD and VA do not have a system-wide monitoring mechanism to help ensure that steps they took to address challenges are sufficient and to identify problems in a more timely manner. For example, they do not collect data centrally on staffing levels relative to caseload.
Consequently, despite efforts to acquire additional staff, as local sites experience staffing turnover in the future, DOD and VA central offices may not become aware that a site is short-staffed until their monitoring reports show lengthy processing times. As a result, DOD and VA may be delayed in taking corrective action, since it takes time to assess what types of staff are needed at a site and to hire or reassign staff. In addition, without information on when or how often other problems occur, such as insufficient exam summaries or disagreements about diagnoses, DOD and VA managers may not be able to target additional training or guidance where needed. Furthermore, while DOD and VA report data on processing times by phase of the process, military treatment facility, and military service, their monitoring reports do not show processing times or caseloads for each VA rating office and each of the five PEBs (three Army and one each for the Navy and Air Force), limiting their ability to identify whether specific rating or PEB offices are experiencing challenges. DOD and VA also lack mechanisms or forums for systematically sharing information on challenges as well as best practices. For example, while the site assessment matrix indicates that sites are expected to hold periodic meetings to identify local challenges, DOD and VA have not established a process for local sites to systematically report those challenges to DOD and VA management and for lessons learned to be systematically shared system-wide. During the pilot phase, VA surveyed pilot sites on a monthly basis about challenges they faced in completing single exams. Such a practice has the potential to provide useful feedback if extended to other IDES challenges. By merging two duplicative disability evaluation systems, the IDES shows promise for expediting the delivery of VA benefits to servicemembers leaving the military due to a disability.
Servicemembers who proceed through the process are able to leave the military with greater financial security, since they receive disability benefits from both agencies shortly after discharge. Further, having both DOD and VA personnel involved in reviewing each disability evaluation may result in a more thorough scrutiny of cases and informed decisions on behalf of servicemembers. However, piloting of the system at 27 sites has revealed several significant challenges that require careful management attention and oversight before DOD and VA expand the system military-wide. DOD and VA are currently taking steps to address many of these challenges, and the agencies have developed a site implementation process that encourages local DOD and VA officials to identify and resolve local challenges prior to transitioning to the new system. However, given the agencies’ ambitious implementation schedule—more than 100 sites in a year—it is unclear whether all of these challenges will be fully dealt with before DOD and VA deploy the integrated system to additional military facilities. For example, it is unclear whether sites will have sufficient military physicians to complete key steps of the process in a timely manner. Insufficient staffing of any one part of the process is likely to lead to bottlenecks, delaying not only servicemembers’ receipt of disability benefits, but also their separation from the military and reentry into civilian life. In addition, DOD’s preparations of sites for the IDES do not ensure that military facilities have adequate capacity or plans for housing and providing organizational oversight over servicemembers in the IDES, who potentially could remain at the locations for extended periods of time. 
Furthermore, while integrating VA medical exams into DOD's disability evaluation system eliminates duplicative exams, it raises the potential for disagreements about diagnoses of servicemembers' conditions, with implications for servicemembers' disability ratings and their DOD disability compensation. While DOD is developing guidance to address such disagreements, it is important that the agencies have a thorough understanding of how often and why these disagreements occur and continually review whether their new guidance adequately addresses this issue so as to be able to make improvements where needed. Successful implementation of any program requires effective monitoring. DOD and VA currently have mechanisms to track numbers of cases processed, timeliness, and servicemember satisfaction, but they do not routinely monitor factors—such as staffing levels relative to caseload, disagreements about diagnoses, and insufficient exam summaries—that can delay the process. In addition, they do not monitor timeliness and caseloads for some of the key IDES offices, namely each VA rating office and each PEB. Ultimately, the success or failure of the IDES will depend on DOD and VA's ability to sufficiently staff local sites, the VA rating offices, and the PEBs, and to resolve other challenges not only at the initiation of the transition to IDES but also on an ongoing, long-term basis. By not monitoring staffing and other risk factors, DOD and VA may not be able to ensure that their efforts to address these factors are sufficient or to identify problems as they emerge and take immediate steps to address them before they become major problems.
To ensure that the IDES is sufficiently staffed and that military treatment facilities are prepared to house personnel in the IDES, we recommend that the Secretary of Defense direct the military services to conduct thorough assessments, prior to each site's implementation of the IDES, of the following three issues: (1) the adequacy of staffing of military physicians for completing MEB determinations at military treatment facilities, with contingency plans developed to address potential staffing shortfalls, for example, due to staff turnover or caseload surges; (2) the availability of housing for servicemembers in the IDES at military facilities, with alternative housing options identified if sites do not have adequate capacity; and (3) the capacity of organizational units to absorb servicemembers undergoing the disability evaluation, with plans in place to ensure servicemembers are appropriately and constructively engaged. To improve their agencies' ability to resolve differences about diagnoses of servicemembers' conditions, and to determine whether their new guidance sufficiently addresses these disagreements, we recommend that the Secretaries of Defense and Veterans Affairs take the following two actions: (1) conduct a study to assess the prevalence and causes of such disagreements and (2) establish a mechanism to continuously monitor disagreements about diagnoses between military physicians and VA examiners and between PEBs and VA rating offices. To enable their agencies to take early action on problems at IDES sites postimplementation, we recommend that the Secretaries of Defense and Veterans Affairs develop a system-wide monitoring mechanism to identify challenges as they arise in all DOD and VA facilities and offices involved in the IDES.
This system could include: (1) continuous collection and analysis of data on DOD and VA staffing levels, sufficiency of exam summaries, and diagnostic disagreements; (2) monitoring of available data on caseloads and case processing time by individual VA rating office and PEB; and (3) a formal mechanism for agency officials at local DOD and VA facilities to communicate challenges and best practices to DOD and VA headquarters offices. We provided a draft of this report to DOD and VA for review and comment. The agencies provided written comments, which are reproduced in appendixes III and IV. DOD and VA generally concurred with our recommendations. Each agency also provided technical comments, which we incorporated as appropriate. DOD concurred with our recommendation to ensure that, before the IDES is implemented at each new site, a thorough assessment be done of the site's staffing adequacy, the availability of housing for servicemembers in the IDES, and the capacity of organizational units to appropriately and constructively engage servicemembers in the IDES. However, DOD stated that the IDES site assessment matrix addresses plans to ensure that servicemembers are gainfully employed while in the IDES. We changed our report to more clearly indicate that the site assessment matrix does not, in fact, address such plans. We believe that specifically identifying this in the matrix could help local DOD officials, including servicemembers' unit commanders, focus on ensuring gainful employment or other support. DOD concurred, and VA concurred in principle, with our recommendation to study and establish mechanisms to monitor diagnostic differences. VA identified a plan to study the prevalence and causes of diagnostic differences and determine by July 1, 2011, whether mechanisms are needed. DOD stated that it expects, as diagnostic differences are monitored and studied, that the agencies will address and resolve many of the issues identified in our report.
We agree that the planned study could yield valuable insights on how to resolve diagnostic differences but emphasize that continuous monitoring of such differences over a period of time may be needed to assess the extent and nature of such differences, as well as the success of any actions to address them. Both agencies concurred with our recommendation to develop monitoring mechanisms to help them take early action on problems that may arise at IDES sites postimplementation. VA stated that the VTA system currently has data that can be monitored by PEB and VA rating site, and DOD said its weekly monitoring report could be modified to present these data. Also, VHA plans to monitor the IDES exam workload, including numbers of exam requests compared with forecasts, exam timeliness, and insufficient exams. Implementation is scheduled for December 31, 2010. In terms of identifying site implementation problems for quick resolution, DOD stated that the military services bring sites' challenges and best practices to the Disability Advisory Council, a DOD body that includes VA representatives, which is being re-chartered as part of the Benefits Executive Council, a subgroup of the VA-DOD Joint Executive Council. VA and DOD's plans sound promising and are consistent with our recommendations, provided that they allow for ongoing monitoring of site staffing levels and create a systematic way for local DOD and VA staff to communicate their challenges or best practices, enabling the agencies to identify and address problems at an early stage. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Secretary of Veterans Affairs, and other interested parties. The report is also available at no charge on the GAO Web site at www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7215 or at bertonid@gao.gov.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff members who made key contributions to this report are listed in appendix V. In conducting our review of the integrated disability evaluation system (IDES) piloted by the Departments of Defense (DOD) and Veterans Affairs (VA), our objectives were to examine (1) the results of DOD and VA's evaluation of the IDES pilot, (2) challenges in implementing the piloted system to date, and (3) DOD and VA plans to expand the piloted system and whether those plans adequately address potential challenges. We conducted this performance audit from November 2009 to December 2010, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. To address objective 1, we reviewed DOD and VA policy guidance, reports, and analysis plans to determine how the agencies are evaluating the pilot's effectiveness and to obtain information on their results. We also reviewed the relevant requirements of the National Defense Authorization Act of 2008 as it pertains to this review. In addition, we interviewed officials responsible for the evaluation at DOD's Office of the Deputy Under Secretary of Defense for Wounded Warrior Care & Transition Policy (WWCTP), DOD's Defense Manpower Data Center, and two organizations that DOD has contracted with to perform the evaluation—Booz Allen Hamilton and Westat.
We then tested the reliability of the data the agencies are using for their evaluation—data from surveys of servicemembers, IDES case data from the Veterans Tracking Application (VTA) system, and legacy case data that DOD's WWCTP obtained from the military services. Finally, we conducted some analyses of IDES and legacy case data for the Army to compare the two systems on timeliness and appeal rates, using elements of the data that we found to be reliable, but these comparisons have limitations and are not generalizable to other military services. The sections below describe our data reliability work and our analysis of Army data in further detail. DOD and VA have been surveying servicemembers going through the IDES pilot, and a comparison group of veterans who went through the standard "legacy" disability evaluation system, to determine whether the IDES pilot has improved servicemember satisfaction. The agencies survey all servicemembers in the IDES pilot at three points in time—following their completion of the medical evaluation board (MEB) phase of the disability evaluation process, following completion of the physical evaluation board (PEB) phase, and during the transition phase. To create a comparison group, the agencies sampled veterans who have been through the legacy system at current pilot sites. Their sampling methods were designed to ensure that the pilot and legacy groups were of comparable size and had similar proportions of servicemembers found unfit for duty. DOD and VA are analyzing the differences between the pilot and legacy groups' average responses on four "survey composites," or general categories composed of several survey questions: overall experience, fairness, DOD board liaison officer customer service, and VA case manager customer service. We reviewed the reliability of surveys DOD and VA are using to obtain information on satisfaction levels by examining their survey design and analysis.
To do so, we interviewed officials at DOD’s Defense Manpower Data Center and Westat responsible for implementing the survey, as well as officials at WWCTP and Booz Allen Hamilton responsible for designing the survey and analyzing the survey data. We also reviewed the survey instruments, response rates, data analysis plans, analysis results, and survey data as of February 28, 2010. We found DOD’s survey methodology—and the data derived using that methodology—to be reliable for purposes of comparing servicemembers’ satisfaction levels in the IDES and legacy disability evaluation systems. DOD and VA are collecting data on IDES pilot cases through the VTA and are using these data to conduct ongoing monitoring of case processing times and appeal rates, with the results presented in weekly reports. VA manages VTA, but evaluation of the data is primarily conducted by staff at DOD’s WWCTP and Booz Allen Hamilton. For their August 2010 interim report to Congress, DOD staff created a data set used to compare pilot and legacy processing times and appeal rates. This data set included IDES pilot cases as of February 28, 2010, with the earliest case started in November 2007. The data set also included data, as of January 31, 2010, on legacy cases started between fiscal years 2005 and 2009 at the first 21 sites operating the IDES pilot, prior to pilot implementation. The agencies also matched legacy case data from each of the military services with VA data, in order to capture additional processing time it took for servicemembers to navigate the VA disability claims process. Because the data set was created from February 2010 pilot data, it only included about one-third of the IDES pilot cases that were completed as of August 29, 2010. 
The February 2010 data set included cases from 17 of the 27 current pilot sites, and 7 of the 17 sites—including some of the pilot sites with the largest caseloads such as Fort Carson and Camp Lejeune—had fewer than 20 completed cases each when the data set was created. To assess whether the data DOD and VA are using for their monitoring and evaluation are reliable, we obtained the early 2010 data set that the agencies planned to use for their evaluation report to Congress. We restricted our reliability assessments to the specific variables that the agencies used in their analyses. Following steps detailed below, we found that the IDES pilot case data were sufficiently reliable for our analyses, but that the legacy case data were incomplete with respect to data elements key to measuring case processing time and appeal rates. To assess the reliability of the agencies’ IDES pilot data, we interviewed VA database managers responsible for VTA, reviewed VTA manuals and guidance, conducted electronic tests of the data and, for a small, random sample of cases, checked the data against case files. Through our interviews and document reviews, we concluded that the agencies have sufficient internal controls to give reasonable assurance that the data are complete. Our electronic testing of the data generally found low rates of missing data and errors in completed IDES cases. In these tests, we considered a data element to be sufficiently reliable for purposes of using in our report if 15 percent or less of the data were missing or had errors. Using this standard, we determined that one data element for IDES cases—the date that servicemembers separated from the military—was not reliable, because: (1) it was missing in 19 percent of completed cases and (2) in cases where the date was present, more than 30 percent appeared to have errors (for example, the date was before a step of the process that it should have followed). 
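The 15-percent screen described above reduces to a simple per-field rule. As an illustration only, the following Python sketch shows one way such a screen could be implemented; the record layout, field names, and error check are hypothetical and are not the actual VTA schema or GAO's test code.

```python
from datetime import date

def field_reliability(cases, field, error_check=None):
    """Return (missing_rate, error_rate) for one data element."""
    total = len(cases)
    missing = sum(1 for c in cases if c.get(field) is None)
    present = [c for c in cases if c.get(field) is not None]
    errors = sum(1 for c in present if error_check(c)) if error_check else 0
    missing_rate = missing / total
    error_rate = errors / len(present) if present else 0.0
    return missing_rate, error_rate

def is_reliable(missing_rate, error_rate, threshold=0.15):
    # A data element passes only if no more than 15 percent of the data
    # are missing and no more than 15 percent of present values are in error.
    return missing_rate <= threshold and error_rate <= threshold

# Hypothetical example: a separation date that precedes the final
# disposition date is treated as an error, mirroring "the date was before
# a step of the process that it should have followed."
cases = [
    {"final_disposition": date(2010, 1, 10), "separation": date(2010, 2, 1)},
    {"final_disposition": date(2010, 3, 5),  "separation": None},
    {"final_disposition": date(2010, 4, 2),  "separation": date(2010, 3, 1)},
]
missing_rate, error_rate = field_reliability(
    cases, "separation",
    lambda c: c["separation"] < c["final_disposition"])
```

Under this rule, a field like the separation date in the finding above (19 percent missing, more than 30 percent apparent errors among present values) would fail the screen and be excluded from analysis.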
We also conducted a trace-to-file process to determine whether date fields in the VTA system were an accurate reflection of the information in the IDES case files. Specifically, we compared 12 date fields in the VTA against a random sample of paper files for 54 completed cases: 24 from the three Army PEBs, 10 from the Air Force PEB, and 20 from the Navy PEB (10 Navy cases and 10 Marine Corps). In comparing these dates, we allowed for a 10 percent discrepancy in dates—i.e., a difference of 2 to 10 days, depending on the date and phase of the process—to allow for the possibility that dates may have been entered into the database after an event took place. The trace-to-file process resulted in an overall accuracy rate of 84 percent. For five data elements key to DOD and VA’s evaluation of the IDES pilot, we found that VTA dates reflected dates in the case files 85 percent of the time or better. For six key data elements—i.e., the end dates of the exam and MEB phases, the start of the PEB phase, the date a VA rating request was made, the date of the final disposition, and the date servicemembers received VA benefits—the VTA dates matched case file dates between 70 to 85 percent of the time. Although we considered these dates sufficiently reliable to include in this report, these dates should be interpreted with more caution. The separation date was accurate less than 70 percent of the time and did not meet our standards of reliability. To assess the reliability of the legacy data that the agencies planned to compare the IDES pilot against, we tested the data electronically, and found that data for key dates and appeals indicators had significant gaps because the services did not collect the same information for legacy cases that were collected for pilot cases (see table 2). For example, only Army cases had information on when servicemembers were referred to the MEB process. 
In addition, the legacy data did not include the date when servicemembers received VA benefits—which is necessary for measuring the full length of the legacy process. Without sufficient data on the beginning (when servicemembers were referred into the system) or end of the process (when they received VA benefits), we concluded that the full case processing time in the legacy system cannot be known. We also concluded that comparisons could not be made between the legacy and IDES pilot on appeal rates because only Army and Air Force cases had information on whether servicemembers appealed the informal PEB decisions. In addition to reviewing the reliability of the IDES pilot and legacy data, we reviewed how DOD and VA are using the data for their comparisons of the two disability evaluation systems. Through interviews with officials at DOD's WWCTP and Booz Allen Hamilton and documents they provided us, we understand that DOD planned to address gaps in the legacy data by: (1) approximating the referral dates in Air Force, Marine Corps, and Navy cases using Army data and (2) using dates when cases were ready to be rated by VA to approximate the end of the process. Specifically, to approximate referral dates, they said they would use the average time for Army cases between when the servicemember was referred and when the MEB documentation identifying the servicemember's potentially unfitting medical conditions (i.e., the narrative summary) was completed, which they calculated to be 60 days. For Navy and Marine Corps cases, they then subtracted 60 days from the date of the narrative summary to estimate a referral date and, for Air Force cases, they did so from the date of the MEB decision. However, because only 11 percent of Army legacy cases had a narrative summary date, the estimate of 60 days is based on a small number of cases (see table 3).
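DOD's planned approximation amounts to simple date arithmetic. The Python sketch below illustrates the method as described above; the function and parameter names are invented for illustration, and this is not DOD's actual code.

```python
from datetime import date, timedelta

# Average Army time from referral to completed narrative summary, as DOD
# calculated it from the small subset of Army legacy cases with that date.
AVG_REFERRAL_TO_SUMMARY = timedelta(days=60)

def estimate_referral_date(service, narrative_summary_date=None,
                           meb_decision_date=None):
    """Approximate a legacy case's referral date for services that did
    not record one, per DOD's planned method."""
    if service in ("Navy", "Marine Corps"):
        anchor = narrative_summary_date
    elif service == "Air Force":
        anchor = meb_decision_date
    else:
        # Army legacy cases recorded referral dates directly.
        raise ValueError("no approximation needed for " + service)
    if anchor is None:
        return None  # the gap stays a gap; nothing to anchor the estimate
    return anchor - AVG_REFERRAL_TO_SUMMARY
```

As noted above, the 60-day average itself rests on the roughly 11 percent of Army legacy cases that had a narrative summary date, so estimates derived this way carry small-sample uncertainty.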
To address the lack of data on the date VA benefits were delivered, DOD planned to use the date that VA determined a case was ready to be rated to approximate the end of the process, though this would underestimate the length of time it took to deliver VA benefits in the legacy process. For objective 1, we presented information on average processing time in the IDES, both overall and by military service, using information presented by DOD and VA in their weekly monitoring reports. Where information was not available in the weekly reports, we conducted our own analysis using the early 2010 data set that DOD and VA intended to use for their report to Congress. Specifically, we used these data to determine the proportion of pilot cases meeting the 295-day goal for active duty servicemembers and the 305-day goal for reserve servicemembers. In addition, although limitations in the legacy data preclude reliable comparisons between the IDES pilot and legacy systems for all the military services, the Army legacy data on when servicemembers were referred into the IDES were sufficiently complete to make some limited comparisons. Specifically, we analyzed Army legacy data to determine how long the legacy process took, on average, between when servicemembers were referred to the process and when VA was ready to conduct the disability rating. We limited our analysis to cases in which a VA claim was filed between 2006 and 2009 because data on when VA was ready to conduct the rating were missing for a substantial number of cases in which the VA claim was filed in 2005 or 2010. We compared this legacy average with the total pilot case processing time through delivery of VA benefits, but we noted that the legacy average does not account for the time for VA to complete the rating and deliver the benefits. 
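The two timeliness measures used in this analysis—average processing time and the proportion of cases meeting the day-count goal—are simple arithmetic over per-case durations. A minimal sketch follows; the case durations shown are hypothetical, and only the 295-day (active duty) and 305-day (reserve) goals come from the report:

```python
# Each value is a completed case's total processing time in days, from
# referral through delivery of VA benefits (hypothetical example data).
active_duty_days = [250, 280, 310, 295, 270, 330]

ACTIVE_DUTY_GOAL = 295  # DOD/VA goal for active duty cases; the reserve goal is 305

average = sum(active_duty_days) / len(active_duty_days)
share_meeting_goal = sum(d <= ACTIVE_DUTY_GOAL for d in active_duty_days) / len(active_duty_days)

print(f"average: {average:.1f} days")             # → average: 289.2 days
print(f"meeting goal: {share_meeting_goal:.0%}")  # → meeting goal: 67%
```

Note that an average under the goal does not imply most cases met it; the two measures can move independently, which is why the report tracks both.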
We also analyzed Army data on appeals in order to illustrate the limitations of DOD's plan to compare only appeals to the informal PEB in the pilot and legacy systems and not take into account appeals of rating decisions to VA. We conducted this analysis using the legacy data and pilot case data as of early 2010, since DOD and VA's weekly reports do not contain information on appeals to VA. To identify challenges in implementing the IDES during the pilot phase, we visited 10 of the 27 military treatment facilities participating in the pilot. At the site visits, we interviewed officials involved in implementing the IDES from both DOD and VA, including military facility commanders and administrators, DOD board liaisons, military physicians involved in MEB determinations, DOD legal staff, VA case workers, VA or contract examiners, and administrators at VA medical clinics and VA regional offices. We selected the 10 facilities to obtain perspectives from sites in different military services and geographical regions that also varied in their disability evaluation caseloads and in how their single exams were conducted (by DOD, VA, or a VA contractor) (see table 4). We also interviewed various offices at DOD and VA involved in implementing the IDES pilot. At DOD, this included WWCTP; Office of the Assistant Secretary of Defense for Health Affairs; Office of the Assistant Secretary of Defense for Reserve Affairs; Air Force Physical Disability Division; Army Physical Disability Agency; Navy Physical Evaluation Board; Office of the Air Force Surgeon General; Army Medical Command; and Navy Bureau of Medicine and Surgery. At VA, we interviewed officials in the Veterans Benefits Administration, Veterans Health Administration, and VA/DOD Collaboration Service. 
Furthermore, we reviewed relevant documents, including DOD and VA policies and guidance and records of “hotwash” meetings, which DOD and VA held shortly after implementing the IDES at pilot sites to identify implementation successes and challenges. We also reviewed data on processing times for the single exams, MEB determinations, informal PEB decisions, and VA ratings, as reported in the agencies’ weekly monitoring reports. In addition, we reviewed relevant federal laws and regulations. To determine whether the IDES process extended the time that servicemembers remained in military service, we analyzed the legacy and pilot case data from the early 2010 data set, but we identified several limitations with the data. As noted earlier, the date servicemembers separated from the military was missing for 19 percent of completed IDES pilot cases. Further, as shown in table 5, only Air Force cases contained data on the separation date in the legacy data. As also noted earlier, only the Army legacy data contained information on when servicemembers were referred into the legacy process. As a result, for Army cases, we compared the average length of time it took cases to reach a final PEB decision in the legacy and pilot systems, since this date was sufficiently complete in both sets of data. The PEB decision is the last phase of the disability evaluation process before a servicemember either begins to transition out of military service or, if found fit, returns to his or her unit. To identify the agencies’ preparations for worldwide expansion of the IDES, we reviewed documents on DOD and VA’s expansion strategy, their site assessment matrix, and weekly monitoring reports which, beginning in July 2010, tracked key implementation time frames, both nationally and at individual military treatment facilities. Our interviews with officials involved in the pilot at DOD, VA, and each of the military services also provided us with information on the agencies’ expansion plans. 
We also reviewed relevant federal laws and regulations. We determined the adequacy of the agencies’ planning efforts by assessing whether their plans addressed the challenges we had identified in objective 2. We also determined whether the plans incorporated internal controls described in GAO’s Standards for Internal Control in the Federal Government and best practices for program implementation identified in academic literature. The figures below show case processing times in the IDES pilot for reserve component servicemembers. Figure 11 shows the average number of days it took to complete the process—i.e., to deliver VA benefits to reserve component servicemembers, as of August 2010. Figure 12 shows the percentage of cases that met the DOD and VA goal to deliver VA benefits within 305 days, as of February 2010. Figures 13-15 show the average length of time it took, as of August 2010, to complete phases of the IDES process—i.e., the single exam, the MEB documentation, and the informal PEB decision, respectively—each of which has taken longer, on average, than the goals established by DOD and VA. Michele Grgich (Assistant Director), Yunsian Tai (Analyst-in-Charge), Jeremy Conley, and Greg Whitney made significant contributions to this report. Walter Vance and Vanessa Taylor provided assistance with research methodology and data analysis. Bonnie Anderson, Rebecca Beale, Mark Bird, Brenda Farrell, Valerie Melvin, Patricia Owens, and Randall Williamson provided subject matter expertise. Susan Bernstein and Kathleen van Gelder provided writing assistance. James Bennett provided graphics assistance. Roger Thomas provided legal counsel.
Since 2007, the Departments of Defense (DOD) and Veterans Affairs (VA) have been testing a new disability evaluation system designed to integrate their separate processes and thereby expedite veterans' benefits for wounded, ill, and injured servicemembers. Having piloted the integrated disability evaluation system (IDES) at 27 military facilities, they are now planning for its expansion military-wide. Part of the National Defense Authorization Act for Fiscal Year 2008 required GAO to report on DOD and VA's implementation of policies on disability evaluations. This report examines: (1) the results of the agencies' evaluation of the IDES pilot, (2) challenges in implementing the IDES pilot to date, and (3) whether DOD and VA's plans to expand the IDES adequately address potential future challenges. GAO analyzed data from DOD and VA, conducted site visits at 10 military facilities, and interviewed DOD and VA officials. In their evaluation of the IDES pilot as of February 2010, DOD and VA concluded that it had improved servicemember satisfaction relative to the existing "legacy" system and met their established goal of delivering VA benefits to active duty and reserve component servicemembers within 295 and 305 days, respectively, on average. While these results are promising, average case processing times have steadily increased since the February 2010 evaluation. At 296 days for active duty servicemembers, as of August 2010, processing time for the IDES is still an improvement over the 540 days that DOD and VA estimated the legacy process takes to deliver VA benefits to members. However, the full extent of improvement of the IDES over the legacy system is unknown because (1) the 540-day estimate was based on a small, nonrepresentative sample of cases and (2) limitations in legacy case data prevent a comprehensive comparison of timeliness, as well as appeal rates. 
Piloting of the IDES has revealed several implementation challenges that have contributed to delays in the process, the most significant being insufficient staffing by DOD and VA. Staffing shortages were severe at a few pilot sites that experienced caseload surges. For example, at one of these sites, due to a lack of VA medical staff, it took 140 days on average to complete one of the key features of the pilot--the single exam--compared with the agencies' goal to complete this step of the process in 45 days. The single exam posed other challenges that contributed to process delays, such as exam summaries that did not contain sufficient information for VA to determine the servicemember's benefits and disagreements between DOD and VA medical staff about diagnoses for servicemembers' medical conditions. Cases with these problems were returned for further attention, adding time to the process. Pilot sites also experienced logistical challenges, such as incorporating VA staff at military facilities and housing and managing personnel going through the process. As DOD and VA prepare to expand the IDES worldwide, they have made preparations to address a number of these challenges, but these efforts have yet to be tested, and not all challenges have been addressed. To address staffing shortages and ensure timely processing, VA is developing a contract for additional medical examiners, and DOD and VA are requiring local staff to develop written contingency plans for handling surges in caseloads. However, the agencies lack strategies for meeting some key challenges, such as ensuring enough military physicians to handle anticipated workloads. They also do not have a comprehensive monitoring plan for identifying problems as they occur--such as staffing shortages and insufficiencies in medical exams--in order to take remedial actions as early as possible. 
GAO is making several recommendations to improve DOD and VA's planning for expansion of the new disability evaluation system, including developing a systematic monitoring process and ensuring that adequate staff is in place. DOD and VA generally concurred with GAO's recommendations and provided technical comments that GAO incorporated into the report as appropriate.
To be eligible for an advance tax refund in 2001, taxpayers (1) had to have a federal income tax liability on their tax year 2000 return, (2) could not be claimed as a dependent on someone else’s tax year 2000 return, and (3) could not be a nonresident alien. The amount of advance tax refund that taxpayers could receive depended on their filing status and the amount of taxable income shown on their tax year 2000 return. The maximum refund amount was $600 for a married couple filing jointly or a qualified widow(er), $500 for a head of household, and $300 for a single individual or married person filing separately. Before issuing the advance tax refund checks, IRS was to send every individual who filed a return for tax year 2000 a notice either informing them of the refund amount they were to receive and the week in which they were to receive it or telling them that they were ineligible for a refund and why. Prior to sending a disbursement request to FMS, IRS was to reduce the amount of the refunds for any delinquent federal taxes owed by the taxpayers. FMS then issued the advance refund checks for IRS with assistance from the Defense Finance and Accounting Service (DFAS). Before issuing these checks, FMS was to reduce the amount of the checks by the amount of certain other debts owed by the taxpayers, such as delinquent child support. These reductions by IRS and FMS are referred to as “offsets.” IRS sent out the initial advance tax refund notices to 112 million taxpayers by mid-July 2001. Most advance refund checks were then to be issued over a 10-week period from the week of July 23, 2001, through the week of September 24, 2001, based on the last two digits of a taxpayer’s Social Security number (SSN). For example, taxpayers with 00 through 09 as the last two digits of their SSN were to receive their checks the week of July 23, 2001, while taxpayers with 90 through 99 as the last two digits of their SSN were to receive their checks the week of September 24, 2001. 
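The digit-based mailing schedule described above maps the last two digits of an SSN to one of 10 consecutive weeks. A minimal sketch of that lookup (the function name is hypothetical; the week-of-July-23 start and the digits-to-week groupings come from the schedule in the text):

```python
from datetime import date, timedelta

FIRST_WEEK = date(2001, 7, 23)  # week of July 23, 2001, for digits 00-09

def mailing_week(ssn_last_two: int) -> date:
    """Return the start of the scheduled mailing week for a given pair of
    final SSN digits (00-99): each block of 10 digits maps to one week."""
    if not 0 <= ssn_last_two <= 99:
        raise ValueError("expected the last two digits of an SSN")
    return FIRST_WEEK + timedelta(weeks=ssn_last_two // 10)

print(mailing_week(5))   # digits 00-09 → 2001-07-23
print(mailing_week(97))  # digits 90-99 → 2001-09-24
```

The integer division by 10 reproduces the grouping in the text: digits 00-09 fall in the first week and 90-99 in the tenth week, which lands on September 24, 2001, as stated.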
Taxpayers who filed their tax year 2000 returns after April 16 were to receive their advance tax refund checks later in the fall. All checks were to be issued by December 31, 2001. Taxpayers who received an advance refund check for the full $600, $500, or $300 based on their tax year 2000 filing status, as well as taxpayers who would have received these amounts except for having all or part of their check offset, were not eligible for a rate reduction credit on their 2001 return. However, taxpayers who were entitled to an advance refund check but either did not receive a check because IRS did not have their current address, for example, or did not receive the maximum amount for their filing status because they did not have enough taxable income in 2000, may have been eligible for a rate reduction credit on their tax year 2001 return. In addition, taxpayers who were not entitled to an advance tax refund, such as those who did not have taxable income in 2000, may have been entitled to a rate reduction credit provided they had taxable income in 2001. We obtained, from IRS and FMS, statistical information on the number and dollar amount of advance tax refund checks issued, the number and dollar value of refund checks that were offset for federal tax debts and for debts other than federal taxes, and the cost to IRS and FMS for administering the program. We did not independently verify the statistical and cost data provided by IRS and FMS. However, as discussed later, a sampling of advance tax refund transactions done as part of our audit of IRS’s fiscal year 2001 financial statements indicated that there were no material errors requiring audit adjustments. In addition, based on sampling done during its review of the advance tax refund program, TIGTA concluded that IRS had accurately calculated and issued advance refunds to eligible taxpayers. 
To assess implementation of the advance tax refund program, we collaborated with TIGTA staff who reviewed various aspects of the program, such as the accuracy of IRS’s computer programming and taxpayer eligibility for advance refunds; analyzed advance tax refund procedures, including IRS procedures for meeting expected increases in telephone demand and FMS procedures for handling undeliverable refund checks, refund offsets, and claims for nonreceipt of refunds; discussed with officials of IRS’s Office of the Taxpayer Advocate that office’s involvement in the advance tax refund program; and obtained statistics on undeliverable advance refund notices and checks; taxpayer contacts with IRS concerning advance tax refunds and the level of telephone service provided by IRS during the advance tax refund period; claims for nonreceipt of refunds; and duplicate, altered, and counterfeit advance tax refund checks. To determine the effect of the advance tax refunds and related rate reduction credit on the 2002 tax filing season, we collaborated with TIGTA staff who determined if IRS properly identified and referred for correction returns with rate reduction credit errors during the 2002 filing season; reviewed information on the rate reduction credit on IRS’s Web site and in the instructions IRS provided taxpayers for preparing income tax returns to be filed in 2002; analyzed statistics on (1) the number and type of rate reduction credit errors on tax returns filed in 2002, (2) the demand for telephone assistance during the 2002 filing season, and (3) the level of telephone service provided by IRS during that period; and discussed with IRS officials their procedures for handling rate reduction credit errors and responding to any increased demand for telephone assistance. We used the results of our work and TIGTA’s to identify observations that IRS may find useful if it is required to issue advance tax refunds or encounters a similar management challenge in the future. 
We did our work at IRS’s National Office in Washington, D.C.; the IRS campuses in Atlanta, Ga., and Philadelphia, Pa.; IRS’s Wage and Investment Division and Joint Operations Center in Atlanta, Ga.; and FMS’s National Office in Washington, D.C. Our work was done between July 2001 and May 2002 in accordance with generally accepted government auditing standards. We obtained written comments on a draft of this report from the Commissioner of Internal Revenue and the Commissioner of FMS. Their comments are discussed near the end of this report and are in appendixes IV and V, respectively. Between July and December 2001, about $36.4 billion in advance tax refunds were issued to about 86 million taxpayers. Another $3 billion in advance tax refunds was offset for various debts owed by taxpayers, most of which was for delinquent federal taxes. According to IRS and FMS officials, this initiative was accomplished at a cost of at least $138 million. IRS, through FMS, mailed out advance tax refunds according to a schedule that called for taxpayers to begin receiving checks the week of July 23, 2001. Altogether, almost 92 million taxpayers were to receive about $39 billion in advance tax refunds, with most of the checks to be received during the first 10 weeks of the program. However, primarily because some checks were offset to recover past debts and, to a lesser extent, because other checks were returned undeliverable, about 86 million taxpayers received about $36.4 billion in advance refunds. The notice IRS sent to taxpayers who were eligible to receive an advance tax refund included a statement that the amount of the refund could be reduced by any outstanding debt owed, such as past due federal and state income taxes or child support. In that regard, for any taxpayer whose account involved a federal tax debt, IRS was to offset the advance tax refund, either in whole or in part, to collect the debt. 
In addition, FMS was to offset the advance tax refunds to collect other types of debt via the Treasury Offset Program. Taxpayers whose advance refunds were offset, either in whole or in part, were to receive a notice explaining the offset. Because IRS and FMS have no effective way of associating notices from IRS with checks issued by FMS, notices regarding IRS offsets would have been sent to taxpayers separately from the advance refund checks. On the other hand, notices regarding FMS offsets could be mailed with the checks. According to data obtained from IRS and FMS, the two agencies offset the advance tax refunds by almost $3 billion to collect various types of taxpayer debt. IRS offset about $2.5 billion to recover delinquent federal tax, and about 5.4 million taxpayers had their entire advance tax refund offset due to a federal tax delinquency. FMS offset about $468 million: $261.5 million for delinquent child support, $190.8 million for federal debts other than delinquent taxes, and $15.7 million for delinquent state taxes. Some taxpayers also had their advance tax refunds offset by as little as 1 cent for interest that was owed. According to IRS, this resulted from its failure to include accrued interest in computer programming that IRS implemented in January 2001 to write off small dollar amounts of tax owed. An IRS official said that the computer programming has since been corrected. However, according to the official, IRS did not track the number of taxpayers who were affected. According to an IRS official, it cost IRS about $104 million to administer the advance tax refund program through the end of fiscal year 2001. Included in these costs were $36 million for contract costs, $33 million for postage, $30 million for labor, and $5 million for printing. According to an FMS official, FMS incurred about $34 million in costs to issue the checks on behalf of IRS, including the assistance provided by DFAS. 
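The offset figures reported above can be cross-checked with simple arithmetic. The sketch below uses only the dollar amounts stated in the text (in millions); it is an illustration of the totals, not an agency calculation:

```python
# Offsets reported by the two agencies, in millions of dollars.
irs_offsets_for_federal_tax = 2_500  # "about $2.5 billion"
fms_offsets = {
    "delinquent child support": 261.5,
    "federal debts other than delinquent taxes": 190.8,
    "delinquent state taxes": 15.7,
}

fms_total = sum(fms_offsets.values())
print(f"FMS total: ${fms_total:.1f} million")  # → FMS total: $468.0 million

combined = irs_offsets_for_federal_tax + fms_total
print(f"combined: ${combined / 1000:.2f} billion")  # → combined: $2.97 billion
```

The three FMS components sum to the $468 million cited, and the combined IRS and FMS offsets of about $2.97 billion match the "almost $3 billion" figure in the text.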
In order to administer the advance tax refund program, IRS, among other things, had to develop the computer programming necessary to determine taxpayer eligibility for a refund and the amount of refund, including any related federal tax offset; arrange for printing and mailing notices informing taxpayers whether or not they would receive a refund; respond to telephone calls and correspondence from taxpayers concerning the refund; resolve undelivered and returned refund checks; and prepare adjustment notices for refunds that were offset due to federal tax debts. According to an IRS official, it took about 3 months between March and June 2001 to develop the necessary computer programming to implement the advance tax refund program and to arrange for printing and mailing advance tax refund notices. IRS temporarily reassigned staff from other functions to assist with taxpayer telephone calls and correspondence related to the advance tax refunds. For example, IRS recalled furloughed staff at its forms distribution centers to assist taxpayers who called IRS with questions about the advance refund that were relatively easy to answer. In addition, IRS used submission processing staff from its Philadelphia campus to help respond to over 90,000 written inquiries from taxpayers concerning the advance tax refunds. In its reports on IRS’s implementation of the advance tax refund program in 2001, TIGTA concluded that (1) most taxpayers received accurate and timely notification of their advance refunds, (2) advance refunds were accurately calculated and issued to eligible taxpayers, and (3) IRS took proper actions to prevent the issuance of advance tax refunds after December 31, 2001. Similarly, our review of a sample of 80 advance tax refund transactions disclosed no material errors requiring adjustments in the advance refund sample. 
We determined that all of the taxpayers in our sample were eligible for the advance refund, all of those refunds were calculated correctly, there were no instances where a taxpayer had a debt recorded in the Treasury Offset Program that should have been offset but was not, and there were no instances in which the taxpayer had an outstanding tax debt that should have been offset by IRS but was not. Based on our sample results, we estimate that the rate of errors requiring adjustment in the population of all advance tax refunds was 0 percent, with an estimated upper limit of 4.5 percent. Despite this significant accomplishment, TIGTA’s work and ours identified the following problems related to implementation of the advance tax refund program in 2001: A computer programming problem resulted in 523,000 taxpayers receiving inaccurate advance refund notices. About 5.3 million taxpayers received untimely advance refund notices because of IRS’s procedures for processing returns and the way programming was developed to generate advance refund notices. Over 2 million advance refund notices and about 548,000 advance refund checks valued at about $174 million were returned undeliverable due to incorrect addresses. As of late October 2001, IRS had identified about 34,000 of these checks that were not reissued in a timely manner even though it had updated address information. Taxpayers who called IRS during the first 3 months of the advance tax refund period (July through Sept.) had greater difficulty reaching IRS assistors than did taxpayers who called during the same time frame in 2000 or during the 2001 tax filing season. As noted earlier, the maximum amount of a taxpayer’s advance refund was to be $600, $500, or $300 depending on the taxpayer’s filing status. 
However, the actual amount of the advance refund was limited to the lesser of (1) 5 percent of the taxable income on the taxpayer’s tax year 2000 return and (2) the net income tax from the tax year 2000 return after subtracting nonrefundable credits, such as the credit for child and dependent care expenses, child tax credit, credit for the elderly, and education credit. TIGTA found that IRS had initially erred in developing its computer program for the advance tax refunds by not limiting the refund amounts to the net income tax after nonrefundable credits. As a result, TIGTA determined that about 523,000 taxpayers had been sent inaccurate notices indicating that they would receive larger advance refund checks than they were entitled to receive. TIGTA informed IRS of this programming error on July 3, 2001, and IRS was able to correct the programming before any erroneous advance tax refunds were issued—thus avoiding overpayments of about $118 million. IRS also sent corrected notices to the affected taxpayers. According to TIGTA, IRS management determined that the error arose because testing of the programming only verified that the computer output matched the programming logic. The testing did not verify that the programming logic was in accordance with the requirements of the law. TIGTA also determined that about 5.3 million taxpayers who had filed their tax year 2000 returns by the April 16, 2001, filing deadline would have delays of 1 to 9 weeks in receiving their advance refund notices. According to TIGTA, this delay prevented taxpayers from being notified of their advance refunds in a timely manner and may have caused additional calls to IRS from taxpayers wanting to know when they would receive their advance refund. TIGTA attributed the delays to the following two reasons: IRS’s normal procedure is to process income tax returns filed by taxpayers who are due to receive a tax refund before processing income tax returns filed by other taxpayers. 
Thus, many nonrefund returns filed by April 16, 2001, had not been processed by the time IRS prepared the initial list of taxpayers who were to receive advance tax refund notices. When IRS developed the programming to generate the advance tax refund notices for taxpayers whose returns had yet to be processed when the initial list was prepared, it decided to mail the notices to these taxpayers just before they were to receive their advance tax refund checks, rather than mailing the notices as soon as their tax returns were processed. In response to TIGTA’s finding, IRS issued a news release explaining that some taxpayers might experience a delay in receiving their advance tax refund notices. One challenge that IRS encountered throughout the implementation of the advance tax refund program involved undeliverable advance tax refund notices and checks due to incorrect addresses. Undeliverable advance refund notices were to be returned to IRS’s Philadelphia campus, and undeliverable advance refund checks were to be returned to the FMS payment center from which they were issued. Through December 31, 2001, over 2 million advance tax refund notices were returned undeliverable to IRS (about 1.6 percent of 125 million notices sent), including about 1.2 million notices sent to taxpayers who were to receive an advance refund and about 900,000 notices sent to taxpayers who were ineligible for an advance refund. According to an IRS official, the undeliverable notices were sorted and counted by type of notice and then destroyed. Because these notices were sent to taxpayers via first class mail, the Postal Service was to forward notices for which taxpayers had provided an address change. Therefore, IRS decided that it would not be cost-effective to follow up on the undeliverable notices. According to IRS officials, if a notice was undeliverable, a check would still have been sent to the same address, unless IRS had received an updated address from the taxpayer. 
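The advance refund computation described earlier—the filing-status maximum, further limited to the lesser of 5 percent of tax year 2000 taxable income and the net income tax after nonrefundable credits—can be sketched as follows. This is an illustration only; the function name is hypothetical and the example taxpayer figures are invented, while the statutory maximums and the two limits come from the text:

```python
MAX_BY_STATUS = {
    "married filing jointly": 600,
    "qualifying widow(er)": 600,
    "head of household": 500,
    "single": 300,
    "married filing separately": 300,
}

def advance_refund(filing_status, taxable_income, net_income_tax):
    """Advance refund amount: the filing-status maximum, capped at the
    lesser of 5% of tax year 2000 taxable income and the net income tax
    after nonrefundable credits."""
    return min(MAX_BY_STATUS[filing_status], 0.05 * taxable_income, net_income_tax)

# A single filer with $4,000 of taxable income and $350 of net tax:
# 5% of taxable income ($200) is the binding limit, not the $300 maximum.
print(advance_refund("single", 4_000, 350))  # → 200.0
```

TIGTA's finding, restated in these terms, was that IRS's initial programming omitted the net-income-tax term from the min(), which is why some notices overstated the refund.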
According to FMS, about 580,000 advance tax refund checks had been returned as of December 31, 2001, the last date that IRS was authorized to make advance payments. About 548,000 of those checks (less than 1 percent of the advance refund checks sent) valued at about $174 million were returned undeliverable due to problems with the taxpayer’s address, according to IRS. The percentage of checks returned undeliverable (less than 1 percent) was less than the approximate 4-percent rate that an FMS official indicated was the normal rate for undeliverable tax refunds. According to an FMS official, undeliverable advance refund checks, like other tax refund checks that are returned undeliverable, were cancelled and information concerning the cancelled checks was provided to IRS. IRS was to research a taxpayer’s account to determine if there was an updated address to which another check could be sent. IRS updates taxpayer addresses each week through a National Change of Address Database maintained by the Postal Service. Taxpayers can also update their addresses with IRS by submitting an IRS Form 8822 “Change of Address.” In addition, for purposes of the advance tax refunds, IRS revised its normal procedures by authorizing its customer service representatives to accept change of address information over the telephone from taxpayers. Officials at IRS’s Philadelphia campus said that much of the written correspondence they received during the period that the advance tax refund payments were being made involved address changes from taxpayers who wanted to ensure that they would receive their checks. TIGTA found that some undeliverable advance refund checks were not reissued even though IRS had updated address information. 
According to TIGTA, this occurred because IRS did not program its computer system to automatically reissue undelivered refunds for all types of address changes made to taxpayers’ accounts and IRS employees did not always perform adequate research on IRS computer systems necessary to identify current addresses and reissue the refunds. TIGTA brought this to IRS’s attention, and as of late October 2001, IRS had identified about 34,000 taxpayers for whom refunds had not been reissued even though updated addresses were available. TIGTA estimated that the 34,000 refunds totaled over $10 million and had been delayed for an average of 8 weeks. According to TIGTA, in late December 2001, IRS reissued refunds for taxpayers for whom IRS had a more current address. However, because this issue goes beyond the advance tax refunds and affects refunds in general, TIGTA recommended that IRS (1) revise its computer programming to automatically reissue undelivered refunds for any address changes after the refunds are initially issued and (2) eliminate the need for IRS employees to perform certain IRS computer system research to identify a more current address because the recommended programming revision would enable the computer to perform this research. IRS agreed with both recommendations and plans to initiate both the programming and procedural changes necessary to implement them. Besides having to deal with undeliverable advance refund checks, FMS also had to deal with a relatively small number of duplicate, altered, and counterfeit checks. This issue is discussed in appendix I. IRS’s telephone assistance performance measures for the first 3 months of the advance tax refund period (July through Sept.)—when notices were mailed out and checks were mailed to most taxpayers—show that taxpayers had problems reaching an IRS assistor. Overall, when compared with the same 3-month period in 2000, the accessibility of IRS’s telephone assistance generally declined. 
According to IRS officials, accessibility declined because the demand for assistance, driven by taxpayer questions about the advance refund, exceeded the capacity of available telephone equipment and staffing to answer the calls. However, during the last 3 months of the refund period (Oct. through Dec.), accessibility improved compared with the first 3 months of the refund period and the same 3-month period in 2000. Problems reaching an IRS assistor may have caused some taxpayers to call the Taxpayer Advocate Service with questions about the advance refund. Appendix II has information on taxpayer contacts with the Taxpayer Advocate Service concerning advance tax refunds. IRS generally projects demand for telephone assistance based on historical data. Because IRS did not have previous experience with an initiative of the type and scope of the advance tax refund to provide historical data, IRS did a speculative analysis in June 2001 to project the volume of advance refund-related calls it would receive. The analysis projected that IRS would receive 53.2 million additional calls during the advance refund period. This would have been a 275-percent increase over the 19.4 million calls IRS received in the same 6-month period in 2000 and about a 129-percent increase over the 41.2 million calls IRS received in the 2001 filing season (Jan. through July 14, 2001), which is traditionally IRS’s busiest time of the year for telephone assistance. According to the text of the analysis, the assumptions on which the analysis was based were risky, and IRS officials had limited confidence in the results. Although IRS lacked a reliable projection of advance refund-related demand for telephone assistance, IRS officials said that they expected demand to be significant based on previous general experience with refunds and changes in the tax law. 
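As a quick arithmetic check of the projected increases cited above, the following is an illustrative sketch using the call volumes reported in the text (it is not IRS's analysis):

```python
# Illustrative check of the percentage increases cited in the text.
projected_additional = 53.2e6   # projected additional advance refund calls
same_period_2000 = 19.4e6       # calls received in the same 6-month period of 2000
filing_season_2001 = 41.2e6     # calls received in the 2001 filing season

increase_over_2000 = projected_additional / same_period_2000 * 100
increase_over_filing_season = projected_additional / filing_season_2001 * 100

print(f"{increase_over_2000:.0f}%")           # ~274%, consistent with the cited 275-percent figure
print(f"{increase_over_filing_season:.0f}%")  # ~129%
```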
However, according to IRS officials, IRS neither planned for nor expected to meet a dramatic increase in telephone assistance demand during the advance refund period; instead, it had a two-pronged approach for responding to as much of the increased demand as possible, given the available telephone equipment and staff resources. The first prong of IRS’s strategy was to handle as many calls as possible through automation, thereby freeing up assistors to handle calls that required live assistance. To accomplish this, IRS publicized its TeleTax telephone number in the notices sent to taxpayers and through an announcement played on IRS’s main telephone assistance line. The TeleTax line had recorded information on the advance tax refund program and an interactive service that told the taxpayer the expected date the check would be mailed based on the last two digits of the SSN entered by the taxpayer. IRS data show that many taxpayers called for this information—from July 1 through December 31, 2001, IRS received about 36.6 million calls on TeleTax compared with 1.8 million calls received on TeleTax during the same months in 2000. The second prong of IRS’s strategy was to devote more staffing to answering refund-related calls. IRS’s forms distribution centers recalled about 450 employees from furlough and trained them to handle simpler calls related to the advance tax refund. Also, IRS devoted more staffing to its regular telephone operations compared with the previous year—during the first 3 months of the refund period, IRS expended 1,952 staff years in its toll-free telephone operation, 179 more staff years than during the same 3-month period in 2000, or about a 10-percent increase. According to IRS officials, total staffing increases do not fully reflect the extent of the staffing for answering refund-related calls because IRS directed resources from other toll-free work, such as answering calls from taxpayers about their accounts, to answer refund calls.
IRS estimates that of the 1,952 total staff years expended during the first 3 months of the advance refund period, 493, or 25 percent, were expended answering advance refund-related calls. Despite IRS’s efforts to meet the increased demand for telephone assistance, taxpayers had greater difficulty in accessing that assistance during the first 3 months of the advance refund period as compared with the same time period in 2000. IRS has four measures for judging its performance in providing access to telephone assistance. As shown in table 1, during the first 3 months of the refund period—when notices were mailed out and checks were mailed to most taxpayers—IRS’s telephone performance declined for all four measures compared with the same time period in 2000. However, as also shown in table 1, performance improved during the last 3 months of the refund period (when, as discussed later, the demand for assistance decreased) and was better than in the same 3-month period in 2000. According to IRS officials, (1) a significant increase in the demand for telephone assistance caused the decline in accessibility during the first 3 months of the advance tax refund period, (2) this increase was driven by taxpayer questions about the advance tax refund, and (3) the demand for assistance exceeded IRS’s capacity for handling it given IRS’s available equipment and staff resources. As we previously reported, demand for assistance is one of the key factors that can affect level of service. As demand increases, for example, level of service would typically decline because, other factors being held constant, IRS would likely answer a smaller percentage of the calls. Table 2 has information on the demand for telephone assistance during the 6-month advance tax refund period and the same 6-month period in 2000. The table shows that the increase in demand during the first 3 months of the advance tax refund period was especially significant.
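The staffing figures above can be cross-checked with simple arithmetic; a minimal illustrative sketch using the numbers from the text:

```python
# Illustrative check of the staffing figures cited in the text.
staff_years_2001 = 1952   # staff years, first 3 months of the advance refund period
increase = 179            # additional staff years over the same period in 2000
refund_related = 493      # staff years spent answering advance refund calls

staff_years_2000 = staff_years_2001 - increase                # 1,773 staff years
pct_increase = increase / staff_years_2000 * 100              # ~10 percent
pct_refund_related = refund_related / staff_years_2001 * 100  # ~25 percent

print(f"{pct_increase:.0f}%")        # about a 10-percent staffing increase
print(f"{pct_refund_related:.0f}%")  # about 25 percent of total staff years
```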
Although table 2 shows that demand during the last 3 months of the advance tax refund period was higher than during the same 3 months in 2000, table 1 showed that accessibility to telephone assistance improved compared to 2000. According to IRS officials, accessibility improved despite the increase in demand because of (1) improvements in the routing of calls, (2) changes in the types of calls received, and (3) more staff time devoted to telephone assistance. The 2002 tax filing season was adversely affected by several problems related to the rate reduction credit. Most significant were the substantial number of returns filed with errors related to the credit and a degradation of telephone service in February 2002 that was likely due to an increase in demand for assistance related to the credit. Other problems were avoided when TIGTA identified computer programming errors related to the credit that IRS was able to resolve before any taxpayers were affected. However, two other issues were identified too late to avoid affecting taxpayers—one involved a programming problem that resulted in some taxpayers getting credits to which they were not entitled, the other involved an IRS policy that resulted in some taxpayers not getting credits to which they were entitled. To help ensure that taxpayers correctly dealt with the rate reduction credit on the returns they filed in 2002, IRS built checks into its computer system that enabled it to verify the amount of the rate reduction credit claimed and to adjust incorrectly claimed credit amounts accordingly. Using those checks, IRS identified a substantial number of errors related to the rate reduction credit on returns prepared both by taxpayers and paid tax return preparers. As shown in table 3, IRS had identified over 7 million individual returns with rate reduction credit errors as of May 31, 2002, which represented 57.3 percent of returns with errors and 6.5 percent of total returns processed at that time. 
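The error figures from table 3 also imply rough totals for the filing season; a back-of-envelope sketch (treating “over 7 million” as 7 million, so the implied totals are approximate and not figures stated in the report):

```python
# Back-of-envelope totals implied by the rate reduction credit error figures.
credit_error_returns = 7.0e6    # "over 7 million" returns with credit errors
share_of_error_returns = 0.573  # 57.3 percent of all returns with errors
share_of_all_returns = 0.065    # 6.5 percent of total returns processed

implied_error_returns = credit_error_returns / share_of_error_returns
implied_total_returns = credit_error_returns / share_of_all_returns

print(f"{implied_error_returns / 1e6:.1f} million returns with errors")  # ~12.2 million
print(f"{implied_total_returns / 1e6:.0f} million returns processed")    # ~108 million
```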
Taxpayers and return preparers made various types of errors related to the rate reduction credit during the 2002 tax filing season. Over 4.4 million taxpayers who were entitled to a credit failed to claim the credit on their tax year 2001 return. Almost 1.8 million taxpayers who had received the maximum advance tax refund in 2001 and thus were not entitled to a credit claimed the amount of their advance refund as a credit on their 2001 return. Over 800,000 taxpayers who were entitled to and claimed a credit incorrectly computed the amount to which they were entitled. Once IRS recognized that taxpayers and return preparers were having problems related to the rate reduction credit, it took immediate action. For example, as early as January 23, 2002, IRS posted information to its Web site and issued news releases informing the public that many early tax returns were being filed with rate reduction credit errors. In addition, IRS provided clarifying information to preparers who file returns electronically and, around the beginning of February, began rejecting electronic submissions that involved certain types of errors related to the credit. By rejecting these submissions, IRS required the taxpayer or return preparer to correct the error before IRS would accept the electronic return for processing. This is consistent with IRS’s traditional practice of rejecting electronic submissions that contain other errors, such as incorrect SSNs. As of July 1, 2002, IRS had rejected over 300,000 electronic submissions with rate reduction credit errors. Despite IRS’s efforts, the rate at which filed returns included errors related to the rate reduction credit did not drop significantly during the filing season. As of March 15, 2002, 6.8 percent of all returns filed included a rate reduction credit error; as of May 31, 2002, the error rate was 6.5 percent. 
IRS data suggest that demand for telephone assistance related to the rate reduction credit was significant during the 2002 filing season and that the demand negatively affected telephone level of service, especially in mid- to late-February when the greatest number of taxpayers called with questions about this credit. As discussed earlier, the amount of demand for assistance is one of the key factors that can affect level of service, with an increase in demand being associated with a decrease in the level of service because, other factors being held constant, IRS would answer a smaller percentage of the calls. The average time it takes assistors to handle calls is another factor that affects level of service, with a higher average handle time being associated with a decrease in level of service. According to IRS officials, as the filing season progressed, demand for assistance related to the rate reduction credit increased significantly and unexpectedly, causing the level of service to decline. Officials said that taxpayer access to service began declining in early February as taxpayers called in response to notices IRS mailed them because of errors on their returns related to the credit. In that regard, data provided by IRS showed that taxpayers made about 1.5 million calls to IRS's accounts assistance telephone number during the 3 weeks ending March 2, 2002, compared with about 0.8 million such calls during the same 3-week period in 2001. Because these account-type calls take longer to handle, on average, than other types of calls, this increase in account-related demand increased the average handle time and lowered the level of service. Figure 1 shows that customer service representative (CSR) level of service during the first 6 weeks of the 2002 filing season was significantly better than or about the same as the level of service during the first 6 weeks of the 2001 filing season but was significantly worse during the next 3 weeks (the 3 weeks ending March 2).
In the remaining weeks of the filing season, CSR level of service returned to levels comparable to 2001 performance. The performance dip coincides with other data that indicate there was likely a significant increase in demand related to the rate reduction credit. For example, previous IRS studies have shown a strong relationship between the volume of certain types of notices IRS mails to taxpayers and the demand for telephone assistance. One such notice is the CP-12, which IRS sends to taxpayers notifying them of a math error on a return. The notice gives the taxpayer information about the error and includes one of IRS’s main toll-free telephone numbers for the taxpayer to call for further information. According to IRS data, for the 4 weeks beginning February 3, 2002, IRS mailed over 1.6 million of these notices—about 5 times the number mailed over the same period in 2001. According to IRS officials, the bulk of this increase was due to taxpayer errors in completing the rate reduction credit line of the tax return. Unlike the advance tax refund period in which IRS’s plans for handling the increased telephone assistance demand included both automation and increased staffing, IRS’s plans for handling the additional demand in the 2002 tax filing season focused on automated assistance. IRS implemented an automated interactive telephone application that provided callers with the amount of their advance refund based on the SSN and personal identification data the caller input. According to IRS officials, when IRS began planning for staffing for the 2002 filing season—around June 2001— the potential effect of the rate reduction credit on the filing season was unknown. Officials said that although IRS planned for some increased staff time to handle potential demand to be generated by the rate reduction credit, the plans did not anticipate the level of demand that was caused by the error notices. 
TIGTA identified and IRS corrected two problems related to the rate reduction credit that could have resulted in (1) taxpayers receiving a rate reduction credit to which they were not entitled or (2) taxpayers receiving erroneous information via IRS’s TeleTax number concerning whether or not they received an advance tax refund. In addition, IRS identified another problem that may have resulted in as many as 15,000 taxpayers receiving as much as $4.5 million in erroneous refunds due to rate reduction credits to which they were not entitled. One problem identified by TIGTA involved the lack of advance tax refund data in IRS’s National Account Profile (NAP) for certain taxpayers. The taxpayers involved were those who had filed joint returns for tax year 2000 with a deceased spouse on which the deceased spouse was the primary taxpayer and the surviving spouse was the secondary taxpayer. In those cases, because of a computer programming oversight, no advance refund amount was placed on the surviving spouse’s NAP account. Thus, if the surviving spouse filed a tax year 2001 return and correctly claimed no rate reduction credit, IRS’s records would have erroneously indicated that the taxpayer had not received an advance refund, and IRS would have adjusted the taxpayer’s return to include a credit. According to TIGTA, IRS corrected this problem by January 11, 2002, thus preventing about 217,000 taxpayers from receiving up to $50 million in rate reduction credits to which they were not entitled. Another problem identified by TIGTA, which it attributed to a misinterpretation of programming requirements, involved IRS’s failure to add information in the NAP for taxpayers who did not receive an advance tax refund. As a result, if these taxpayers called IRS’s automated telephone system, they would have been told that no information was available regarding their advance tax refund, rather than being told that they did not receive an advance tax refund.
According to TIGTA, it notified IRS of this problem, which could have affected as many as 35 million taxpayers, on January 8, 2002, and IRS made the necessary corrections by January 15. During the filing season, IRS’s computer system was generating rate reduction credits for some taxpayers who had already received the maximum advance tax refund. This occurred when a taxpayer received an advance tax refund based on a tax year 2000 return on which the taxpayer used a taxpayer identification number other than an SSN, such as an Individual Taxpayer Identification Number (ITIN), and subsequently filed a tax year 2001 return using an SSN. Because IRS’s records showed no advance tax refund associated with the SSN the taxpayer used on the 2001 return, IRS’s computer system indicated that the taxpayer was entitled to a rate reduction credit on the 2001 return but had failed to claim it. Thus, the computer automatically generated a rate reduction credit for the taxpayer. According to IRS officials, this problem was brought to IRS’s attention by IRS field staff. They estimated that it would affect no more than about 15,000 taxpayers, who in a worst-case scenario may have received an additional $300 credit, for a total of $4.5 million in potentially erroneous rate reduction credits. According to the officials, IRS will not attempt to recover any erroneous payments resulting from this problem because it would not be cost-effective to do so. The officials noted, among other things, that since IRS, not the taxpayers, was at fault, IRS would have to attempt to recover the erroneous payments through civil court rather than tax court. According to IRS officials, an additional rate reduction credit will be allowed to as many as 2.5 million taxpayers. When IRS originally reviewed these taxpayers’ returns, it was determined that the taxpayers had underclaimed the amount of their rate reduction credits.
However, IRS did not correct the errors because the amount underclaimed was less than a specified amount. It is IRS’s policy to not make a tax change based on correction of a credit either in favor of the taxpayer or the government if the tax change is less than a specified amount. According to IRS, this policy applies to all credits, not just the rate reduction credit, and was implemented both for budgetary reasons and to ensure timely return processing. IRS had originally decided to follow this policy with respect to the rate reduction credit. However, during its review of the 2002 filing season, TIGTA pointed out that the policy seemed inequitable because the specified amount below which IRS did not issue an advance tax refund was substantially less than the specified amount below which IRS did not correct an underclaimed rate reduction credit. As a result, some taxpayers could receive a small advance tax refund, while other taxpayers could not receive a similar small refund based on an underclaimed rate reduction credit on their 2001 tax return. IRS subsequently stated that it would allow the underclaimed credit and any interest to the affected taxpayers by the end of calendar year 2002. On the basis of our work and TIGTA’s, we had some observations that IRS may find useful if faced with similar challenges in the future. For example, several problems related to the advance tax refunds and the rate reduction credit were avoided because TIGTA identified and quickly notified IRS of programming errors. While this may indicate deficiencies in IRS’s process for testing program changes, the work needed to make that determination was beyond the scope of this report. What TIGTA’s findings do indicate is the value of enlisting the assistance of an outside party, such as TIGTA, to review the programming for a major unplanned effort that has to be implemented in a short period of time. 
This would provide an independent review with an eye toward identifying any potential problems that could either negatively impact taxpayers or create unnecessary work for IRS. H&R Block, for example, observed: “This year’s main problem is the Rate Reduction Credit, where multiple terms (‘rebates,’ ‘advance payments,’ ‘refund advances,’ ‘rate reduction credits’) and instructions that confused taxpayers and even tax preparers resulted in many rejected [returns] and over three million errors.” Besides the confusing terminology mentioned by H&R Block, we identified three aspects of IRS’s instructions that, in retrospect, could have been clearer. First, because taxpayers might use their prior year’s tax return as a guide in preparing their current year’s return, it is important that changes to the tax form, such as the new line added for the rate reduction credit, be clearly highlighted. Although IRS mentioned the new credit on the front page of the tax form instructions, it was done in a way that we believe could easily be missed (see app. III for a copy of the front page with our highlighting of the language in question). What IRS emphasized was the fact that tax rates had changed, which is not information that taxpayers need in completing their returns, since the effect of that change happens automatically when they use the tax table or tax rate schedules to compute their taxes. What was more important to emphasize was that the tax return included a new credit for which they might be eligible. Second, the tax form instructions indicate that if a taxpayer received “before offset” an advance tax refund of either $600, $500, or $300 based on his or her filing status, the taxpayer would not be entitled to a rate reduction credit. There is no further explanation in the instructions of the meaning of “before offset,” a term that may not have been clear to all taxpayers. The instructions might have been clearer if IRS had included the explanation provided on its Web site: IRS explains there that if taxpayers had their advance tax refund “offset” to pay back taxes, other government debts, or past due child support, they could not claim the rate reduction credit for the amount that was offset. Because many taxpayers may not have had access to the Web site or known of the extent of rate reduction credit information available, it would have been advantageous to include a similar explanation in the instructions. Third, to its credit, IRS, expecting that many taxpayers would not remember the amount of their advance tax refund, established an interactive application as part of TeleTax that taxpayers could use to find out the amount of their advance. However, although the instructions for Forms 1040, 1040A, and 1040EZ included a reference to the interactive application, the reference was in a general section of the instructions dealing with TeleTax rather than in the section of the instructions dealing with the rate reduction credit (see app. III for copies of the relevant pages in the Form 1040 instructions with our highlighting of the relevant data on those pages). In addition to clearer instructions, an IRS official indicated that IRS could have done a better job of getting information on the rate reduction credit to the public before the filing season. In that regard, because of cost considerations, IRS had decided not to send notices reminding taxpayers of the amount of the advance tax refund they received. Although sending such a notice would have resulted in additional cost, it may have lessened the confusion experienced by taxpayers when it came time to determine whether or not they qualified for the rate reduction credit and subsequently may have reduced the costs IRS incurred to identify and correct rate reduction credit errors, send error notices to the affected taxpayers, and provide related telephone assistance.
A final issue with respect to guidance for the future is IRS’s lack of plans to do a “lessons learned” review of the advance tax refund initiative. According to IRS officials, other than a critique of the filing season, which is done annually and will no doubt include a review of problems that taxpayers experienced with the rate reduction credit, IRS has no plans to conduct a comprehensive analysis of the advance tax refund initiative. Both the Government Performance and Results Act of 1993 and IRS guidance stress that analysis is a key part of understanding performance and identifying improvement options. Such an analysis would benefit IRS management if it became necessary to deal with similar challenges in the future. IRS and FMS should be commended for the extensive amount of work they accomplished in a short period of time to issue 86 million advance tax refund checks. While there are bound to be implementation issues in any effort of this magnitude, IRS responded to problems quickly so that only a small percentage of advance refund checks were affected. Although taxpayers experienced problems in reaching IRS by telephone, IRS probably did as well as it could considering the increased demand for assistance, the number of staff available, and the fact that the advance tax refund was a one-time event that made it unrealistic to hire and train additional staff. Although this report discusses various implementation issues related to the advance tax refund program and the rate reduction credit, we are not recommending any specific corrective actions related to those issues. Because our review focused on the advance tax refund program, we have no basis for knowing whether the identified issues were unique to that program or more widespread and, not knowing that, we have no basis for recommending specific changes to IRS’s policies or procedures.
The various observations we identified in the prior section of this report should be useful to IRS if faced with similar challenges in the future. However, IRS staff who were involved in planning and implementing the advance tax refund program, including those aspects related to the rate reduction credit, are in an even better position than either us or TIGTA to assess IRS’s performance and suggest alternative approaches for handling the challenges involved in such an effort. That kind of in-house assessment, while including the results of work done by us and TIGTA, could delve into details that we did not, such as IRS’s testing of programming changes and its decision to not send notices to taxpayers reminding them of the amount of advance tax refund they received. To help identify the full range of challenges IRS faced with respect to the advance tax refunds and rate reduction credit and any changes in procedures or processes that might be warranted if it faced similar challenges in the future, we recommend that the Commissioner of Internal Revenue convene a study group to assess IRS’s performance with respect to the advance tax refunds and rate reduction credit. That assessment should include the results of work done by us and TIGTA, including the various observations identified in this report. To ensure that managers faced with similar challenges in the future have the benefit of this assessment, the results should be thoroughly documented. We requested comments on a draft of this report from IRS and FMS. We obtained written comments in a July 26, 2002, letter from the Commissioner of Internal Revenue (see app. IV) and in a July 24, 2002, letter from the Commissioner of FMS (see app. V). 
The Commissioner of Internal Revenue said that the report “is an accurate and balanced reflection of our efforts in administering the new law” and that the “documentation provided by your report, and a series of reports by TIGTA, are a strong foundation for an assessment of lessons learned.” The Commissioner said that he asked the Commissioner of IRS's Wage and Investment Division to conduct a brief review to identify lessons learned and that “his synopsis will supplement the documentation already available and serve as a historic reference for future guidance.” Such a review and documentation would be responsive to our recommendation. The Commissioner of FMS expressed the belief that the advance tax refund program was a model initiative that demonstrated that federal agencies are capable of implementing major programs on short notice efficiently and cost-effectively. We are sending copies of this report to the Chairman and Ranking Minority Member of the House Committee on Ways and Means. We are also sending copies to the Secretary of the Treasury; the Commissioner of Internal Revenue; the Commissioner of FMS; the Director, Office of Management and Budget; and other interested parties. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. This report was prepared under the direction of David J. Attianese, Assistant Director. If you have any questions regarding this report, please contact Mr. Attianese or me at (202) 512-9110. Key contributors to this report were Robert C. McKay and Ronald W. Jones. Once the advance tax refunds were issued between July and December 2001, there were some problems identified involving duplicate, altered, and counterfeit checks. However, in light of the overall number of advance refund checks issued—about 86 million—these problems were relatively minor. 
One problem identified within the first 2 weeks of the advance tax refund payment period and promptly corrected involved duplicate checks sent to taxpayers by one of the three Defense Finance and Accounting Service (DFAS) centers that assisted the Financial Management Service (FMS) in issuing the checks. This problem surfaced because two taxpayers who had received duplicate checks tried to cash the second check and a third taxpayer notified the Internal Revenue Service (IRS) about receiving duplicate checks. Once the problem was identified, FMS decided to no longer use the particular DFAS center from which the duplicate checks had emanated. As of May 2002, FMS had identified 27 instances of such duplicate checks. According to an FMS official, of the 27 taxpayers who received duplicate checks, 24 taxpayers have either fully repaid the extra payment or have returned the duplicate check. As of May 1, 2002, 1 taxpayer had partially repaid the extra check, and FMS was in the process of recovering the duplicate payments from the other 2 taxpayers. Another problem related to the advance tax refunds involved either altered or counterfeit checks. FMS’s Check Reconciliation Branch detects altered and counterfeit checks during routine reconciliation of agency payment records with bank records. Other altered or counterfeit checks may be identified by banks when the checks are presented to be cashed. According to FMS, as of March 1, 2002, there were 165 advance refund checks that were found to be either altered or counterfeit as follows: 47 altered checks with a combined value of $138,405 and 118 counterfeit checks with a combined value of $75,640. According to FMS, 162 of the 165 altered and counterfeit checks were referred to the United States Secret Service for investigation. We do not know the results of any such investigations. 
The Taxpayer Advocate Service (TAS), which helps taxpayers solve problems with the Internal Revenue Service (IRS) and recommends changes that will prevent such problems, was involved in the advance tax refund program from its inception. Originally, when asked to comment on the notices that were to go out to taxpayers, TAS offered suggestions to improve notice clarity. Once the notices and refund checks were sent out, TAS handled telephone calls and correspondence from taxpayers and their congressional representatives concerning the refunds and the related rate reduction credit on the 2001 tax return. In some instances, TAS opened cases to address the taxpayers’ concerns. However, before sending out the initial advance tax refund checks, IRS decided, after an inquiry from TAS, that no checks would be sent to taxpayers ahead of their scheduled delivery date, even for cases involving potential hardship. Although TAS had no nationwide data concerning the number of taxpayer calls related to the advance tax refunds, such data were tracked by eight local TAS offices, as well as by TAS offices in 5 of 10 IRS campuses. Through April 2002, these offices collectively had received over 4,200 calls from taxpayers related to the advance tax refunds. According to TAS officials, the most frequent questions asked by taxpayers who made these calls were as follows: When will I receive the advance payment? Am I eligible for an advance payment? What amount of advance payment will I receive? Why am I not going to receive an advance payment? Was this an advance payment or a rebate of tax already paid? Taxpayers also asked various questions concerning offsets for back taxes, child support, innocent spouse claims, and similar matters. According to TAS staff at one campus, some taxpayers told them that they had called TAS with these types of questions because they had difficulty reaching IRS’s regular telephone assistors.
According to TAS national office staff, as of May 2002, TAS had opened a total of 3,246 cases nationwide related to the advance tax refunds and the subsequent rate reduction credit, about 500 of which had been opened since the start of calendar year 2002. Of the total cases opened, about 2,170 (67 percent) involved congressional contacts, which automatically result in the opening of a TAS case. Other cases that were opened involved either an injured spouse or a potential hardship situation. In reviewing TAS cases at one of the IRS campuses, we found that taxpayers who had either no taxable income in 2000 or not enough taxable income to make them eligible for the full amount of the advance tax refund authorized by law were questioning why they had received no advance refund or less than the full amount. Other taxpayers had contacted TAS to provide a change of address or to check on the status of their advance refund check.
The Economic Growth and Tax Relief Reconciliation Act of 2001 created a new 10-percent tax rate for individual taxpayers, applicable to a portion of income previously taxed at 15 percent. To stimulate the economy quickly, the act provided for the resulting tax savings to be delivered as an advance refund in 2001. Between July and December 2001, the Internal Revenue Service (IRS), working with the Department of the Treasury's Financial Management Service (FMS), mailed out 86 million advance refund checks totaling $36.4 billion. IRS spent $104 million to run the advance tax refund program, and FMS spent $34 million to issue checks; IRS expects to spend another $12 million during fiscal year 2002. Overall, GAO found that IRS and FMS did a good job carrying out the program. However, the advance refunds and related rate reduction credit led to increased errors during the 2002 tax-filing season because of taxpayer confusion about the credit. In GAO's view, an independent review of the computer programming used to carry out a major effort such as the advance tax refund program might help avoid future problems. At the same time, clearer tax return instructions might reduce the number of returns filed in error.
Security force assistance—the effort to develop capable host nation security forces—is a key component of U.S. efforts to create sustainable security in both Iraq and Afghanistan. The goal of this mission is to build partner capability and improve the security situation such that, over time, U.S. forces and partnered foreign security forces can collectively set the conditions to defeat common threats and ultimately achieve strategic success. The Army's field manual on security force assistance recognizes that this is not a new mission but also states that in the current operational environment, security force assistance is no longer an additional duty but is now a core competency of the Army. It is part of the full spectrum of military operations, meaning it can be conducted across the spectrum of conflict, from stable peace to general war. The field manual also notes that security force assistance can include both advising and partnering to develop competent and capable foreign security forces.

- Advising. Advising is the primary type of security force assistance and is the use of influence to teach, coach, and advise while working by, with, and through the foreign security force. Advising helps foreign security forces conduct independent decision making and operations, and advisors may also provide foreign security forces with direct access to joint and multinational capabilities, such as air support, artillery, medical evacuation, and intelligence.

- Partnering. In partnering, the United States attaches units to host nation units at various levels in order to leverage the strengths of both U.S. and foreign security forces. Partnered units should establish combined cells for intelligence, operations, planning, and sustainment. While effective coordination is always required and initial efforts may require completely fused efforts, foreign security forces should eventually build the capability and capacity to conduct all efforts autonomously. 
Advising and partnering, while complementary, are distinct activities that can be performed simultaneously, sequentially, or in combination. U.S. units, such as Army BCTs, are partnering with the Iraqi and Afghan security forces. Examples include U.S. battalions conducting combined route clearance missions or manning combined checkpoints with host nation military units in their area of operations. The Army's field manual notes that as a foreign security force's capabilities mature, the echelon and degree of partnering decrease. For example, a U.S. Army battalion may initially partner with a foreign security force battalion, but as the foreign security force matures, the U.S. battalion may partner at a higher echelon, such as with a foreign security force division, while its subordinate companies partner with the foreign security force battalions. Like partnering, advising also can occur at various echelons of the foreign security force, with the echelon of focus changing as foreign security forces mature. However, brigades have only recently assumed the advising mission in Iraq and Afghanistan. Specifically, prior to 2009 in Iraq and 2010 in Afghanistan, the advising mission was conducted primarily by transition teams. These transition teams did not exist as units in any of the services' force structures and were instead composed of company- and field-grade officers and senior non-commissioned officers who were centrally identified and individually selected based on rank and specialty. For the Army alone, the number of individually sourced advisors (those identified by Army Human Resources Command and assigned to transition teams) required to fill the transition teams in Iraq and Afghanistan at any one time totaled about 8,000 personnel. 
As we have previously reported in GAO, Iraq and Afghanistan: Availability of Forces, Equipment, and Infrastructure Should Be Considered in Developing U.S. Strategy and Plans, GAO-09-380T (Washington, D.C.: February 12, 2009), the demand for these leaders created challenges for the services because the leaders were generally pulled from other units or commands, which were then left to perform their missions while undermanned. In addition, the transition teams operated externally to the major combat units in their area of operations and reported to a different command structure, which led to a lack of unity of command that complicated coordination and communication between the transition teams and the combat units. The Army developed the concept of augmenting BCTs with specialized personnel to execute the advising mission, in part, as a means of alleviating these challenges. The replacement of transition teams with augmented BCTs was intended to mitigate strain on the Army by reducing the number of personnel who would have to be individually sourced by the Army Human Resources Command for the security force assistance advising mission, since the advisors would be able to leverage the capabilities of the existing BCTs for support functions, thus requiring fewer specially sourced individuals for the mission. Augmented BCTs also were intended to improve command and control over the mission by placing both the mission and personnel assigned to the mission under a single brigade commander. Army guidance on the augmented BCT concept addresses several key aspects of the concept, including the task organization of advisor teams, command and control of advisor personnel, the capabilities that should be considered when defining augment requirements, and the need to support advisor personnel with resources from the BCT.

- Task organization: The BCT commander organizes the advisor augment personnel into advisor teams based on advising mission requirements in his area of operations. These advisor teams may be formed from organic resources from the brigade, external augmentation, or a combination of these. 
- Command and control: The BCT commander has command and control authority over the advisor personnel and advisor teams. The Army handbook notes the advantage of the advisor teams being under the command of the augmented BCT commander, with this unity of command resulting in a unity of effort and purpose.

- Augmentation requirements: The field manual provides a basic conceptual design for augmentation, which can include personnel capabilities such as combat advisors, military police, or legal personnel. According to the field manual, the theater commander is to determine the precise mix of forces and augment capabilities—including the specific numbers and types of advisors—required for augmented BCTs in his area of operations, based upon the operational environment and mission requirements. As advising tasks change in response to the evolving needs of the host nation security force, the theater commander can re-tailor the augmentation (i.e., the specific numbers and types of advisors) provided to successive BCTs, accordingly.

- BCT support of advisors: The field manual notes that the advisor teams may need resources from the brigade for support functions, such as specialized personnel, equipment, transportation, and security. This would allow the advisor teams to stay focused on advising. The handbook acknowledges, though, that the brigades may have other mission priorities in addition to security force assistance. Although the augmented BCTs are specially resourced with advisor personnel to advise, assist, and mentor the Iraqi and Afghan security forces, the brigades still must balance the security force assistance advising mission with other brigade missions.

The security force assistance field manual also addresses the training that should be received by soldiers assigned to security force assistance missions. 
The Army has tasked the 162nd Infantry Training Brigade to provide advisor augment personnel with specialized advisor training on topics such as language and culture, host nation government and security forces, cross-cultural communication, and rapport building as part of their pre-deployment training. The program also includes leadership engagement scenarios, in which advisor team leaders engage with role players in simulated exchanges, and opportunities for the advisors and brigade and battalion leadership to conduct combined planning exercises with simulated host nation security force leadership. The final stage of pre-deployment training for the augmented BCT is the mission rehearsal exercise, through which the advisor personnel and the BCT are expected to exercise the augmented BCT concept as an integrated unit. In addition to including combat and advising mission exercises, the exercise scenario is intended to give the BCT and its advisors the opportunity to create advisor teams and establish the key command and control and support structures necessary for executing the mission in theater. The Army has deployed augmented BCTs in response to theater commanders' requests; however, these units have faced challenges because theater commanders' guidance did not always clearly define how these units were to perform key aspects of the augmented BCT concept, and their requests did not include some requirements needed to support the advising mission, given the brigades' resource limitations. As a result, brigade commanders have faced challenges determining how to prioritize their resources when supporting multiple missions in addition to the advising mission and providing specialized personnel, equipment, transportation, and security for the advisors. In addition, augmented BCTs and their assigned advisor personnel have sometimes lacked the unity of command envisioned under the Army's augmented BCT concept. In 2009 and 2010, U.S. 
Central Command, on behalf of theater commanders in Iraq and Afghanistan, submitted requests for augmented BCTs for ongoing operations. In May 2009, the theater commander for Iraq requested forces for the augmentation of Iraq-bound BCTs with 48 field grade officers specially trained as advisors to execute the security force assistance advising mission. Likewise, in March 2010, the theater commander for Afghanistan submitted a request for forces under which each BCT would be augmented with a package of 48 advisor personnel—24 field grade officers and 24 non-commissioned officers. Both requests envisioned that the 48 advisor personnel would be organized into 24 two-man advisor teams and that the teams would receive all necessary support—including additional specialized personnel, equipment, and transportation and security support—from the brigades. The Army has deployed augmented brigades to Iraq and Afghanistan since August 2009 and June 2010, respectively, in accordance with theater commanders' requests. As of June 2011, there were six augmented BCTs operating in Iraq and nine in Afghanistan. The Army intends for all future BCTs deploying to Afghanistan to be augmented BCTs. Augmented BCTs have faced challenges allocating resources across missions and providing support to enable the advising mission because theater commanders did not always set clear priorities. Specifically, augmented BCTs have sometimes had difficulty allocating resources between the advising mission and other missions, such as counterinsurgency operations; advisor teams have sometimes lacked the appropriate specialized personnel and equipment to conduct the advising mission; and advisor teams have not always received consistent transportation and security support from augmented BCTs to enable the advising mission. Each of these challenges is discussed below. 
Army guidance for security force assistance recognizes that augmented BCT commanders must weigh the extent of threats against resource limitations in order to set priorities, including determining the degree to which BCT resources can be allocated to support the advising mission. For example, augmented BCTs in Iraq and Afghanistan must balance their requirements to support the advising mission with other operational requirements, such as counterinsurgency operations, partnering with host nation security forces, or performing missions such as manning checkpoints. Army officials told us that, in the absence of other guidance from theater commanders, augmented BCT commanders in kinetic combat environments, such as Afghanistan, naturally prioritize the combat mission and direct their resources accordingly. According to Army officials, the augmented BCT concept was initially intended to be introduced to an operating environment after major combat operations were concluded, which would make more of the resources of the augmented BCTs available to support the advising mission. When augmented BCTs first deployed to Iraq in 2009, the Iraqi Security Forces were assuming greater responsibility for combat operations, and Iraqi forces have had the primary responsibility for security since 2010. Iraq theater command officials told us that advising the Iraqi Security Forces is the primary effort of U.S. military forces in Iraq, including augmented BCTs. In contrast, U.S. military forces in Afghanistan are still conducting counterinsurgency operations in a combat environment, and the theater commander in Afghanistan has not specified the priority of the advising mission for the augmented BCTs relative to counterinsurgency operations. 
The Afghanistan theater commander's request for augmented BCTs noted that these BCTs would be responsible for both advising and counterinsurgency operations, but provided no guidance as to how the brigades should balance resources and make trade-offs between the two different mission sets. Augmented BCTs in both theaters, though, had challenges balancing resources between the advising mission and other missions. The theater commanders' requests for both Iraq and Afghanistan envisioned the BCTs executing the advising mission by organizing their advisors into 24 two-man teams drawing additional support from the BCT. According to officials from several of these augmented BCTs, though, the brigades do not have enough organic resources to support 24 dispersed teams while still preserving enough of their resources to conduct other missions. For instance, officials from one augmented Stryker brigade (Stryker brigades are significantly larger than other brigades) told us that the brigade could only organize into a maximum of 12 to 15 dispersed advisor teams using a company as the basis for support while still addressing other mission requirements. Given their resource limitations and the need to carry out other missions, augmented BCT officials told us that they organized their advisors into a smaller number of teams, often consisting of more than two advisors. For example:

- In Iraq, one augmented BCT that deployed with 43 advisors organized them into five advisor teams, while another augmented BCT organized its 46 advisors into eight teams.

- In Afghanistan, one augmented BCT organized the 44 advisors that it deployed with into 15 teams, while another augmented BCT organized its 48 advisors into nine advisor teams.

According to some of these officials, organizing the advisors in this manner was intended to enable the brigade to better support the advising mission while still retaining the capacity to meet other mission requirements. 
However, we found that some of the augmented BCTs that we visited faced challenges supporting their advisor teams, regardless of the number of teams they had. The Army's augmented BCT concept and the theater commanders' augmented BCT requests assumed that any specialty personnel required by the advisor teams—such as logisticians and intelligence personnel—would be pulled from the brigade. The theater commanders' requests for advisors therefore do not include requirements for the advisors to have any specialized capabilities, despite the fact that advisors frequently advise Iraqi and Afghan security forces in specialized areas. In contrast, the transition teams were often composed of personnel with specialist capabilities in areas such as intelligence, logistics, or communications. According to the security force assistance field manual, the composition of the advisor teams is subject to objectives (e.g., the type of training to be provided) and conditions (e.g., the security environment), and BCT commanders tailor advisor teams to match those objectives and conditions. For example, the BCT commander, in coordination with the advisor personnel, could identify specialized personnel from the BCT who would be assigned to support the advisors. Because such personnel are also in high demand within the brigade, though, the brigade is expected to make trade-offs and prioritize its missions, including the advising mission. However, in the absence of advisor teams receiving specialized personnel from the brigade or the advisors themselves being specialists, some advisor teams lacked specialized capabilities. For example, some advisor teams told us that they were limited in their ability to advise in certain specialty areas and that advisor personnel may be advising Iraqi and Afghan leadership in functional areas where they have little or no experience. 
In one case, a field grade officer advisor in Iraq who had no prior intelligence experience was tasked with helping the Iraqis set up an intelligence fusion center. Since advisor teams are not regularly receiving specialized personnel from the brigades, Army and augmented BCT officials told us that including advisors with specialty capabilities as part of the augmented BCT advisor requirements would be very beneficial for the advising mission. The Army has gathered feedback from nine augmented BCT commanders and the 162nd Infantry Training Brigade, among others, that identified the need for logisticians to be a part of the advisor packages. The Army’s feedback also identified the need for military police, military intelligence, and other specialties in augmentation packages. In order to mitigate the challenges that the augmented BCTs face with shortages of specialist personnel, the Army currently has an effort underway to examine the advisor requirements and determine the need to tailor them to include more specialized capabilities. The results of this effort have not been finalized, though, so its impact cannot yet be determined. The theater commanders’ requests for the augmented BCTs assumed that the advisors would get all of their equipment from the BCTs. As was the case with specialized personnel, the theater commanders’ requests did not establish specific advisor equipment requirements for the Army to fill, with the exception of some individual weapons and other small items. As a result, some augmented BCTs experienced challenges providing personal and operational equipment to the advisors both prior to and after deploying to theater since all advisor equipment had to come from the brigades’ existing stocks. 
For example, augmented BCT and advisor officials told us that, prior to deploying, the advisors joining the brigades expected to have equipment such as personal computers with both unclassified and classified capabilities as well as office space to work from, but that some of the brigades had difficulties providing these things without limiting the access of others in the brigade. Theater command and augmented BCT officials told us that, once in theater, advisors sometimes lacked personal equipment, such as navigation equipment, personnel locators, and cell phones. Additionally, augmented BCTs sometimes lacked the operational equipment necessary to support advisor teams at dispersed locations. Iraq theater command officials told us that some augmented BCTs had submitted requests for additional communications equipment to support advisor teams at dispersed locations because the brigades did not deploy with the number of communications systems necessary to support all of the advisor teams that needed to operate separately from the brigade. In instances where additional operational equipment for advisors was not available, equipment shortages for advisors could impact the way that brigades organized for the advising mission. For example, officials from one augmented BCT in Iraq told us that the brigade only had seven command and control communications nodes, which limited the number of dispersed locations where the brigade could operate. While the brigade mitigated that limitation as much as possible by co-locating units and advisor teams, the shortage of key communications equipment, in part, limited the brigade’s ability to support a larger number of advisor teams. The theater commanders’ requests for the augmented BCTs envisioned that the advisor teams would get their required support from the brigades to which they were attached, but did not define the minimum level of support that the brigades were to provide to the advisor teams. 
Augmented BCT officials and advisors told us that the augmented BCTs are responsible for making determinations regarding the allocation of support to the advisor teams, balancing those needs against the needs of other missions. According to augmented BCT officials, advisor teams often operate away from larger combat units or established bases and could therefore require up to a platoon or company of soldiers for support. In the absence of guidance on the level of support that the augmented BCTs were to provide, the level of support that the augmented BCTs we visited provided to their advisor teams varied, depending on the operating environment and the priorities of the BCT commander. For example:

- Officials from an augmented BCT that had redeployed from Iraq told us that, once in theater, the BCT received a requirement to secure a number of joint checkpoints with the Iraqi Security Forces, which limited its ability to provide transportation and security assets to the number of advisor teams that it had initially planned to support.

- Advisors from an augmented BCT in Afghanistan told us that the advising mission was a low priority for the brigade and that the brigade and its battalions had too many other requirements to provide support to the advisor teams. Instead, the advisor teams relied on nondedicated support from a separate military police company operating in the area.

- Advisors from an augmented BCT in Afghanistan told us that there was no official allocation of support resources within the brigade and, in some cases, the support was haphazard and came from other units outside the brigade.

Transportation and security support is considered to be critical for the augmented BCT advisors' ability to execute the advising mission. 
Some advisors told us that the level of dedicated transportation and security support they received from the brigade directly impacted their ability to meet with host nation security forces in order to build relationships and advise the host nation security forces. Augmented BCTs and their advisor personnel sometimes lacked the unity of command envisioned under the Army’s augmented BCT concept because theater commanders did not always provide clear guidance on command and control structures for the advisors. As a result, in some cases, advisors were reassigned to be under the control of a division or a brigade other than the one that they trained and deployed with. According to Army guidance on security force assistance, advisor teams require a clearly defined and structured chain of command under which to operate, which alleviates confusion regarding who tasks or monitors the teams’ progress and ensures that advisor teams are supported. The Army augmented BCT concept envisions the advisor teams being under the command of the augmented BCT commander, with this unity of command facilitating the integration of all aspects of the augmented BCT mission. This was intended to address a challenge with the prior transition teams, which operated independently from major combat units and were overseen by higher headquarters at the division or theater level. Iraq theater command, Army, and augmented BCT officials told us that the unity of command is one of the primary benefits of the augmented BCT concept. The theater commander’s request for augmented BCTs for Iraq included direction on the intended command and control structure of the advisors, but the request for augmented BCTs for Afghanistan did not address this topic. 
Although the operational commander on the ground may tailor the force as deemed necessary to meet mission requirements—including changing command and control structures—the successful implementation of the augmented BCT concept hinges significantly on leveraging the resources of the BCT to support the advisors and synchronizing the advise and assist mission as part of the overall mission of the BCT. In addition, augmented BCTs we met with in both Iraq and Afghanistan had planned and trained for their advising mission consistent with the intention that advisors will act as a synchronized force with established support and command and control relationships and with the advisor teams being a part of the BCT. For example, advisors and officials at the 162nd Infantry Training Brigade told us that augmented BCT and advisor training focuses on the advisor role as being part of the BCT. Augmented BCT officials also told us that their final mission rehearsal exercises typically included scenarios that allowed the BCT, including advisors, to exercise their support and command and control relationships. Absent guidance from theater commanders on advisor command and control, we found several instances, particularly in Afghanistan, where advisor personnel were diverted away from the augmented BCT with which they had deployed. In such instances, division commanders assumed control of the advisor teams and managed them as a division resource, similar to how the prior transition teams were managed. Those advisor teams were sometimes tasked for other advising missions not linked to the augmented BCT to which they were initially attached, or for other assignments, such as serving on division headquarters staff. For example, in the operating area of one division in Afghanistan:

- The division commander assumed control of all 48 advisors from a National Guard augmented BCT and created three division-level teams, each focused on a different area of the security force assistance mission. That National Guard BCT was then assigned advisor teams from another augmented BCT, and the National Guard also provided additional field grade officers to allow the BCT to meet advising requirements in its area of operations, since it had lost its original advisor personnel.

- The division commander tasked a five-man advisor team from one of the augmented BCTs to mentor the brigade of a North Atlantic Treaty Organization partner and tasked some individual advisor personnel to serve as liaisons to the division.

Changes to the established command relationships between the brigades and advisors after the units deploy can cause a range of challenges for augmented BCTs and advisors. These include questions about whether and how the advisors' mission continues to fit with their parent augmented BCT; whether and how the advisors will continue to be supported by their parent augmented BCT, particularly if the advisors and the BCT are operating in different areas; and what the chain of command is for the advisors. Advisor requirements for augmented BCTs have decreased the total number of individually sourced advisor personnel required for the advising mission, but have increased Army personnel requirements for field grade officers, who are already in short supply. According to Army officials, as a result of field grade officer shortages, the Army has faced challenges meeting the requirement to provide field grade advisors to the augmented BCTs at least 45 days prior to the brigades' mission rehearsal exercise. Since augmented BCTs have been forming fewer advisor teams than initially intended by theater commanders' requests, augmented BCTs may not need to be sourced with as many total advisor personnel or such large numbers of field grade advisors. 
Moving from transition teams to augmented BCTs to advise the Iraqi and Afghan security forces, driven, in part, by the need to address some of the challenges the Army faced in filling requirements for transition teams, has decreased the total number of advisors required for the advising mission and alleviated the strain on certain ranks, but increased the strain on others. Specifically, the shift to augmented BCTs has:

- Decreased the total number of advisors required for the advising mission because, rather than relying completely on transition teams composed of individually sourced personnel to man the advisor teams, the augmented BCT concept envisions advisor teams led by advisor augments (who are individually sourced) and further manned by pulling additional personnel from the brigade, as needed;

- Alleviated the strain on the Army's pool of company grade officers (e.g., captains) and non-commissioned officers (e.g., sergeants first class) because these ranks were required in greater numbers on the transition teams than on the augmented BCTs; and

- Increased requirements for field grade officer advisors, since the ranks of the advisors required for augmented BCTs are generally higher than the ranks of transition team personnel, particularly in Iraq, where all advisors are field grade officers.

For example, according to Army Human Resources Command data, augmented BCT advisor requirements increased demand for deployable field grade officers by 463 in fiscal year 2010 and by 398 in the first two quarters of fiscal year 2011. Deployable field grade officers were already in short supply prior to the introduction of the augmented BCT requirements. For example, taking into account requirements for augmented BCT advisor personnel, Army Human Resources Command data showed that the Army had shortages of 2,469 majors and 1,297 lieutenant colonels as of June 2011. 
To manage these shortages, the Army has prioritized the units and commands for sourcing personnel such that filling advisor requirements for augmented BCTs is among the highest sourcing priorities. As a result, Army Human Resources Command data showed that, as of October 2010, 97 percent of all advisor requirements for augmented BCTs were ultimately filled. However, the high priority for the augmented BCT advisor requirements, combined with the field grade officer shortages, has, at times, resulted in the understaffing of field grade ranks in other commands and units, such as U.S. Army Europe, Army Training and Doctrine Command, and units in South Korea, among many others. While the Army has been able to fill most requirements for augmented BCT advisor personnel, it has not always been able to provide advisors to the units within specified time frames. Army officials have told us that Army execution orders for augmented BCTs require that advisors join the augmented BCTs at least 45 days prior to the units’ mission rehearsal exercise. Army and augmented BCT officials have told us that early advisor arrival is critical to integrating the advisors into the unit, building advisor teams, and establishing key support and command and control relationships between the advisor teams and the BCT. Similarly, according to Army guidance, building the advisor teams as early as possible facilitates cohesion and trust. Given the shift in how the advising mission is being handled—from stand-alone transition teams operating independently to advisors who are integrated with and reliant on a BCT— these exercises help the augmented BCTs become comfortable with their structure and facilitate their missions once they are in theater. 
However, Army Human Resources Command has had difficulty providing the field grade officer advisors to the units being augmented in accordance with the 45-day time line, both because of shortages of deployable field grade officers and because unit theater arrival and mission rehearsal exercise dates sometimes change for operational reasons, which can shorten the time that the command has to identify personnel who meet the requirements. Many of the augmented BCTs we met with did not receive the total number of advisor personnel that they would deploy with until after the mission rehearsal exercise. For example, one augmented BCT that we visited in Afghanistan told us that, prior to its exercise, it had received only six of its 24 non-commissioned officer advisors and none of its 24 field grade officer advisors, while another augmented BCT we visited in Afghanistan had received only one of the 22 field grade officer advisors that it ultimately deployed with prior to the exercise. In both instances, the units were limited in their ability to organize for and exercise the advising mission because they lacked the field grade officers necessary to lead the advisor teams. While recent Iraq-bound units have not received all of their advisors by the specified report date, the deployed augmented BCTs that we visited in Iraq had received most of their advisors—40 of 43 in one instance and 42 of 46 in the other—prior to their mission rehearsal exercises. Some officials suggested that, given the challenge of providing all the advisors to the augmented BCTs within specified time frames, it would be helpful if at least two or three of the highest-ranking advisors arrived significantly earlier than currently required to help integrate the advisors into the BCT's mission and structure.
For example, officials from some augmented BCTs as well as the 162nd Infantry Training Brigade suggested that the ideal would be for the highest-ranking advisors to arrive at the unit by the time that key brigade leadership planning events begin, such as the brigade’s Leader Training Program. These events typically occur as early as 90 days prior to the final mission rehearsal exercise. That would enable those leaders to represent the advising mission during brigade mission planning and to help mitigate some of the challenges related to integrating advisors, particularly late-arriving advisors, into the brigade. We met with an augmented BCT that received one of its highest-ranking advisors well before the 45-day window and in time for the brigade’s major leadership events. As a result, this advisor was able to integrate into the brigade’s leadership and provide inputs on the advising mission into the brigade’s mission planning. The advisor was also able to set up a structure for the other advisor personnel to integrate into when they arrived, develop the advisor teams, and facilitate the provision of equipment to advisors. Theater requests for the augmented BCTs assumed that (1) each BCT’s 48 advisors would form the base of 24 advising teams, and (2) all of the field grade officer advisors would be team leaders or deputy team leaders. However, as discussed above, augmented BCTs are sometimes operating with a smaller number of advisor teams that are comprised of a larger number of advisors. This could affect the necessary numbers and rank structure of advisor personnel since, with a smaller number of advisor teams being formed, the augmented BCTs may not need to be sourced with as many advisors. Further, since not as many advisors are serving as team chiefs or deputy team chiefs, BCTs may not need such large numbers of field grade officers. 
Army and augmented BCT officials have told us that rank is an important factor for advisors in establishing credibility with the Afghan and Iraqi officers that they are advising. However, with larger advising teams, the higher rank structure may be of less importance as all advisors may not have the leadership roles within the advisor teams that were envisioned when the rank structure requirements were initially established. Further, several augmented BCT officials told us that capable company grade officers, particularly when they are introduced by and lent the weight of the brigade and battalion leadership, can establish the necessary credibility with host nation leaders. Moreover, the augmented BCTs in Afghanistan are executing the advising mission with half as many field grade officers as augmented BCTs in Iraq—the request for augmented BCTs in Iraq required 48 field grade officers, versus 24 field grade officers in the request for augmented BCTs in Afghanistan. Given the identified field grade officer shortages that the Army is facing, re-assessing current requirements for field grade officer advisors is important to ensure that the Army is not being strained unnecessarily. Developing capable Iraqi and Afghan security forces is a key component of the U.S. military effort in Iraq and Afghanistan. Shifting from the use of individual transition teams comprised of advisors that operated somewhat independently to augmenting BCTs with advisor personnel that are an integral part of the BCT is a significant change in the way Army units perform the advising mission. As the Army continues to deploy augmented BCTs and theater commanders gain operational experience with these types of units, some challenges are emerging that suggest further refinements are needed to achieve greater unity of command and other benefits envisioned by the Army in moving to the augmented BCT concept. 
By reassessing needs and clarifying key requirements such as the appropriate number, rank, and capabilities of advisor personnel; the level of resources and support that the BCT should provide; and how the BCT should prioritize and balance demands associated with the advising mission against the demands of other BCT missions, the Army and theater commanders will enhance the ability of the BCTs to more effectively command and support the advisors. In addition, assessing and validating the appropriate composition of the advisor augment will ensure that the Army is providing the right mix of personnel needed for the advising mission. Lastly, integrating advisor personnel into the BCT is an important element of the augmented BCT concept and requires advisor and other BCT personnel to train together. Arranging for key leaders from the advisor augment to arrive in sufficient time to participate in leadership planning events would facilitate integration of the advisors and enable the units to maximize the benefits of the time spent in training. To enhance the ability of the augmented BCTs to support the advising mission and to facilitate the integration of advisor personnel into pre-deployment training, GAO is making the following three recommendations. We recommend that the Secretary of Defense, in consultation with the Secretary of the Army and U.S. Central Command, direct that theater commanders in Iraq and Afghanistan:
• Assess their needs for how advisor teams should be structured and supported and, based on this assessment, ensure that any future requests for augmented BCTs clearly define related requirements, including the number of advisors, ranks of advisors, capabilities of advisors, and equipment for advisors.
• Clearly define, in guidance to divisions and augmented BCTs, the relative priority of the advising mission; the minimum level of transportation and security support to be provided to the advisors; and command and control relationships for augmented BCTs and their advisors, including the level of command that has tasking authority over and support responsibilities for the advisors.
We recommend that the Secretary of the Army revise existing guidance to require that the highest-ranking field grade officer advisors join the augmented BCTs in time to be present for major brigade leadership planning events, such as the Leader Training Program. In written comments on a draft of this report, DOD concurred with our three recommendations. Overall, DOD stated that it believes that the information being sought in GAO's first two recommendations related to more clearly defining requirements for advisors and the advising mission is being provided through established processes. The full text of DOD's written comments is reprinted in appendix II. DOD concurred with our recommendation that the Secretary of Defense, in consultation with the Secretary of the Army and U.S. Central Command, direct that theater commanders in Iraq and Afghanistan assess their needs for how advisor teams should be structured and supported and, based on this assessment, ensure that any future requests for augmented BCTs clearly define related requirements, including the number of advisors, ranks of advisors, capabilities of advisors, and equipment for advisors. In its comments, DOD stated that combatant commanders have provided and will continue to provide detailed requests for the advising mission. DOD stated that the Vice Chief of Staff of the Army has directed that commanders provide an assessment of their needs regarding advisor team structure and support. DOD, therefore, stated that it saw no need for the Secretary of Defense to direct these actions.
In our report, we acknowledge that the Army currently has an effort underway to examine the advisor requirements. As theater commanders revise their requirements to reflect the Army's effort, we would expect that future requests for advising capabilities would more clearly define specific requirements, such as specialized advisor capabilities that are needed. DOD also concurred with our recommendation that the Secretary of Defense, in consultation with the Secretary of the Army and U.S. Central Command, direct that theater commanders in Iraq and Afghanistan clearly define the relative priority of the advising mission, the minimum level of transportation and security support to be provided to the advisors, and command and control relationships for augmented BCTs and their advisors. In its comments, DOD stated that, as presented, our recommendation may be too prescriptive and, in and of itself, impractical to implement. Specifically, DOD stated that our recommendation suggests that the priorities of the vast number of mission requirements under the commander's responsibility are static and can be determined absent any external factors. DOD stated that the recommendation's intent is captured within existing departmental practices. DOD noted that the Department's approach to determining mission priorities is based upon a thorough understanding of its strategic objectives within the area of operations. Based upon this understanding, DOD stated, the commander gives his guidance through mission objectives and the subsequent creation of operational plans. It noted that the commander's ability to employ these plans, and thus identify mission priorities and allocation of resources, remains situation specific and environmentally dependent. DOD further stated that, for similar reasons, the command and control relationships within the BCT are situation dependent and are tailored based upon the commander's requirements.
We agree that DOD has an approach for developing operational plans and that commanders establish mission priorities and allocate resources based on specific situations and operating environments. We also agree that command and control relationships are situation dependent and need to reflect commanders’ requirements. As we state in our report, the Army has worked with theater commanders to define the key characteristics of augmented BCTs while leaving commanders the discretion to tailor the force as needed, and has provided guidance, accordingly. We do not agree, though, that our recommendation is too prescriptive or impractical to implement. Specifically, during our review, we found that in some cases, theater commanders did more clearly define some aspects of the advising mission, while in other cases they did not. In those latter cases, the lack of clarity led to some challenges, including with establishing priorities and command and control relationships. For example, as we state in our report, Iraq theater command officials made it clear that advising the Iraqi Security Forces was the primary mission of U.S. forces there, but the Afghanistan theater command has not established the relative priority for the advising mission. Likewise, we found that the theater commander’s request for augmented BCTs for Iraq included direction on the intended command and control structure of the advisors, but that the request for augmented BCTs for Afghanistan did not address this topic. Clarifying key requirements for augmented BCTs, including how the BCTs should prioritize and balance demands of the advising mission with the demands of the other BCT missions, will enhance the ability of the BCTs to more effectively command and support the advisors. 
DOD concurred with our recommendation that the Secretary of the Army revise existing guidance to require that the highest-ranking field grade officer advisors join the augmented BCTs in time to be present for major brigade leadership planning events. DOD stated that the Department of the Army agrees that maximum benefit is achieved when the entire augment of advisors is available and prepared to participate in both pre-deployment planning and training events. However, due to the nature of advisor force requirements, DOD's comments noted that there will be instances where the entire augment is not available to participate. DOD stated that the Army will maximize coordination, prioritization, and integration of the highest-ranking advisors to ensure participation in deployment planning and training events. We are sending copies of this report to appropriate congressional committees, the Chairman of the Joint Chiefs of Staff, the Secretary of Defense, and the Secretary of the Army. This report will be available at no charge on GAO's Web site, http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9619 or by e-mail at pickups@gao.gov. Contact information for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who have made major contributions to this report are listed in appendix III. To determine the extent to which the Army has developed its concept for augmenting brigade combat teams (BCT) with additional personnel to support security force assistance missions, we reviewed Army guidance, such as the Army field manual for security force assistance and the Modular Brigade Augmented for Security Force Assistance Handbook. We also reviewed advisor and augmented BCT training materials from the 162nd Infantry Training Brigade. Further, we analyzed the 2009 and 2010 requests for forces for augmented BCTs that were submitted by U.S.
Central Command (CENTCOM) for ongoing operations in Iraq and Afghanistan to document advisor personnel requirements for augmented BCTs. We interviewed officials at the Office of the Secretary of Defense, CENTCOM, U.S. Special Operations Command, Joint Staff, Headquarters Department of the Army, U.S. Army Forces Command (FORSCOM), U.S. Army Human Resources Command (HRC), and the Army Capabilities Development Integration Directorate Maneuver Center of Excellence regarding the development of the augmented BCT concept, including how the BCTs were to be augmented, how command and control structures were intended to function, and what advantages, if any, the concept afforded the Army and theater commanders. We interviewed officials at the 162nd Infantry Training Brigade, as well as advisor augments with redeployed and currently deployed augmented BCTs in Iraq and Afghanistan, in order to discuss the structure and content of the advisor training program for advisor augments. We interviewed officials at the Joint Readiness Training Center, as well as officials with redeployed and currently deployed augmented BCTs, in order to discuss the mission rehearsal exercise and its functionality for the augmented BCT. To determine the extent to which the Army has provided augmented BCTs for operations in Iraq and Afghanistan and what challenges, if any, these units have faced in implementing the concept, we reviewed Army unit deployment schedules, after-action reviews and lessons learned from redeployed augmented BCTs, and mission briefings from deployed augmented BCTs and division commanders, dating back to 2009. We also analyzed the above-mentioned requests for forces submitted by CENTCOM for augmented BCTs to document advisor personnel and equipment requirements for augmented BCTs and guidance provided by theater commanders on augmented BCT and advisor task organization, advisor support, advisor command and control, and augmented BCTs' roles, missions, and priorities.
Additionally, we reviewed key documents related to the advising mission and priorities from theater commanders in Iraq and Afghanistan. Furthermore, we conducted interviews with a range of deployed and redeployed BCTs that had served or were serving as augmented BCTs in Iraq and Afghanistan. We interviewed augmented BCT officials and advisor personnel regarding augmented BCT task organization, advisor team formation, the integration of advisors into the brigade, the suitability of advisor personnel capabilities, the ability of the brigade to support advisor teams, the equipping requirements for advisor augments, and the guidance received by the brigade on the augmented BCTs’ roles and missions. In addition, we met with theater command- and division-level officials in Iraq and Afghanistan to discuss the execution of the augmented BCT mission in their respective theaters and areas of operation, and management of and guidance provided to augmented BCTs on the advising mission. We also interviewed officials at Headquarters Department of the Army, CENTCOM, FORSCOM, and 162nd Infantry Training Brigade for their perspectives on how the augmented BCT concept is being executed in theater and any related challenges. To determine the extent to which requirements for augmented BCTs have impacted overall Army personnel requirements, including the Army’s ability to provide advisor personnel to BCTs in required time frames, we examined data provided to us by HRC regarding Army shortfalls faced in certain officer ranks currently and in coming years. We also discussed with HRC officials how this data was calculated, including the details of how they determined the fill rate for advisor requirements, overall Army field grade officer shortages, and extent to which requirements for augmented BCTs increased overall Army requirements for field grade officers. 
We found this data to be reliable for the purpose of determining the impact of advisor requirements on overall Army personnel requirements. To gain an understanding of the extent to which BCTs are experiencing late arrival of advisor augment personnel, we analyzed advisor fill rate and arrival time data provided by HRC, FORSCOM, and augmented BCTs, dating back to 2009, and compared those data against the arrival timelines laid out in the requests for forces for each theater. We also met with officials from Headquarters Department of the Army, HRC, FORSCOM, Joint Forces Command, Office of the Secretary of Defense Personnel and Readiness, 162nd Infantry Training Brigade, and redeployed and currently deployed augmented BCTs to discuss the impact of advisor personnel requirements on overall Army personnel requirements, the Army's ability to provide authorized numbers of augment personnel within the specified arrival time frames, and any challenges faced as a result of the late arrival of advisor augments to the BCTs to which they have been assigned. Table 1 below identifies the organizations, offices, commands, and units that we contacted during our review, including the units and commands we met with in Iraq and Afghanistan. To perform our review, we examined an illustrative, non-generalizable sample of redeployed and deployed augmented BCTs. We met with three of the four augmented BCTs that had returned from Iraq and the only augmented BCT that had returned from deployment in Afghanistan at the time that we selected our sites for visits. We also met with deployed augmented BCTs in Iraq and Afghanistan, as well as theater commands and deployed division commands. We selected deployed BCTs for visits based on where they were in their deployments (we aimed for BCTs that were at the midpoints of their deployments so that they had been in theater long enough to be familiar with their missions, but not yet at the point where they were preparing to redeploy).
We worked with theater commands in Iraq and Afghanistan to arrange visits or meetings with deployed BCTs that fit our criteria, making adjustments as needed because of security, transportation, or weather issues. Ultimately, we met with personnel from two augmented BCTs and two divisions in Iraq and personnel from five augmented BCTs and two divisions in Afghanistan. We conducted this performance audit from July 2010 through August 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, key contributors to this report were James Reynolds (Assistant Director), Grace Coleman, Kasea Hamar, Jonathan Mulcare, and Maria Storts.
Developing capable Iraqi and Afghan security forces is a key component of the U.S. military effort in Iraq and Afghanistan and, in 2009, the Army began augmenting brigade combat teams (BCT) with advisor personnel to advise the host nation security forces in these countries. House Armed Services Committee report 111-491 directed GAO to report on the Army's plans to augment BCTs to perform advising missions in Iraq and Afghanistan. This report (1) identifies the key characteristics of the augmented BCT concept; (2) assesses the extent to which the Army has provided augmented BCTs, and what challenges, if any, these units have faced; and (3) assesses the extent to which requirements for augmented BCTs have impacted overall Army personnel requirements, including the Army's ability to provide advisor personnel. GAO examined augmented BCT doctrine and guidance, analyzed advisor requirements, reviewed after-action reviews and lessons learned from augmented BCTs, and interviewed Army, theater command, and augmented BCT officials. Army guidance identifies key characteristics of the augmented BCT concept, such as how advisors are to be organized, commanded, and supported. For example, BCT commanders are to organize the advisors into teams, with other necessary resources being provided to the teams by the brigade. The theater commander determines the specific numbers and types of advisors based upon the operational environment and mission requirements. BCTs are envisioned to exercise command of advisor teams and provide support such as specialized personnel, equipment, and transportation and security. However, it is recognized that BCTs may have other priorities and must balance the demand for resources between the advising mission and other missions.
The Army has deployed augmented BCTs in response to theater commanders' requests, but units have faced some challenges because commanders did not always set clear priorities between the advising mission and other missions or define specific requirements for how the BCTs should support the advising mission. For example, theater commanders did not require that advisor teams include specialized personnel, such as logisticians or intelligence officers. Because the BCTs already have high demand for these personnel, the brigades are challenged to meet the advisors' requirements for those same personnel. As a result, some advising teams told GAO that they were limited in their ability to advise in some specialty areas or that they may be advising Iraqi and Afghan security forces in functional areas where the advisors have little or no experience. Also, theater commanders' requests did not always specify command relationships. As a result, in some cases, advisors were reassigned to the control of a division or a brigade that they had not trained and deployed with, which disrupted the unity of command envisioned under the augmented BCT concept. The use of augmented BCTs has decreased the total number of advisor personnel required for the advising mission, but increased requirements for field grade officers, already in short supply. According to Army officials, as a result of these shortages, the Army has faced challenges meeting the requirement to provide field grade advisors at least 45 days prior to the brigades' mission rehearsal exercise. In many cases, advisors did not join the brigades until after the exercise, hindering their integration into the BCTs and complicating efforts to establish support and command structures. Some officials suggested that it would be helpful if at least two or three of the highest-ranking advisors arrived significantly earlier than currently required in order to facilitate integration. 
Moreover, GAO found that augmented BCTs are organizing their advisors into smaller numbers of larger teams than envisioned in the theater commander requirements. As a result, augmented BCTs may not need the number and rank of advisors currently required by those requests. GAO recommends that theater commands assess and refine, as appropriate, advisor requirements and define advisor support and command structures. GAO also recommends that the Army provide certain advisor personnel to brigades earlier in pre-deployment training. DOD concurred with the recommendations.
The Homeland Security Act of 2002 created DHS and gave the department wide-ranging responsibilities for, among other things, leading and coordinating the overall national critical infrastructure protection effort. Homeland Security Presidential Directive (HSPD) 7 further defined critical infrastructure protection responsibilities for DHS and SSAs. HSPD-7 directed DHS to establish uniform policies, approaches, guidelines, and methodologies for integrating federal infrastructure protection and risk management activities within and across CIKR sectors. Various other statutes and directives provide specific legal authorities for both cross-sector and sector-specific protection and resiliency programs. For example, the Public Health Security and Bioterrorism Preparedness and Response Act of 2002 was enacted to improve the ability of the United States to prevent, prepare for, and respond to acts of bioterrorism and other public health emergencies, and the Pandemic and All-Hazards Preparedness Act of 2006 addresses, among other things, public health security and all-hazards preparedness and response. Also, the Cyber Security Research and Development Act, enacted in November 2002, authorized funding through fiscal year 2007 for the National Institute of Standards and Technology and the National Science Foundation to facilitate increased research and development for computer and network security and to support related research fellowships and training. CIKR protection issues are also covered under various presidential directives, including HSPD-5 and HSPD-8.
HSPD-5 calls for coordination among all levels of government as well as between the government and the private sector for managing domestic incidents, and HSPD-8 establishes policies to strengthen national preparedness to prevent, detect, respond to, and recover from threatened or actual domestic terrorist attacks, major disasters, and other emergencies. According to the NIPP, these separate authorities and directives are tied together as part of the national approach for CIKR protection through the unifying framework established in HSPD-7. NPPD's IP is responsible for working with public and private sector CIKR partners in the 18 sectors and leads the coordinated national effort to mitigate risk to the nation's CIKR through the development and implementation of CIKR protection and resilience programs. Using a sector partnership model, IP's Partnership and Outreach Division works with sector representatives, including asset owners and operators, to develop, facilitate, and sustain strategic relationships and information sharing. IP's Protective Security Coordination Division (PSCD) provides programs and initiatives to enhance CIKR protection and resilience and reduce risk associated with all-hazards incidents. In so doing, PSCD works with CIKR owners and operators and state and local responders to (1) assess vulnerabilities, interdependencies, capabilities, and incident consequences; (2) develop, implement, and provide national coordination for protective programs; and (3) facilitate CIKR response to and recovery from incidents. Related to these efforts, PSCD has deployed the aforementioned PSAs in 50 states and Puerto Rico, with deployment locations based on population density and major concentrations of CIKR.
In these locations, PSAs are to act as the link between state, local, tribal, and territorial organizations and DHS infrastructure mission partners and are to assist with ongoing state and local CIKR security efforts by:
• establishing and maintaining relationships with state, local, tribal, territorial, and private sector organizations;
• supporting the development of the national risk picture by conducting vulnerability and security assessments to identify security gaps and potential vulnerabilities in the nation's most critical infrastructures; and
• sharing vulnerability information and protective measure suggestions with local partners and asset owners and operators.
As part of their ongoing activities, PSAs are responsible for promoting the ECIP Initiative. Launched in September 2007, the ECIP Initiative is a voluntary program focused on forming or maintaining partnerships between DHS and CIKR owners and operators of high-priority level 1 and level 2 assets and systems, as well as other assets of significant value. According to DHS guidance, PSAs are to schedule ECIP visits with owners and operators in their districts using lists of high-priority and other significant assets provided by PSCD each year, with visits to level 1 assets being the first priority and visits to level 2 assets being the second priority. Visits to other significant assets are to receive subsequent priority based on various factors, including whether they are of significant value based on the direction of IP; have been identified by the state homeland security office; or represent a critical dependency associated with higher-priority assets already identified. If an asset owner or operator agrees to participate in an ECIP visit, PSAs are to meet with the owner or operator to assess overall site security, identify gaps, provide education on security, and promote communication and information sharing among asset owners and operators, DHS, and state governments.
One of the components of the ECIP Initiative is the security survey, formally called the Infrastructure Survey Tool, which a PSA can use to gather information on the asset’s current security posture and overall security awareness. If the asset owner or operator agrees to participate in the security survey, the PSA works with the owner or operator to apply the survey, which assesses more than 1,500 variables covering six major components—information sharing, security management, security force, protective measures, physical security, and dependencies—as well as 42 more specific subcomponents within those categories. For example, within the category “physical security,” possible subcomponents include fences, gates, parking, lighting, and access control, among others. Once the survey is complete, the PSA submits the data to Argonne National Laboratory, which analyzes the data to produce protective measures index scores ranging from 0 (low protection) to 100 (high protection) for the entire asset and for each component of the survey. Argonne National Laboratory also uses the data to produce a “dashboard”—an interactive graphic tool that the PSA provides to the asset owner or operator. The dashboard displays the asset’s overall protective measures score, the score for each of the six major components, the mean protective measures score and major component scores for all like assets in the sector or subsector that have undergone a security survey, and the high and low scores recorded for each component for all sector or subsector assets that have undergone a security survey. The asset score and the scores for other like assets show the asset owner or operator how the asset compares to similar assets in the sector. The asset owner can also use the dashboard to see the effect of making security upgrades to its asset.
For example, if the dashboard shows a low score for physical security relative to those of other like assets, the owner or operator can add data on perimeter fencing to see how adding or improving a fence would increase the asset’s score, thereby bringing it more in line with those of other like assets. Figure 1 provides an example of the dashboard produced as a result of the security survey. Related to these security surveys, DHS also produced, from calendar years 2009 through 2011, summaries of security survey results related to sector or subsector security postures, known as sector summaries. These sector summaries were provided directly to SSAs in 2009 and 2010 and, according to program officials, were made available to SSAs upon request in 2011. Unlike the summaries in past years, the 2011 summaries also included an “options for consideration” section that identified specific protective measures that had been adopted by the top 20 percent of assets in the sector or subsector as measured by the overall protective measures score. DHS also uses vulnerability assessments to identify security gaps and provide options for consideration to mitigate these identified gaps. These assessments are generally on-site, asset-specific assessments conducted at the request of asset owners and operators. As of September 30, 2011, DHS had conducted more than 1,500 vulnerability assessments. Generally, vulnerability assessments are conducted at individual assets by IP assessment teams in coordination with PSAs, SSAs, state and local government organizations (including law enforcement and emergency management officials), asset owners and operators, and the National Guard, which is engaged as part of a joint initiative between DHS and the National Guard Bureau (NGB).
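At its core, the dashboard comparison described above sets one asset’s component scores against summary statistics for like assets. The sketch below illustrates the idea only: the six component names come from this report, but the data layout, scores, and aggregation logic are assumptions, not Argonne National Laboratory’s actual methodology.

```python
# Illustrative sketch of the dashboard comparison concept. Component names
# are from the report; all scores and the comparison logic are hypothetical.

COMPONENTS = ["information sharing", "security management", "security force",
              "protective measures", "physical security", "dependencies"]

def compare_to_sector(asset_scores, sector_assets):
    """For each component, pair the asset's score with the sector mean,
    high, and low -- the comparison a dashboard displays."""
    report = {}
    for c in COMPONENTS:
        peers = [a[c] for a in sector_assets]
        report[c] = {
            "asset": asset_scores[c],
            "sector_mean": sum(peers) / len(peers),
            "sector_high": max(peers),
            "sector_low": min(peers),
        }
    return report

# Hypothetical data: one asset compared with two peers in the same subsector.
asset = {c: 55 for c in COMPONENTS}
peers = [{c: 70 for c in COMPONENTS}, {c: 40 for c in COMPONENTS}]
result = compare_to_sector(asset, peers)
print(result["physical security"])
# {'asset': 55, 'sector_mean': 55.0, 'sector_high': 70, 'sector_low': 40}
```

An owner or operator could rerun such a comparison with a raised physical security score to preview the effect of a planned upgrade, which is the “what if” use of the dashboard the report describes.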
These assessment teams are staffed via an interagency agreement between DHS and NGB and include two national guardsmen—a physical security planner and a systems analyst, one of whom serves as the team lead. They may also be supplemented by contractor support or other federal personnel, such as PSAs or subject matter experts, when requested. Argonne National Laboratory staff then finalize the vulnerability assessment report—which includes options for consideration to increase an asset’s ability to detect and prevent terrorist attacks and mitigation options that address the identified vulnerabilities of the asset—and provide it to the PSA for delivery. The asset owners and operators that volunteer for the vulnerability assessments are the primary recipients of the analysis. The vulnerability assessment is developed using a questionnaire that focuses on various aspects of the security of an asset, such as vulnerabilities associated with access to asset air handling systems, physical security, and the ability to deter or withstand a blast or explosion. The vulnerability assessment report also contains a section called “options for consideration” where DHS makes suggestions to improve asset security or reduce identified vulnerabilities. For example, one vulnerability assessment report made suggestions to the asset owners or operators to explore the option of installing additional cameras to improve video surveillance in certain locations, install additional barriers to prevent vehicles from entering the facility at high speeds, and increase the training of its security staff. DHS revised the vulnerability assessment methodology in 2010 to enhance the analytical capabilities of IP. According to DHS officials, vulnerability assessments developed prior to 2010 did not have a consistent approach for gathering data on assets and did not produce results that were comparable from asset to asset. They also did not incorporate an approach for assessing asset resilience. 
DHS reported that the revised vulnerability assessment is intended to incorporate about 75 percent of the questions currently asked during an ECIP security survey, including questions on resilience, to bring the tool more in line with the security survey. As a result, vulnerability assessments completed beginning in 2011 have the capability to produce a dashboard similar to that produced from security surveys. By revising the assessment methodology, DHS intends to ensure that the data collected during the vulnerability assessment can be compared within and across sectors and subsectors while still providing each asset an assessment specific to that asset, including options for consideration to reduce vulnerability. While not the focus of this review, DHS has developed the Regional Resiliency Assessment Program (RRAP) to assess vulnerability and risk associated with resiliency. The RRAP is an analysis of infrastructure “clusters,” regions, and systems in major metropolitan areas that uses security surveys and vulnerability assessments, along with other tools, in its analysis. The RRAP evaluates CIKR on a regional level to examine vulnerabilities, threats, and potential consequences from an all-hazards perspective to identify dependencies, interdependencies, cascading effects, resiliency characteristics, and gaps. RRAP assessments are conducted by DHS officials, including PSAs, in collaboration with SSAs; other federal officials; state, local, territorial, and tribal officials; and the private sector, depending upon the sectors and assets selected, as well as a resiliency subject matter expert or experts. The results of the RRAP are to be used to enhance the overall security posture of the assets, surrounding communities, and the geographic region covered by the project and are shared with the state.
According to DHS officials, the results of specific asset-level assessments conducted as part of the RRAP are made available to asset owners and operators and other partners (as appropriate), but the final analysis and report are delivered to the state where the RRAP occurred. Further, according to DHS, while it continues to perform surveys and assessments at individual assets, prioritizing efforts to focus on regional assessments allows DHS to continue to meet evolving threats and challenges. DHS conducted about 2,800 security surveys and vulnerability assessments during fiscal years 2009 through 2011. In so doing, DHS directed PSAs to contact owners and operators of high-priority assets to offer voluntary security surveys and vulnerability assessments at their assets, and PSAs used these offers as part of their outreach efforts among these assets. However, DHS faces challenges tracking whether security surveys and vulnerability assessments have been performed at high-priority assets. Furthermore, DHS has not developed institutional performance goals that can be used to measure the extent to which owners and operators of high-priority assets participate in security surveys and vulnerability assessments. In addition, DHS is not positioned to assess why some high-priority asset owners and operators decline to participate in these voluntary surveys and assessments so that it can develop strategies for increasing participation. DHS is not positioned to track the extent to which it is conducting security surveys and vulnerability assessments on high-priority assets because of inconsistencies between the databases used to identify high-priority assets and to identify completed surveys and assessments. Consistent with the NIPP, DHS prioritizes the participation of high-priority assets in its voluntary security survey and vulnerability assessment programs and uses the NCIPP list of high-priority assets to guide its efforts.
In February 2011, DHS issued guidance to PSAs that called for them to form partnerships with owners and operators of high-priority assets in their areas. Under the guidelines, PSAs are to use NCIPP lists of high-priority assets to identify and contact owners and operators of these assets in their areas that could benefit from participation in the security surveys, for the purpose of reducing potential security vulnerabilities and identifying protective measures in place. PSAs are to conduct outreach directly by meeting with the asset owners and operators to provide information about DHS efforts to improve protection and resiliency, sharing information about how an asset owner or operator can request a vulnerability assessment, and offering to conduct a security survey. If the owner or operator agrees to a visit from the PSA, the PSA is to record the date of the visit, and if the owner or operator agrees to participate in a security survey or vulnerability assessment, the PSA is likewise to record the day the security survey or vulnerability assessment was conducted. DHS analysts are then required to record the data provided by the PSAs in DHS’s Link Encrypted Network System (LENS) database—DHS’s primary database for tracking efforts to promote and complete security surveys and annual assessments. According to DHS guidelines, these data are subject to weekly reviews to ensure that data recorded in LENS are accurate, consistent, and complete. Thus, data on each individual asset should be recorded so that asset sector, name, and physical address reflect a single asset in a specified location throughout the database.
For example, according to the guidelines, asset names recorded in LENS should not contain stray asterisks, other special characters, or notes, and to the extent possible, address fields such as “St” should be captured as “Street.” To determine how many of these activities have been conducted on high-priority assets, we used an automated statistical software program to compare data on security surveys and vulnerability assessments completed in DHS’s LENS database with data on high-priority assets on the NCIPP lists for fiscal years 2009 through 2011—the lists PSAs are to use to contact officials representing high-priority assets in their areas. Out of 2,195 security surveys and 655 vulnerability assessments conducted during fiscal years 2009 through 2011, we identified a total of 135 security surveys and 44 vulnerability assessments that matched assets on the NCIPP lists of high-priority assets. We also identified an additional 106 security surveys and 23 vulnerability assessments that were potential matches with assets on the NCIPP lists, but we could not be certain that the assets were the same because of inconsistencies in the way the data were recorded in the two databases. For example, we found instances where assets that appeared to be the same company or organization were listed in different sectors. We also encountered instances where names of companies at the same address did not match exactly or where companies with the same names had slightly different addresses in the two databases. For example, an asset at 12345 Main Street in Anytown, USA, might appear as ABC Company on one list and ABC on another. Conversely, we also found instances where company names appeared to be the same or similar on both lists, but they were listed at different street addresses or on different streets. In this case, for example, ABC Company might appear as being located on Main Street on one list and E. Main St. on another.
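Mismatches like these are typical when two databases describe the same entities without a shared key, and they are why simple exact comparisons undercount matches. A minimal sketch of the normalization and fuzzy matching such a comparison requires follows; the record fields, abbreviation table, and 0.85 similarity threshold are assumptions for illustration only, since the LENS and NCIPP schemas are not public.

```python
# Illustrative record-linkage sketch: normalize name/address strings, then
# score their similarity. Field names and the threshold are hypothetical.
import re
from difflib import SequenceMatcher

ABBREVIATIONS = {"st": "street", "e": "east", "co": "company"}

def normalize(text):
    """Lowercase, strip special characters, and expand common
    abbreviations (e.g., "St" -> "Street")."""
    words = re.sub(r"[^a-z0-9 ]", " ", text.lower()).split()
    return " ".join(ABBREVIATIONS.get(w, w) for w in words)

def likely_same_asset(rec_a, rec_b, threshold=0.85):
    """Treat two records as a probable match when their normalized
    name-plus-address strings are sufficiently similar."""
    a = normalize(rec_a["name"] + " " + rec_a["address"])
    b = normalize(rec_b["name"] + " " + rec_b["address"])
    return SequenceMatcher(None, a, b).ratio() >= threshold

lens = {"name": "ABC Company", "address": "12345 Main St"}
ncipp = {"name": "ABC Co.", "address": "12345 Main Street"}
print(likely_same_asset(lens, ncipp))  # True
```

Even with such normalization, near-threshold pairs still need human review, which is consistent with DHS’s account of matching records manually; assigning one unique identifier to a confirmed pair, as DHS began doing, turns every later comparison into an exact key lookup.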
We contacted DHS officials responsible for maintaining the LENS database and the NCIPP list and told them that we had encountered difficulty matching company names and addresses in the two lists. We explained that our results depended on an asset being described in a similar manner—same name, same address, same sector—in both the NCIPP and LENS databases. These officials acknowledged that the two databases do not match and explained that they have had to match the data manually because of the inconsistencies. Specifically, DHS reported that it reviewed over 10,000 records—including records of security surveys, vulnerability assessments, and the NCIPP lists for fiscal years 2009 through 2011—and manually matched assets that had participated in surveys or assessments with the NCIPP lists of high-priority assets using DHS officials’ knowledge of the assets. Based on its efforts, DHS analysts provided a table that showed that DHS conducted 2,128 security surveys and 652 vulnerability assessments, of which it identified 674 surveys and 173 assessments that were conducted on high-priority assets. Thus, by manually matching assets across the two lists, DHS was able to show that the percentage of high-priority assets surveyed and assessed increased significantly. Table 1 illustrates the results of our efforts to match the data using an automated software program and the results of DHS’s efforts to manually match the data. DHS officials noted that beginning with the fiscal year 2012 NCIPP lists, they have begun to apply unique numerical identifiers to each asset listed in LENS and the NCIPP lists. According to these officials, once a match is made, the application of unique identifiers to the same assets in both databases is intended to remove uncertainty about which asset is which, regardless of variations in the name or address of the asset. 
Related to this, DHS officials also said that they have initiated a quality assurance process whereby they use descriptive data—such as geographic coordinates (longitude and latitude)—to verify street addresses and names, thereby giving IP the ability to more readily make matches in those instances where it may have previously experienced difficulty doing so. Nonetheless, they said that the NCIPP list continues to present matching challenges because there have been “significant” changes in the list from year to year, though they anticipate fewer changes in the future. Most recently, the format and organization of the list have changed to focus on clusters—groups of related assets that can be disrupted through a single natural or man-made hazard, excluding the use of weapons of mass destruction—rather than on individual assets. Thus, some assets previously considered high priority as stand-alone assets are now listed as part of a system or as clusters that in and of themselves are no longer considered high priority. According to DHS officials, the introduction of clusters has resulted in other data matching challenges, including the duplicate entry of an NCIPP asset that spans two states; multiple entries for a single asset that is listed both individually and in relation to a cluster or a system; and multiple entries for a single asset within several clusters or systems. DHS officials added that with the assignment of the unique identifier, they expect to be better positioned to cross-reference their program activities with the NCIPP list. DHS officials have stated that the discrepancies between our analyses and the analysis performed by IP, as well as the confusion created by factors such as changing data sets, made it clear that improvements should be made in the collection and organization of the data.
Accordingly, DHS officials said that they are continuing to work with various partners within DHS and its contractors to streamline and better organize the list of high-priority assets and data associated with assessments, surveys, and other IP field activities. However, DHS did not provide milestones and time frames for completing these efforts. DHS appears to be heading in the right direction in taking actions to resolve many of the issues we identified with regard to matching data and data inconsistencies. However, moving forward, DHS would be better positioned if it were to develop milestones and time frames for its plans to accomplish these tasks. Standard practices for project management state that managing a project involves, among other things, developing a timeline with milestone dates to identify points throughout the project to reassess efforts under way to determine whether project changes are necessary. By developing time frames and milestones for streamlining and organizing the lists of high-priority assets and data associated with surveys, assessments, and field activities, DHS would be better positioned to provide a more complete picture of its approach for developing and completing these tasks. It also would provide DHS managers and other decision makers with insights into (1) IP’s overall progress in completing these tasks and (2) a basis for determining what, if any, additional actions need to be taken. As DHS moves forward to improve its efforts to track the hundreds of security surveys and vulnerability assessments it performs each year, DHS could also better position itself to measure its progress in conducting these surveys and assessments at high-priority assets. 
We have previously reported that to efficiently and effectively operate, manage, and oversee programs and activities, agencies need reliable information during their planning efforts to set realistic goals and later, as programs are being implemented, to gauge their progress toward achieving those goals. In July 2011, the PSCD Deputy Director told us that PSCD had a goal that 50 percent of the security surveys and vulnerability assessments conducted each year be on high-priority assets. However, this goal was not documented; PSCD did not have written goals, and the results to date indicate that this goal was not realistic. Specifically, according to DHS’s 2010 NAR, less than 40 percent (299 of 763) of security surveys were conducted on high-priority assets from May 1, 2009, through April 30, 2010. For the same time period, DHS’s NAR reported that about 33 percent (69 of 212) of vulnerability assessments were conducted on high-priority assets. Setting realistic institutional goals for the number of security surveys and vulnerability assessments conducted at high-priority assets—consistent with DHS’s efforts to improve its data on these assets—would enable DHS to better measure its performance and assess the state of security and resiliency at high-priority facilities, across the 18 sectors, over time. For example, if there is a high-priority list consisting of 2,000 facilities, a DHS goal of 500 security surveys and vulnerability assessments conducted on high-priority facilities annually would allow for the potential assessment of all high-priority facilities over a defined period of time. Therefore, DHS could be in a better position to identify security strengths and weaknesses at high-priority facilities and within and across sectors and to target areas for improvement. Consistent with HSPD-7, DHS pursues a voluntary approach to critical infrastructure protection and coordination.
DHS officials told us that many of these assets do not receive voluntary surveys and assessments conducted by PSCD. Rather, as we previously reported, PSCD staff told us that they work with the responsible federal entity, such as the U.S. Coast Guard and the Nuclear Regulatory Commission, to identify and address vulnerabilities. Finally, according to the PSCD Deputy Director, shifting priorities based on terrorist threat information, budget constraints, and other departmentwide priorities affect the prioritization and distribution of assets participating in these voluntary programs. For example, DHS officials stated that given that DHS is placing increased emphasis on regional activities, such as RRAPs, voluntary surveys and assessments are not necessarily focused on individual high-priority assets. They said that the expanded focus on regional activities enables IP to meet evolving threats and challenges but, in a budget-constrained environment, forces them to prioritize activities so that they can leverage existing resources. Standards for Internal Control in the Federal Government (GAO/AIMD-00-21.3.1) also calls for accurate and timely recording of information and periodic record reviews to help reduce the risk of errors. DHS officials told us that they conduct data quality checks, and DHS guidelines direct such actions. However, the extent to which data were inconsistent indicates that information was not always accurately captured. According to the NIPP, the use of performance metrics is a critical step in the risk management process to enable DHS to objectively and quantitatively assess improvement in CIKR protection and resiliency. Specifically, the NIPP states that performance metrics allow NIPP partners to track progress against these priorities and provide a basis for DHS to establish accountability, document actual performance, promote effective management, and provide a feedback mechanism to decision makers.
Consistent with the NIPP risk management framework, our past work has shown that leading organizations strive to align their activities to achieve mission-related goals. By using LENS and NCIPP data to establish performance goals, DHS could also be better positioned to identify gaps between expected and actual participation, track progress in achieving higher levels of participation, and ultimately gauge the extent to which protection and resiliency are enhanced for the nation’s most critical assets. Relying on institutional goals rather than informal goals would also provide assurance that DHS has a common framework for measuring performance in the face of organizational or personnel changes over time. DHS guidelines issued in February 2011 call for PSAs to document the names and addresses of CIKR asset owners or operators that decline to participate in security survey outreach activities as well as the reasons they declined. DHS officials told us that they currently track aggregate data on declinations but do not document the reasons why asset owners and operators decline to participate in the security survey and vulnerability assessment programs. In November 2011, DHS provided a list of 69 asset owners or operators that PSAs recorded as having declined to participate in the security surveys from March 2009 through 2011, but these records did not identify reasons for the declinations. Program officials told us that the tool with which they collect declination information is not designed to capture such information. The Deputy Director for PSCD said that, in 2012, DHS is developing a survey tool that PSAs can use to record why asset owners or operators decline to participate. Nonetheless, DHS could not provide specifics as to what would be included in the tool, which office would be responsible for implementing it, or time frames for its implementation.
Rather, officials told us that they intend to use the results of our review to inform improvements to the process. Regarding vulnerability assessments, the assessment guidance is silent on whether PSAs are to discuss declinations with asset owners and operators and document why they declined. However, PSCD issued guidance in January 2012 that states that the vulnerability assessment guidance is designed to complement the ECIP guidance issued in February 2011. In our survey of PSAs, respondents provided some anecdotal reasons as to why asset owners and operators may decline to participate. For example, when asked how often they had heard various responses from asset owners and operators that declined to participate in security surveys or vulnerability assessments, PSAs responded that reasons for declinations can include (1) the asset was already subject to federal or state regulation or inspections, (2) the identification of security gaps could render the owner of the asset liable for damages should an incident occur, or (3) the asset owner or operator had concerns that the information it provided would not be properly safeguarded by DHS. Figure 2 shows the frequencies of PSA responses of either “often” or “sometimes” to our survey question about the various reasons for declinations that they have heard. Appendix III shows the results of our survey in greater detail. While these PSA perceptions may reflect some reasons asset owners and operators decline to participate, it is important that DHS systematically identify reasons why high-priority asset owners and operators may decline to participate, especially if reasons differ from PSA region to PSA region or by sector or subsector. By doing so, DHS may be able to assess which declinations are within DHS’s ability to control or influence and strategize how the security survey and vulnerability assessment programs and DHS’s approach toward promoting them can be modified to overcome any barriers identified.
For example, 39 percent (31 of 80) of the PSAs who responded to our survey suggested that senior-level partners, including senior leaders within DHS, could better support the promotion of the security survey program when those leaders interact with CIKR partners at high-level meetings. According to DHS, NPPD and IP officials meet often with nonfederal security partners, including sector coordinating councils (SCC), industry trade associations, state and local agencies, and private companies, to discuss the security survey, vulnerability assessment, and other programs to assist in educating mission partners about the suite of available IP tools and resources. Meeting with security partners to discuss IP’s surveys, assessments, and other programs is consistent with the NIPP partnership model, whereby DHS officials in headquarters are to promote vulnerability assessments at high-level meetings where corporate owners are present—such as at SCC or Federal Senior Leadership Council meetings—and through the SSAs responsible for sector security. The NIPP also calls for DHS to rely on senior-level partners, such as the SCCs and state representatives, to create a coordinated national framework for CIKR protection and resilience within and across sectors and with industry representatives that includes the promotion of risk management activities, such as vulnerability assessments. Given the barriers to participation identified in our PSA survey, we contacted officials with 12 industry trade associations representing the water, commercial facilities, dams, and energy sectors to get their views on their awareness of DHS security surveys and vulnerability assessments. Officials representing 10 of the 12 trade associations said that they were aware of DHS’s voluntary survey and vulnerability assessment programs, but only 6 of the 12 knew whether any of their members had participated in these programs.
As noted earlier, at the time of our review DHS was not systematically collecting data on reasons why some owners and operators of high-priority assets decline to participate in security surveys or vulnerability assessments. Officials stated that they realize that some of the data necessary to best manage these programs are not currently being collected; for example, PSAs are not consistently reporting assessment and survey declinations from assets. DHS officials added that in an effort to increase efficiency and accuracy, they are developing additional data protocols to ensure that all the applicable data are being collected and considered to provide a more holistic understanding of the programs. Given that DHS efforts are just beginning, however, it is too early to assess the extent to which they will address these data collection challenges. Nevertheless, by developing a mechanism to systematically collect data on the reasons for declinations, consistent with DHS guidelines, DHS could be better positioned to identify common trends for such declinations, determine what programmatic and strategic actions are needed to manage participation among high-priority assets, and develop action plans with time frames and milestones to serve as a road map for addressing any problems. This could enhance the overall protection and resilience of those high-priority CIKR assets crucial to national security, public health and safety, and the economy. Given that DHS officials recognize the need to collect these data to obtain a more holistic understanding of these programs, DHS could be better positioned if it had a plan, with time frames and milestones, for developing and implementing these protocols.
Standard practices for project management state that managing a project involves, among other things, developing a plan with time frames and milestones to identify points throughout the process to reassess efforts under way to determine whether project changes are necessary. By having a plan with time frames and milestones for developing additional data protocols, IP could be better positioned to provide a more complete picture of its effort to develop and complete this task. This could also provide DHS managers and other decision makers with (1) insights into IP’s overall progress and (2) a basis for determining what, if any, additional actions need to be taken. DHS shares security survey and vulnerability assessment information with asset owners and operators that participate in these programs and shares aggregated sector information with SSAs. However, DHS faces challenges ensuring that this information is shared with asset owners and operators in a timely manner and in providing SSAs security survey-derived products that can help SSAs in their sector security roles. According to DHS officials, they are working to overcome these challenges, but it is unclear whether DHS actions will address SSA concerns about the use of aggregate security survey data. DHS security surveys and vulnerability assessments can provide valuable insights into the strengths and weaknesses of assets and can help asset owners and operators make decisions about investments to enhance security and resilience. For example, our survey of PSAs showed that most PSAs believe that the survey dashboard and the vulnerability assessment were moderately to very useful tools for reducing risk at CIKR assets. Specifically, 89 percent of PSAs (71 of 80) and 83 percent of PSAs (66 of 80) responded that the security surveys and vulnerability assessments, respectively, were moderately to very useful products for reducing risk.
One PSA commented that “The dashboard is the first tool of its kind that allows the owner/operator a clear and measurable quantitative picture of existing security profile,” while another commented that “[vulnerability assessments] provide specific, actionable items for the owner/operator to take action on to decrease vulnerabilities.” Our discussions with various CIKR stakeholders—specifically asset owners and operators and SSA representatives—also showed that these tools can be useful to the asset owners and operators that participate in these programs. As will be discussed later in greater detail, 6 of the 10 asset owners and operators we contacted used the results of these survey and assessment tools to support proposals for security changes at the assets that had been assessed. As one owner and operator said, these voluntary programs provide a fresh look at facility security from a holistic perspective. Another asset operator told us that it is nice to be able to see how its security practices compare to those of others within its sector. The representatives of the four SSAs we spoke with also believe the security survey and vulnerability assessments were beneficial to the asset owners and operators that received them. The usefulness of security survey and vulnerability assessment results could be enhanced by the timely delivery of these products to the owners and operators that participated in them. For example, facility owners may not see the importance of an identified security weakness if they do not receive this information soon after a security survey or vulnerability assessment is completed. Furthermore, the inability to deliver results within the expected time frame could undermine the relationship DHS is attempting to develop with asset owners and operators. As mentioned earlier, PSAs rely on Argonne National Laboratory to provide them with the results of the vulnerability assessments, which PSAs, in turn, deliver directly to asset owners and operators.
While PSAs find the voluntary programs useful, 14 percent of PSAs we surveyed (11 of 80) described late delivery of the reports as a factor that undermines the usefulness of vulnerability assessments. One PSA commented that "the program is broken in regard to timely completion of reports and deliverables (protective measures and resiliency dashboards) for the asset owners/operators. I have yet to receive anything from (a vulnerability assessment conducted several months ago). I have not even received the draft report for review nor the dashboard. This creates a big credibility problem for me with my stakeholders who are looking for the results." The NIPP states that in order to have an effective environment for information sharing, CIKR partners need to be provided with timely and relevant information that they can use to make decisions. Consistent with the NIPP, DHS guidelines state that PSAs are to provide the results of security surveys in the form of a survey dashboard within 30 days of when the security survey was completed. In addition, according to PSCD officials, although there is no written guidance, PSCD expects that vulnerability assessment results are to be provided to assets within 60 days of completion of the vulnerability assessment. We analyzed DHS LENS data to determine the extent to which survey dashboards were delivered to asset owners and operators on a timely basis, using DHS's 30-day criterion for timeliness. Our analysis showed that for fiscal year 2011, more than half of all dashboards and vulnerability assessment reports were delivered to owners and operators late. Specifically, of the 570 dashboard reports that were supposed to be delivered during fiscal year 2011, about 24 percent (139 of 570) were delivered on time and approximately 60 percent (344 of 570) were late, with almost half of those delivered 30 days beyond the 30-day deadline established by DHS guidelines.
Data were missing for about 15 percent (85 of 570) of dashboard deliveries for all security surveys conducted in fiscal year 2011. DHS has taken actions to determine whether asset owners or operators have made security improvements based on the results of security surveys. However, DHS has not developed an overall approach to determine (1) the extent to which changes have enhanced asset protection and resilience over time or (2) why asset owners and operators do not make enhancements that would help mitigate vulnerabilities identified during security surveys and vulnerability assessments. As a result, DHS may be overlooking an opportunity to make improvements in the management of its voluntary risk mitigation programs that could also help DHS work with asset owners and operators to improve security and resilience. According to DHS, moving forward, it may consider changes to the types of information gathered as part of its effort to measure improvements, but it has not considered what additional information, if any, should be gathered from asset owners or operators that participate in security surveys and vulnerability assessments. According to the NIPP, the use of performance measures is a critical step in the risk management process to enable DHS to objectively and quantitatively assess improvement in CIKR protection and resiliency at the sector and national levels. The NIPP states that the use of performance metrics provides a basis for DHS to establish accountability, document actual performance, promote effective management, and provide a feedback mechanism to decision makers. Consistent with the NIPP, DHS has taken action to follow up with security survey participants to gather feedback from asset owners and operators that participated in the program regarding the effect these programs have had on asset security using a standardized data collection tool, hereafter referred to as the follow-up tool or tool.
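The dashboard-delivery timeliness analysis described above amounts to classifying each survey record by its delivery lag against the 30-day guideline and tallying the on-time, late, and missing shares. The following is a minimal sketch of that kind of calculation, not the actual GAO or LENS tooling; the record layout and function names are assumptions for illustration only.

```python
from datetime import date, timedelta

# Hypothetical illustration of the timeliness breakdown: each record pairs a
# survey-completion date with a dashboard-delivery date (None when no delivery
# date was recorded). Field layout is an assumption, not the real LENS schema.
DEADLINE_DAYS = 30  # DHS guideline: dashboard due within 30 days of the survey


def categorize(completed, delivered, deadline_days=DEADLINE_DAYS):
    """Classify one dashboard delivery as 'on_time', 'late', or 'missing'."""
    if delivered is None:
        return "missing"
    return "on_time" if (delivered - completed).days <= deadline_days else "late"


def timeliness_summary(records):
    """Return (count, rounded percent) per category for a list of records."""
    counts = {"on_time": 0, "late": 0, "missing": 0}
    for completed, delivered in records:
        counts[categorize(completed, delivered)] += 1
    total = len(records)
    return {k: (v, round(100 * v / total)) for k, v in counts.items()}


if __name__ == "__main__":
    d = date(2011, 1, 1)
    sample = [
        (d, d + timedelta(days=20)),  # delivered within the 30-day window
        (d, d + timedelta(days=45)),  # late
        (d, d + timedelta(days=75)),  # late, more than 30 days past deadline
        (d, None),                    # no delivery date recorded
    ]
    print(timeliness_summary(sample))
```

Run against the full fiscal year 2011 record set, a summary of this shape would reproduce the on-time/late/missing percentages cited above.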
DHS first began to do follow-ups with asset owners and operators in May 2010 but suspended its follow-up activities shortly thereafter to make enhancements to the tool it used. In January 2011, IP introduced its revised follow-up tool, which was to be used by PSAs to ask asset representatives whose assets had undergone a security survey and received a dashboard about enhancements made in six general categories—information sharing, security management, security force, protective measures, physical security, and dependencies. Whereas the original follow-up tool focused on changes asset owners and operators made to enhance security and resilience, the revised tool focused on changes that were made directly as a result of DHS security surveys. According to DHS guidance, the tool was to be used 180 days after the completion of a security survey at an asset. The tool, which directs PSAs to ask a series of questions about improvements made as a result of the survey, instructs PSAs to request information on specific enhancements within those categories that were discussed in the dashboard provided to the asset owners and operators. For example, within the physical security category, the tool instructs the PSAs to ask about any enhancements to things like fences, gates, parking, lighting, and access control, among others, and to ask asset owners or operators whether an identified change was made as a result of the security survey the asset had received. In February 2011, shortly after the revised tool was introduced, IP issued guidelines that instructed PSAs to implement the follow-up tool. According to IP officials, PSAs used the tool to follow up with owners and operators of 610 assets from January 2011 through September 2011. 
Data provided by IP showed that about 21 percent (126 of the 610) of the respondents to the PSA follow-ups reported that they had completed improvements, and 81 percent of these (102 of 126) reported that those improvements were implemented as the result of the security survey the asset received. According to IP's data, the most common types of improvements identified by assets that had completed improvements since receiving the security survey were changes to information sharing, which could include activities such as participating in working groups, and physical security. DHS guidance states that PSAs are to conduct a follow-up with the asset owners and operators 180 days after an asset receives a security survey. We compared DHS data on 522 security surveys conducted from July 1, 2010, through March 31, 2011, with DHS data on the follow-ups performed from January 1, 2011, through September 30, 2011—180 days after DHS completed the security surveys. We found that DHS did not contact some asset owners or operators that should have received a 180-day follow-up and contacted some owners and operators that had participated in a security survey more than 180 days prior to the introduction of the tool. For example, of the 522 security survey participants that participated in a security survey from July 1, 2010, through March 31, 2011, 208 (40 percent) received the 180-day follow-up and 314 (60 percent) did not. Furthermore, DHS recorded an additional 402 follow-ups on assets that had received their security survey more than 180 days prior to the introduction of the tool. Thus, the data DHS reported included improvements assets made beyond the 180-day scope of the follow-up tool, making it difficult to measure the effectiveness of the security survey in prompting enhancements within 180 days of the survey. According to PSCD officials, there are two key reasons why DHS used the follow-up tool to capture data on changes made beyond 180 days.
First, program officials said that completion of the 180-day follow-up depends upon the asset representative's willingness to participate and availability to answer these questions. If the asset representative does not agree to participate, or neither the representative nor the PSA is available, the 180-day follow-up cannot be completed on schedule. However, when DHS provided the follow-up data in November 2011, officials said that they were not aware of any asset owners or operators that had refused to participate in the 180-day follow-up at that time. Second, program officials noted that the inclusion of assets that had received a security survey more than 180 days prior to the introduction of the revised follow-up tool occurred because they believed that it was necessary to capture data on as many assets as possible. They said that IP intends that follow-ups be completed as close to the 180-day mark as possible, but they believed it was important to initially document whether the security survey resulted in changes to security, regardless of when the change was made. IP officials further explained that they had developed a similar follow-up tool to capture data on enhancements resulting from vulnerability assessments. However, at the time of our review, results were not available from the vulnerability assessment follow-up tool, which was also implemented in January 2011 and was designed to capture data on enhancements made 365 days following the delivery of the vulnerability assessment report. As with the security survey follow-up, DHS officials explained that the 365-day interval was chosen as a means to begin the process of collecting and assessing data on improvements being made as a result of the assessments.
They added that as more data are collected, IP will review the information to determine if the follow-up visits for security surveys and vulnerability assessments should remain at 180 and 365 days, respectively, or be moved as a result of information collected from asset owners and operators. Nonetheless, DHS officials did not provide a road map with time frames and milestones showing when they planned to revisit the 180-day follow-up time frame or the intervals between follow-ups. Consistent with the standards for project management, by having a road map with time frames and milestones for revisiting these time frames, IP could be better positioned to provide a more complete picture of its overall progress in making these decisions and a basis for determining what, if any, additional actions need to be taken or data inputs need to be made. This is especially true if asset owners and operators are implementing more complicated enhancements over a longer term because of the need to develop and fund plans for particular types of improvements. For example, gathering these data could help DHS measure not only what improvements asset operators are implementing, but also how long it takes to complete the planning phase of a security enhancement project and how this time frame might vary by the type of improvement. Furthermore, while it is important to capture information about improvements made as a result of these activities over time, it is also important that DHS either capture the information within the prescribed times outlined in DHS guidance, adjust the time frames based on an analysis of data captured over time, or perform follow-ups at additional intervals beyond those initially performed. This would also be consistent with Standards for Internal Control in the Federal Government (GAO/AIMD-00-21.3.1), which calls for the establishment and review of performance measures and indicators to monitor activities and top-level reviews by management to track major agency achievements and compare these with plans, goals, and objectives. By doing so, IP could be better positioned to document actual performance, promote effective management, provide a feedback mechanism to decision makers, and enhance overall accountability. According to DHS officials, moving forward, DHS may consider additional changes to its follow-up tool depending on the results they gather over time. The NIPP states that performance measures that focus on outputs, called output measures, such as whether an asset completes a security improvement, should track the progression of a task. The NIPP further states that outcome measures are to track progress toward an intended goal by beneficial results rather than level of activity. Our review of DHS's approach for following up with assets that had undergone a security survey showed that PSAs were instructed to focus on security enhancements completed as a result of the security survey, not enhancements that were planned or in process. Nonetheless, our review of DHS's follow-up results for the period from January through September 2011 showed that 41 percent (250 of 610) of the owners and operators surveyed reported that security enhancements were either in process or planned, and the results did not indicate whether these planned or in-process enhancements were attributable to DHS's security survey at these assets. After we discussed our observation with DHS officials, they informed us that they believe completed improvements are the best initial measurement of the impact of security surveys and vulnerability assessments. They added that other metrics can be added as the process matures and is refined.
However, as of March 2012, DHS did not document whether planned or in-process improvements are the result of security surveys. Given that the NIPP calls for CIKR partners to measure performance in the context of the progression of the task, DHS could be missing an opportunity to measure performance associated with planned and in-process enhancements, especially if they are attributable to DHS efforts via security surveys and vulnerability assessments. DHS could also use this opportunity to consider how it can capture key information that could be used to understand why certain improvements were or were not made by asset owners and operators that have received surveys and assessments. For example, the follow-up tool could ask asset representatives what factors—such as cost, vulnerability, or perception of threat—influenced the decision to implement changes, either immediately or over time, if they chose to make improvements; what factors—such as perception of risk, cost, or budget constraints—influenced an asset owner or operator to choose not to make any improvements; why the improvements that were made were chosen over other possible improvements; and whether the improvements, if any, involved the adoption of new or more cost-effective techniques that might be useful as an option for other owners and operators to consider as they explore the feasibility of making improvements. Understanding why an asset owner or operator chooses to make, or not make, improvements to its security is valuable information for understanding the obstacles asset owners or operators face when making security investments. For example, the cost of security upgrades can be a barrier to making enhancements. As one PSA who responded to our survey commented, "there is no requirement for the owner/operator to take action. They are left with making a 'risk-reward' decision.
Some see great value in making security upgrades, while others are less inclined to make improvements due to costs." Likewise, one asset representative told us that security is one of the most important things to management until budget time. In this regard, a better understanding of the complexity of the security improvement decision at the asset could also help DHS better understand the constraints asset owners or operators face in making these decisions—information that could possibly help DHS determine how, if at all, to refine its security survey program to assist asset owners or operators in making these decisions. For example, the NIPP states that effective CIKR programs and strategies seek to use resources efficiently by focusing on actions that offer the greatest mitigation of risk for any given expenditure. Additional information on the cost of improvements made and the reasons why improvements were or were not made could also assist DHS in understanding the trade-offs asset owners and operators face when making decisions to address vulnerabilities identified as a result of DHS security surveys and vulnerability assessments. IP officials told us they are wary of attempting to gather too much information from asset representatives with the follow-up tool because of a concern that being too intrusive may damage the relationships that the PSAs have established with asset representatives. They said that gathering additional information is not as important as maintaining strong relationships with the asset representatives. We recognize that DHS operates its security survey program in a voluntary environment and that DHS can only succeed at improving asset and sector security if asset owners and operators are willing to participate, consistent with DHS's interest in maintaining good relationships with asset representatives.
However, by gathering more information from assets that participate in these programs—particularly high-priority assets—DHS could be better positioned to measure the impact of its programs on critical infrastructure security at the sector and national levels. Moreover, by collecting and analyzing this type of information, DHS could be better informed in making decisions about whether adjustments to its voluntary programs are needed to make them more beneficial to CIKR assets—a factor which could help DHS further promote participation by asset owners and operators that may previously have been reluctant to participate in DHS security surveys and assessments. Having this type of information could also be important in light of DHS’s efforts to better understand interdependencies between assets via the RRAPs. For instance, by knowing what factors influence decisions to make an improvement, or not, at one asset or a group of assets, DHS could be better positioned to understand how that decision influences the security of other assets that are also part of the RRAP. As a result, DHS and PSAs could then be better positioned to work with owners and operators to mitigate any vulnerabilities arising out of these decisions. It could also help DHS develop and target strategies for addressing why certain enhancements were not made and ultimately put DHS in a better position to measure outcomes, rather than outputs, associated with its efforts to promote protection and resilience via its voluntary risk mitigation programs. DHS has taken important actions to conduct voluntary CIKR security surveys and vulnerability assessments, provide information to CIKR stakeholders, and assess the effectiveness of security surveys and vulnerability assessments. However, further actions could enhance each of these endeavors and provide DHS managers the information they need to ensure that IP is taking appropriate steps toward completing them or making adjustments where needed. 
DHS has not institutionalized realistic goals that could help it measure the effects of its efforts to promote and conduct security surveys and vulnerability assessments among high-priority assets. By developing such goals, DHS could better measure the effects of those efforts. Further, developing a road map with milestones and time frames for (1) taking and completing actions needed to resolve issues associated with data inconsistencies and matching data on the list of high-priority assets with data used to track the conduct of security surveys and vulnerability assessments, (2) completing protocols to systematically collect data on the reasons why some owners and operators declined to participate in the voluntary surveys and assessments, and (3) improving the timely delivery of the results of security surveys and vulnerability assessments could better position DHS to target high-priority assets and provide them with the information they need to make decisions related to security and resiliency. Moreover, by revising its plans to include when and how SSAs will be engaged in designing, testing, and implementing the web-based tool, consistent with its recent efforts to coordinate with CIKR partners, DHS could be positioned to better understand and address their information needs. Consistent with the NIPP, DHS is also continuing to take actions to follow up with asset owners and operators that have participated in security surveys and vulnerability assessments to gauge the extent to which these surveys and assessments have prompted owners and operators to improve security and resilience at their assets. DHS officials said that they intend to review the information DHS gathers from asset owners and operators to determine if the follow-up visits should remain at 180 days after DHS completed the security surveys.
By establishing a road map with milestones and time frames for conducting this review, DHS would be better positioned to provide a picture of its overall progress in making these decisions and a basis for determining what, if any, additional actions need to be taken or data inputs need to be made and whether additional follow-ups are appropriate at intervals beyond the follow-ups initially performed. In addition, collecting detailed data on actions started and planned and, for example, why actions were not taken, could provide DHS valuable information on the decision-making process associated with making security enhancements and enable DHS to better understand what barriers owners and operators face in making improvements to the security of their assets. To better ensure that DHS's efforts to promote security surveys and vulnerability assessments among high-priority CIKR are aligned with institutional goals, that the information gathered through these surveys and assessments meets the needs of stakeholders, and that DHS is positioned to know how these surveys and assessments could be improved, we recommend that the Assistant Secretary for Infrastructure Protection, Department of Homeland Security, take the following seven actions:

• develop plans with milestones and time frames to resolve issues associated with data inconsistencies and matching data on the list of high-priority assets with data used to track the conduct of security surveys and vulnerability assessments;

• institutionalize realistic performance goals for appropriate levels of participation in security surveys and vulnerability assessments by high-priority assets to measure how well DHS is achieving its goals;

• design and implement a mechanism for systematically assessing why owners and operators of high-priority assets decline to participate and develop a road map, with time frames and milestones, for completing this effort;

• develop time frames and specific milestones for managing DHS's efforts to ensure the timely delivery of the results of security surveys and vulnerability assessments to asset owners and operators;

• revise its plans to include when and how SSAs will be engaged in designing, testing, and implementing DHS's web-based tool to address and mitigate any SSA concerns that may arise before the tool is finalized;

• develop a road map with time frames and specific milestones for reviewing the information it gathers from asset owners and operators to determine if follow-up visits should remain at 180 days for security surveys and whether additional follow-ups are appropriate at intervals beyond the follow-ups initially performed; and

• consider the feasibility of expanding the follow-up program to gather and act upon data, as appropriate, on (1) security enhancements that are ongoing and planned that are attributable to DHS security surveys and vulnerability assessments and (2) factors, such as cost and perceptions of threat, that influence asset owner and operator decisions to make, or not make, enhancements based on the results of DHS security surveys and vulnerability assessments.

We provided a draft of this report to the Secretary of Homeland Security for review and comment. In its written comments reprinted in appendix IV, DHS agreed with all seven of the recommendations; however, its implementation plans do not fully address two of these seven recommendations, and it is unclear to what extent its plans will address two other recommendations. With regard to the first recommendation that DHS develop plans to resolve issues associated with data inconsistencies between its databases, DHS stated its efforts to assign unique identifiers to assets on the high-priority list that have received security surveys and vulnerability assessments will make matching easier and that other quality assurance processes have been implemented to better verify individual asset data.
We agree these are positive steps; however, to fully address the recommendation, we believe DHS should develop a plan with time frames and milestones that specify how the steps it says it is taking address the data inconsistencies we cited, and demonstrate the results—how many high-priority assets received security surveys, vulnerability assessments, or both in a given year—of that effort. By doing so, DHS would be better positioned to provide a more complete picture of its approach for developing and completing these tasks. It would also provide DHS managers and other decision makers with insights into (1) IP’s overall progress in completing these tasks and (2) a basis for determining what, if any, additional actions need to be taken. With regard to the second recommendation that DHS institutionalize realistic performance goals for levels of participation in security surveys and vulnerability assessments by high-priority assets, DHS stated that the participation of high-priority assets continues to be a concern but reiterated its view that the voluntary nature of its programs and competing priorities makes setting goals for high-priority participation difficult. DHS stated that its fiscal year 2012 Project Management Plans for Protective Security Advisor and Vulnerability Assessment Projects established realistic goals concerning the total number of assessments to be conducted. However, they said these plans do not include goals for assessments performed at high-priority assets. Furthermore, DHS stated the shift in emphasis to regional resilience suggested metrics and goals intended to measure the participation of high-priority assets in vulnerability assessments and surveys may not be a strong or accurate indicator of the degree to which DHS is achieving its infrastructure protection and resilience goals. We agree that the voluntary nature of these programs and changing priorities make the process of setting goals difficult. 
However, the NIPP and DHS guidance emphasize the importance of high-priority participation in these programs, and DHS can take factors like the voluntary nature of the program and DHS's shift toward regional resilience into account when setting realistic goals for the number of security surveys and vulnerability assessments it conducts at high-priority facilities. By establishing realistic performance goals for levels of participation by high-priority assets, DHS would be better positioned to compare actual performance against expected results and develop strategies for overcoming differences or adjust its goals to more realistically reflect the challenges it faces. With regard to the third recommendation that DHS design and implement a mechanism for systematically assessing why owners and operators of high-priority assets decline to participate and develop a road map, with time frames and milestones, for completing this effort, DHS stated it recognizes that additional clarification and guidance are needed to ensure effective implementation of existing guidance. Specifically, DHS stated it will review and revise the guidance to (1) determine if revisions to the existing process are required and (2) develop supplementary guidance to aid PSAs in executing the process. DHS stated it will initiate this review in the fourth quarter of fiscal year 2012, after which time it will develop additional milestones for mechanism improvement. We believe that DHS's proposed actions appear to be a step in the right direction, but it is too early to tell whether DHS's actions will result in an improved mechanism for systematically assessing why owners and operators decline to participate.
Regarding the fourth recommendation to develop time frames and specific milestones for managing its efforts to improve the timely delivery of the results of security surveys and vulnerability assessments to asset owners and operators, DHS stated it is working with contractors and program staff to advance the processes and protocols governing the delivery of assessment and survey products to facilities. DHS also stated that it had begun a review of assessments lacking delivery dates in LENS and is working with PSAs to populate the missing information. In addition, DHS noted that its plan to transition to a web-based dashboard system will help mitigate the issue of timely report delivery by eliminating the need for in-person delivery of the dashboard product. However, DHS did not discuss time frames and milestones for completing these efforts. Thus, it is unclear to what extent DHS's actions will fully address this recommendation. As noted in our report, developing time frames and milestones for completing improvements that govern the delivery of the results of surveys and assessments would provide insights into IP's overall progress. With regard to the fifth recommendation to revise its plans to include when and how SSAs will be engaged in designing, testing, and implementing DHS's web-based tool, DHS stated that it is currently taking actions to develop and test a web-based dashboard tool for individual owners and operators, which is expected to be widely available in January 2013. DHS stated that it anticipates the development of a state and local "view," or dashboard, following the successful deployment of the web-based owner and operator dashboards. Regarding SSAs, DHS stated that a concept for a sector-level view of assessment data has been proposed and that the requirements and feasibility of such a dashboard will be explored more fully following the completion of the state-level web-based dashboard.
DHS noted that IP will engage the SSAs to determine any associated requirements. DHS's proposed actions appear to be a step in the right direction. However, given that the sector-level view of assessment data is in the proposal stage and further action will be explored more fully after completion of the state-level web-based dashboard, it is too early to tell when and how SSAs will be engaged in designing, testing, and implementing the web-based tool. In response to the sixth recommendation to develop a road map with time frames and specific milestones to determine if follow-up visits should remain at 180 days for security surveys, and whether additional follow-ups are appropriate at intervals beyond the follow-ups initially performed, DHS stated it will analyze and compare security survey follow-up results in early calendar year 2013 to determine whether modifications are required. DHS also stated that given that the 365-day follow-up process went into effect in January 2011, the first follow-up evaluations of vulnerability assessments have only recently begun and IP will collect, at a minimum, 1 year of vulnerability assessment data. DHS said that IP intends to review the results for both the security survey 180-day follow-up and the 365-day follow-up in early calendar year 2013 to determine whether modifications to the follow-up intervals are required. DHS's proposed actions are consistent with the intent of this recommendation. In response to the seventh recommendation to consider the feasibility of gathering and acting upon additional data, where appropriate, on (1) ongoing or planned enhancements attributable to security surveys and assessments and (2) factors that influence asset owner and operator decisions to make or not make security enhancements, DHS stated that it collects information on ongoing or planned enhancements.
However, as noted in the report, DHS does not collect information that would show whether these enhancements are attributable to security surveys and assessments. DHS also stated that IP will continue to work with Argonne National Laboratory and field personnel to determine the best method for collecting information related to those factors influencing an asset’s decision to implement or not implement a new protective measure or security enhancement. However, it is not clear to what extent DHS’s actions will fully address this recommendation: DHS did not discuss whether it will consider the feasibility of gathering data on whether ongoing or planned enhancements are attributable to security surveys and assessments, or how it will act upon the data it currently gathers or plans to gather to, among other things, measure performance in the context of the progression of the task, consistent with the NIPP. By gathering and analyzing data on why an asset owner or operator chooses to make, or not make, improvements to security, DHS would be better positioned to understand the obstacles asset owners face when making investments. DHS also provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Homeland Security, the Under Secretary for the National Protection and Programs Directorate, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-8777 or caldwells@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V.
This appendix provides information on the 18 critical infrastructure sectors and the federal agencies responsible for sector security. The National Infrastructure Protection Plan (NIPP) outlines the roles and responsibilities of the Department of Homeland Security (DHS) and its partners—including other federal agencies. Within the NIPP framework, DHS is responsible for leading and coordinating the overall national effort to enhance protection across the 18 critical infrastructure and key resources (CIKR) sectors. Homeland Security Presidential Directive (HSPD) 7 and the NIPP assign responsibility for CIKR sectors to sector-specific agencies (SSA). As an SSA, DHS has direct responsibility for leading, integrating, and coordinating efforts of sector partners to protect 11 of the 18 CIKR sectors. The remaining sectors are coordinated by eight other federal agencies. Table 2 lists the SSAs and their sectors. To meet our first objective—determine the extent to which DHS has taken action to conduct security surveys and vulnerability assessments among high-priority CIKR—we reviewed DHS guidelines on the promotion and implementation of the security surveys and vulnerability assessments, records of outreach to CIKR stakeholders regarding these tools, and various DHS documents, including DHS’s National Critical Infrastructure and Key Resources Protection Annual Report, on efforts to complete security surveys and vulnerability assessments. We also interviewed officials in the Protective Security Coordination Division, which is part of the Office of Infrastructure Protection (IP) in DHS’s National Protection and Programs Directorate, who are responsible for managing and administering DHS’s security surveys and vulnerability assessments, to learn about the actions they took to conduct these programs.
We obtained and analyzed DHS data on the conduct of voluntary programs for fiscal years 2009 through 2011, which are maintained in DHS’s Link Encrypted Network System (LENS) database, and compared those records with the National Critical Infrastructure Prioritization Program (NCIPP) lists of high-priority CIKR assets to determine the extent to which DHS performed security surveys and vulnerability assessments at high-priority assets. To assess the reliability of the data, we reviewed existing documentation about the data systems and spoke with knowledgeable agency officials responsible for matching the two databases to discuss the results of our comparison and to learn about their efforts to match LENS data with the NCIPP lists. While the information in each database was sufficiently reliable for the purposes of providing a general overview of the program, issues with the comparability of information in each database exist, which are discussed in this report. To do our comparison, we used a Statistical Analysis System (SAS) program to match the different data sets and summarize the results. Because assets in the LENS database and NCIPP lists did not share common formats or identifiers that allowed us to easily match them, we had to match the data based on asset names and addresses. However, names and addresses were generally not entered in a standardized way, so we had to develop a process to standardize the available information and identify potential matches based on similar names or addresses. In our attempt to match the data sets, we did the following:

- Standardized the date formats for fields that tracked when assessments were conducted (dates across lists might have formats such as 01/01/10 or 1/1/2010 and needed to be standardized to ensure appropriate matching within certain time frames).
- Standardized the labels for sectors (across data sets, a sector might be listed as Chemical & Hazardous Materials Industry, Chemical and Hazardous Materials Indus, or Chemical).
- Standardized state fields (across data sets, a state might be listed as Alabama or AL).
- Identified exact matches between the data sets on the asset name and the state name.
- Identified potential matches between the data sets based on asset name, asset address, and state. Specifically, we used a SAS function (SPEDIS), which measures asymmetric spelling distance between words, to determine the likelihood that names and addresses from two data sets matched and to generate possible pairs of matching assets.

The possible matches for an asset were written to a spreadsheet, which we reviewed to determine whether each was a true match. As noted in the report, the inconsistencies between the data sets prevented us from determining definitively the extent to which assets on one list were also present in the other. For example, in some cases assets appeared to be potential matches but differed in the sector listed or had inconsistent company names and addresses. Thus we report separately on assets that were exact matches based on asset name and those that were potential matches. We also examined the inconsistencies we found with respect to DHS’s guidance on gathering data on participation in the security surveys and vulnerability assessments and compared the findings to the criteria in Standards for Internal Control in the Federal Government. We also compared the results of our analyses with GAO reports on performance measurement, including ways to use program data to measure results.
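The standardize-then-fuzzy-match process described above can be sketched in Python. This is an illustrative stand-in only: it uses the standard library's difflib similarity ratio rather than SAS's SPEDIS spelling-distance function, and the asset records, state table, and threshold are hypothetical.

```python
from difflib import SequenceMatcher

def standardize(name: str) -> str:
    """Normalize a name field: uppercase, strip punctuation, collapse whitespace."""
    cleaned = "".join(ch if ch.isalnum() or ch.isspace() else " " for ch in name.upper())
    return " ".join(cleaned.split())

def similarity(a: str, b: str) -> float:
    """Rough analogue of a spelling-distance score, scaled 0.0 (no match) to 1.0."""
    return SequenceMatcher(None, standardize(a), standardize(b)).ratio()

# Hypothetical records from the two databases: (asset name, state field).
lens = [("Chemical & Hazardous Materials Industry", "AL")]
ncipp = [("Chemical and Hazardous Materials Indus", "Alabama")]

STATE_ABBREV = {"Alabama": "AL"}  # standardize state fields (Alabama vs. AL)

def potential_matches(threshold: float = 0.8):
    """Pair up records whose standardized states agree and names score above the threshold."""
    pairs = []
    for lens_name, lens_state in lens:
        for ncipp_name, ncipp_state in ncipp:
            state = STATE_ABBREV.get(ncipp_state, ncipp_state)
            if state == lens_state and similarity(lens_name, ncipp_name) >= threshold:
                pairs.append((lens_name, ncipp_name))
    return pairs

print(potential_matches())
```

As in the methodology above, pairs that clear the threshold would then be written out for analyst review rather than accepted automatically, since a high similarity score is only evidence of a potential match.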
In addition, to address the first objective, we interviewed representatives—asset owners and operators—at 10 selected assets, also known as facilities, in 4 of the 18 sectors—the water, dams, commercial facilities, and energy sectors—to discuss their views on DHS efforts to work with asset owners and operators and conduct DHS’s voluntary security surveys and vulnerability assessments. We also contacted industry association representatives from the 4 sectors to discuss their views on DHS efforts to promote and conduct these activities. We selected these asset and industry representatives to take into account (1) sectors with a mix of regulations related to security; (2) sectors where DHS’s IP and non-DHS agencies are the SSAs—DHS for the commercial facilities sector and dams sector, the Department of Energy for the energy sector, and the Environmental Protection Agency for the water sector; (3) sectors where security surveys and vulnerability assessments had been conducted; and (4) geographic dispersion. We selected three states—California, New Jersey, and Virginia—where, based on our preliminary review of DHS’s LENS database and the NCIPP lists, security surveys and vulnerability assessments may have been performed at high-priority assets. At these assets, we, among other things, focused on the role of protective security advisors (PSA), who serve as liaisons between DHS and security stakeholders, including asset owners and operators, in local communities. We also reviewed PSA program guidance and interviewed 4 of 88 PSAs—PSAs from California, New Jersey, and the National Capital Region (encompassing Washington, D.C., suburban Virginia, and suburban Maryland)—to discuss their roles and responsibilities in partnering with asset owners and operators and in promoting security surveys and vulnerability assessments.
While the results of our interviews cannot be generalized to reflect the views of all asset owners and operators and PSAs nationwide, the information obtained provided insights into DHS efforts to promote participation in its security survey and vulnerability assessment programs. We also conducted a survey of 83 of 88 PSAs—those who, based on lists provided by DHS officials, had been in their positions for at least 1 year. We conducted the survey to gather information on PSAs’ efforts to promote and implement security surveys and vulnerability assessments and to identify challenges PSAs face when conducting these activities. GAO staff familiar with the critical infrastructure protection subject matter designed draft questionnaires in close collaboration with a social science survey specialist. We conducted pretests with three PSAs to help further refine our questions, develop new questions, clarify any ambiguous portions of the survey, and identify any potentially biased questions. We launched our web-based survey on October 3, 2011, and received all responses by November 18, 2011. Log-in information for the web-based survey was e-mailed to all participants. We sent one follow-up e-mail message to all nonrespondents 2 weeks later and received responses from 80 out of 83 PSAs surveyed (96 percent). Because the survey was conducted with all eligible PSAs, there are no sampling errors. However, the practical difficulties of conducting any survey may introduce nonsampling errors. For example, differences in how a particular question is interpreted, the sources of information available to respondents, or the types of people who do not respond can introduce unwanted variability into the survey results. We included steps in both the data collection and data analysis stages to minimize such nonsampling errors. We collaborated with a GAO social science survey specialist to design draft questionnaires, and versions of the questionnaire were pretested with three PSAs.
In addition, we provided a draft of the questionnaire to DHS’s IP for review and comment. From these pretests and reviews, we made revisions as necessary. We examined the survey results and performed computer analyses to identify inconsistencies and other indications of error. A second independent analyst checked the accuracy of all computer analyses. Regarding our second objective—to determine the extent to which DHS shared the results of security surveys and vulnerability assessments with asset owners and operators and SSAs—we reviewed available DHS guidelines and reports on efforts to share security survey and vulnerability assessment results with stakeholders and compared DHS’s sharing of information with standards in the NIPP. We accessed, downloaded, and analyzed LENS data for information regarding the asset owners and operators that participated in DHS security surveys and vulnerability assessments during fiscal years 2009 through 2011. To assess the reliability of the data, we spoke with knowledgeable agency officials about their quality assurance process. During the course of our review, DHS began taking action to clean up the data and address some of the data inconsistencies we discuss in this report. We found the data to be sufficiently reliable for providing a general overview of the program, but issues with missing information in the LENS database exist and are discussed in this report. We compared the results of our analysis with DHS criteria regarding the timeliness of security surveys and vulnerability assessments, criteria in Standards for Internal Control in the Federal Government, and the NIPP.
We also used the LENS database, the NCIPP lists, and DHS documentation showing all assets that had received a security survey or a vulnerability assessment to select a nonrandom sample of high-priority assets from 4 sectors—the commercial facilities, dams, energy, and water sectors—and spoke with representatives from these selected assets to garner their opinions on the value of these voluntary programs and how they used the information DHS shared with them. In addition, we reviewed the 2009 and 2010 sector annual reports and the 2010 sector-specific plans for all CIKR sectors to assess if and how results of the security surveys and vulnerability assessments were included. We also interviewed SSA officials from our 4 selected sectors to learn what information DHS shared with them and how that information was used, and to discuss their overall relationship with DHS with respect to receiving and using data from DHS security surveys and vulnerability assessments. While the results of these interviews cannot be generalized to all SSAs, the results provided us with valuable insight into the dissemination and usefulness of information DHS provided from security surveys and vulnerability assessments. Furthermore, we interviewed DHS officials regarding their efforts to enhance the information they provide to SSAs from security surveys and vulnerability assessments. With regard to our third objective—determine the extent to which DHS assessed the effectiveness of the security survey and vulnerability assessment programs, including any action needed to improve DHS’s management of the programs—we reviewed DHS documents and our past reports, and DHS Office of Inspector General (OIG) reports on DHS efforts to assess the effectiveness of its programs.
We interviewed DHS officials and reviewed DHS guidelines on procedures for following up with asset owners and operators that have participated in these programs, and we discussed the results of DHS efforts to conduct these follow-ups. We also (1) examined DHS documents that discussed the results of DHS efforts to conduct follow-ups and (2) analyzed the instrument used to contact owners and operators, as well as the questions asked, to assess its effectiveness. In addition, we analyzed available data on DHS efforts to perform follow-ups for the period from January 2011 through September 30, 2011, and compared DHS data with DHS guidelines that discussed the number of days DHS officials were to begin follow-ups after providing the results of security surveys and vulnerability assessments to asset owners and operators. We also compared the results of our work with criteria in Standards for Internal Control in the Federal Government and the NIPP, particularly those related to performance measurement. Finally, we spoke to CIKR officials in our sample sectors to learn how DHS personnel in the field had followed up on security surveys and vulnerability assessments and whether asset owners and operators were making changes based on the results and, if not, why. We conducted this performance audit from June 2011 through May 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
This appendix provides information on our survey of Protective Security Advisors, which we used to gather information on efforts to promote and implement the voluntary programs offered by DHS and the challenges faced when conducting security surveys and vulnerability assessments. We conducted a web-based survey of all 83 Protective Security Advisors who had been in their positions for at least one year. We received responses from 80, for a response rate of 96 percent. Our survey was composed of closed- and open-ended questions. In this appendix, we include all the survey questions and aggregate results of responses to the closed-ended questions; we do not provide information on responses provided to the open-ended questions. Percentages may not total to 100 due to rounding. For a more detailed discussion of our survey methodology, see appendix II. 1. Please provide the following information about the Protective Security Advisor responsible for completing this questionnaire. Number of years as a PSA (Round up to nearest year) 2. Did you receive the Enhanced Critical Infrastructure Protection (ECIP) Initiative Standard Operating Procedures (SOP) guidance dated February 2011? 3. (If yes to Q2) How useful did you find the ECIP SOP guidance for promoting ECIPs? If you answered "slightly useful" or "not at all useful", please explain why: 4. (If yes to Q2) How useful did you find the ECIP SOP guidance for conducting ECIPs? If you answered "slightly useful" or "not at all useful", please explain why: 5. Did you receive training on the Enhanced Critical Infrastructure Protection (ECIP) Initiative program? 6. (If yes to Q5) How useful did you find the ECIP training? If you answered "slightly useful" or "not at all useful", please explain why: 7. In your opinion, how useful is the ECIP Initiative program for reducing risk at CI facilities? Please explain your opinion about the usefulness of the ECIP Initiative program: 8.
In your opinion, how useful is the ECIP Infrastructure Survey Tool (IST) for reducing risk at CI facilities? Please explain your opinion about the usefulness of the ECIP IST: 9. In your opinion, how useful is the ECIP Facility Dashboard for reducing risk at CI facilities? 10. How often have you heard each of the following reasons from facilities who declined to participate in an ECIP site visit? (Select one answer in each row.) a. The facility does not want to participate in additional facility assessments because it is already subject to Federal or State regulation/inspection. b. The facility does not have time or resources to participate. c. Facility owners and operators are not willing to sign Protected Critical Infrastructure Information Express statements due to legal concerns over the protection and dissemination of the data collected. d. The entity that owns/oversees the facility declines to participate as a matter of policy. e. Facility owners and operators have a diminished perception of threat against the facility. f. The facility already received a risk assessment through a private company and participation in the voluntary assessment would be redundant or duplicative. g. Identification of security gaps may render the owner of the facility liable for damages should an incident occur. What other reasons, if any, have you heard for facilities declining ECIP site visits? 11. Have you found that higher priority facilities (Level 1 or 2) are more or less likely to participate in ECIP site visits than lower priority facilities? 12. If you answered somewhat less likely or much less likely, what do you see as the reasons for the lower participation by the higher priority facilities? 13. What factors do you believe are important to facilities considering participating in an ECIP site visit? 14. How often have you heard each of the following reasons from facilities who declined to participate in an ECIP IST? (Select one answer in each row.) a.
The facility does not want to participate in additional facility assessments because it is already subject to Federal or State regulation/inspection. b. The facility does not have time or resources to participate. c. Facility owners and operators are not willing to sign Protected Critical Infrastructure Information Express statements due to legal concerns over the protection and dissemination of the data collected. d. The entity that owns/oversees the facility declines to participate as a matter of policy. e. Facility owners and operators have a diminished perception of threat against the facility. f. The facility already received a risk assessment through a private company and participation in the voluntary assessment would be redundant or duplicative. g. Identification of security gaps may render the owner of the facility liable for damages should an incident occur. h. Facility's security program is not yet mature enough to benefit from participation. What other reasons, if any, have you heard for facilities declining to participate in an ECIP IST? 15. Have you found that higher priority facilities (Level 1 or 2) are more or less likely to participate in ECIP ISTs than lower priority facilities? 16. If you answered somewhat less likely or much less likely, what do you see as the reasons for the lower participation by the higher priority facilities? 17. How much of an incentive do you believe each of the following are for encouraging participation in an ECIP IST? (Select one answer in each row.) d. Appeal to public service (patriotic duty) If you responded not applicable to any of the sectors above, please explain. 20. Are you aware of any factors that drive differing levels of participation in the voluntary ECIP Initiative program by sector? Please explain. 21. In your opinion, how useful are SAVs as a tool for reducing risk at CI facilities? Please explain your opinion about the usefulness of SAVs: 22.
How often have you heard each of the following reasons from facilities who declined to participate in a SAV? (Select one answer in each row.) a. The facility does not want to participate in additional facility assessments because it is already subject to Federal or State regulation/inspection. b. The facility does not have time or resources to participate. c. Facility owners and operators are not willing to sign Protected Critical Infrastructure Information Express statements due to legal concerns over the protection and dissemination of the data collected. d. The entity that owns/oversees the facility declines to participate as a matter of policy. e. Facility owners and operators have a diminished perception of threat against the facility. f. The facility already received a risk assessment through a private company and participation in the voluntary assessment would be redundant or duplicative. g. Identification of security gaps may render the owner of the facility liable for damages should an incident occur. h. Facility's security program is not yet mature enough to benefit from participation. If you responded not applicable to any of the sectors above, please explain. 28. Are you aware of any factors that drive differing levels of participation in the voluntary SAV program by sector? Please explain. 29. What challenges, if any, do you face when implementing voluntary CI protection programs associated with ECIPs and SAVs? 30. Are you ready to submit your final completed survey to GAO? (This is equivalent to mailing a completed paper survey to us. It tells us that your answers are official and final.) Yes, my survey is complete - To submit your final responses, please click on "Exit" below. No, my survey is not yet complete - To save your responses for later, please click on "Exit" below. You may view and print your completed survey by clicking on the Summary link in the menu to the left. In addition to the contact named above, John F.
Mortin, Assistant Director, and Anthony DeFrank, Analyst-in-Charge, managed this assignment. Andrew M. Curry, Katherine M. Davis, Michele C. Fejfar, Lisa L. Fisher, Mitchell B. Karpman, Thomas F. Lombardi, and Mona E. Nichols-Blake made significant contributions to the work. Critical Infrastructure Protection: DHS Has Taken Action Designed to Identify and Address Overlaps and Gaps in Critical Infrastructure Security Activities. GAO-11-537R. Washington, D.C.: May 19, 2011. Critical Infrastructure Protection: DHS Efforts to Assess and Promote Resiliency Are Evolving but Program Management Could Be Strengthened. GAO-10-772. Washington, D.C.: September 23, 2010. Critical Infrastructure Protection: Update to National Infrastructure Protection Plan Includes Increased Emphasis on Risk Management and Resilience. GAO-10-296. Washington, D.C.: March 5, 2010. The Department of Homeland Security’s (DHS) Critical Infrastructure Protection Cost-Benefit Report. GAO-09-654R. Washington, D.C.: June 26, 2009. Information Technology: Federal Laws, Regulations, and Mandatory Standards for Securing Private Sector Information Technology Systems and Data in Critical Infrastructure Sectors. GAO-08-1075R. Washington, D.C.: September 16, 2008. Risk Management: Strengthening the Use of Risk Management Principles in Homeland Security. GAO-08-904T. Washington, D.C.: June 25, 2008. Critical Infrastructure: Sector Plans Complete and Sector Councils Evolving. GAO-07-1075T. Washington, D.C.: July 12, 2007. Critical Infrastructure Protection: Sector Plans and Sector Councils Continue to Evolve. GAO-07-706R. Washington, D.C.: July 10, 2007. Critical Infrastructure: Challenges Remain in Protecting Key Sectors. GAO-07-626T. Washington, D.C.: March 20, 2007. Homeland Security: Progress Has Been Made to Address the Vulnerabilities Exposed by 9/11, but Continued Federal Action Is Needed to Further Mitigate Security Risks. GAO-07-375. Washington, D.C.: January 24, 2007.
Critical Infrastructure Protection: Progress Coordinating Government and Private Sector Efforts Varies by Sectors’ Characteristics. GAO-07-39. Washington, D.C.: October 16, 2006. Information Sharing: DHS Should Take Steps to Encourage More Widespread Use of Its Program to Protect and Share Critical Infrastructure Information. GAO-06-383. Washington, D.C.: April 17, 2006. Risk Management: Further Refinements Needed to Assess Risks and Prioritize Protective Measures at Ports and Other Critical Infrastructure. GAO-06-91. Washington, D.C.: December 15, 2005. Protection of Chemical and Water Infrastructure: Federal Requirements, Actions of Selected Facilities, and Remaining Challenges. GAO-05-327. Washington, D.C.: March 28, 2005. Homeland Security: Agency Plans, Implementation, and Challenges Regarding the National Strategy for Homeland Security. GAO-05-33. Washington, D.C.: January 14, 2005.
Natural disasters, such as Hurricane Katrina, and terrorist attacks, such as the 2005 bombings in London, highlight the importance of protecting CIKR—assets and systems vital to the economy or health of the nation. DHS issued the NIPP in June 2006 (updated in 2009) to provide the approach for integrating the nation’s CIKR. Because the private sector owns most of the nation’s CIKR—for example, energy production facilities—DHS encourages asset owners and operators to voluntarily participate in surveys or vulnerability assessments of existing security measures at those assets. This includes nationally significant CIKR that DHS designates as high priority. In response to a request, this report assesses the extent to which DHS has (1) taken action to conduct surveys and assessments among high-priority CIKR, (2) shared the results of these surveys and assessments with asset owners or operators, and (3) assessed the effectiveness of surveys and assessments and identified actions taken, if any, to improve them. GAO, among other things, reviewed laws, analyzed data identifying high-priority assets and activities performed from fiscal years 2009 through 2011, and interviewed DHS officials. The Department of Homeland Security (DHS) has conducted about 2,800 security surveys and vulnerability assessments on critical infrastructure and key resources (CIKR). DHS directs its protective security advisors to contact owners and operators of high-priority CIKR to offer to conduct surveys and assessments. However, DHS is not positioned to track the extent to which these are performed at high-priority CIKR because of inconsistencies between the databases used to identify these assets and those used to identify surveys and assessments conducted.
GAO compared the two databases and found that of the 2,195 security surveys and 655 vulnerability assessments conducted for fiscal years 2009 through 2011, 135 surveys and 44 assessments matched and another 106 surveys and 23 assessments were potential matches for high-priority facilities. GAO could not match additional high-priority facilities because of inconsistencies in the way data were recorded in the two databases; for example, assets with the same company name had different addresses, or an asset at one address had different names. DHS officials acknowledged that the data did not match and have begun to take actions to improve the collection and organization of the data. However, DHS does not have milestones and timelines for completing these efforts consistent with standards for project management. By developing a plan with time frames and milestones consistent with these standards, DHS would be better positioned to provide a more complete picture of its progress. DHS shares the results of security surveys and vulnerability assessments with asset owners or operators but faces challenges doing so. A GAO analysis of DHS data from fiscal year 2011 showed that DHS was late meeting its (1) 30-day time frame—required by DHS guidance—for delivering the results of its security surveys in 60 percent of instances and (2) 60-day time frame—expected by DHS managers—for delivering the results of its vulnerability assessments in 84 percent of instances. DHS officials acknowledged the late delivery of survey and assessment results and said they are working to improve processes and protocols. However, DHS has not established a plan with time frames and milestones for managing this effort consistent with the standards for project management.
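The timeliness analysis described above amounts to computing the share of deliveries that exceeded a deadline. A minimal sketch, using hypothetical completion and delivery dates rather than DHS's actual LENS records:

```python
from datetime import date

# Hypothetical (survey_completed, results_delivered) pairs; illustrative only.
deliveries = [
    (date(2011, 1, 10), date(2011, 3, 1)),   # 50 days elapsed -> late
    (date(2011, 2, 1),  date(2011, 2, 20)),  # 19 days elapsed -> on time
    (date(2011, 3, 5),  date(2011, 4, 30)),  # 56 days elapsed -> late
]

DEADLINE_DAYS = 30  # DHS guidance: security survey results due within 30 days

def percent_late(pairs, deadline=DEADLINE_DAYS):
    """Return the percentage of deliveries exceeding the deadline."""
    late = sum(1 for done, sent in pairs if (sent - done).days > deadline)
    return 100.0 * late / len(pairs)

print(f"{percent_late(deliveries):.0f}% delivered late")
```

The same function applied with a 60-day deadline to vulnerability assessment records would yield the second figure cited in the analysis.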
Also, the National Infrastructure Protection Plan (NIPP), which emphasizes partnering and voluntary information sharing, states that CIKR partners need to be provided with timely and relevant information that they can use to make decisions. Developing a plan with time frames and milestones for improving timeliness could help DHS provide asset owners and operators with the timely information they need to consider security enhancements. DHS uses a follow-up tool to assess the results of security surveys and assessments performed at CIKR assets and is considering upgrades to the tool. However, DHS could better measure results and improve program management by capturing additional information. For example, key information, such as why certain improvements were or were not made by asset owners and operators that have received security surveys, could help DHS improve its efforts. Further, information on barriers to making improvements—such as the cost of security enhancements—could help DHS better understand asset owners and operators’ rationale in making decisions and thereby help improve its programs. Taking steps to gather additional information could help keep DHS better informed for making decisions in managing its programs. GAO recommends that, among other things, DHS develop plans for its efforts to improve the collection and organization of data and the timeliness of survey and assessment results, and gather and act upon additional information from asset owners and operators about why improvements were or were not made. DHS concurred with the recommendations.
Long-term care includes services provided to individuals who have a cognitive impairment or who, because of illness or disability, are unable to perform certain activities of daily living (ADL)—such as bathing, dressing, and eating—for an extended period of time. These services may be provided in various settings, such as nursing facilities, an individual’s home, or the community. Long-term care can be expensive, especially when provided in nursing facilities. In 2006, the average cost of a year of nursing facility care in a private room was about $75,000. The average hourly rate for a home health aide in that same year was $19; as a result, 10 hours of such care a week would average close to $10,000 a year. LTCI helps pay for the costs associated with long-term care services. Individuals can purchase LTCI policies from insurance companies or through employers or other groups. As of 2002, individual policies represented approximately 80 percent of the market, with policies purchased through employers representing most of the remaining 20 percent. The average age of consumers purchasing individual policies has decreased over time, from 68 in 1990 to 61 in 2005. The number of LTCI policies sold has been relatively small—about 9 million as of the end of 2002, the most recent year of data available—with less than 10 percent of people aged 50 and older purchasing LTCI in the majority of states. Companies generally structure their LTCI policies around certain types of benefits and related options. A policy with comprehensive coverage pays for long-term care in nursing facilities as well as for care in home and community settings, while other policies may only provide coverage for care in one setting. While 63 percent of policies sold in 1990 covered care in nursing facilities only, over time there has been a shift to comprehensive policies, which represented 90 percent of policies sold in 2005. 
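The home health aide figure above can be verified with simple arithmetic. The sketch below uses the report’s 2006 hourly rate and example level of care; the 52-week year is an assumption, since the report does not state one:

```python
# Back-of-the-envelope check of the report's home health aide cost figure.
HOURLY_RATE = 19       # average hourly rate for a home health aide, 2006 (from the report)
HOURS_PER_WEEK = 10    # level of care used in the report's example
WEEKS_PER_YEAR = 52    # assumption: care is needed every week of the year

annual_cost = HOURLY_RATE * HOURS_PER_WEEK * WEEKS_PER_YEAR
print(f"${annual_cost:,} per year")  # $9,880 per year, i.e., "close to $10,000"
```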
A daily benefit amount specifies the amount a policy will pay on a daily basis toward the cost of care, while a benefit period specifies the overall length of time a policy will pay for care. Data on policies sold in 1995, 2000, and 2005 show that maximum daily benefits range from less than $30 to well over $100 per day, while benefit periods can range from 1 year to lifetime coverage. A policy’s elimination period establishes the length of time a policyholder who has begun to receive long-term care has to wait before his or her insurance will begin making payments toward the cost of care. For policies sold in 2005, the elimination period was generally from 1 to 3 months. Inflation protection increases the maximum daily benefit amount covered by the policy and helps ensure that over time the daily benefit remains commensurate with the costs of care. Data from 2005 show that over three-quarters of consumers that year chose some form of inflation protection, up from less than half in 2000. To receive benefits claimed under an LTCI policy, the consumer must not only obtain the covered services, but must also meet what are commonly referred to as benefit triggers. Most policies provide benefits under two circumstances: (1) the consumer has a specified degree of functional disability, that is, he or she cannot perform a certain number of ADLs without assistance, or (2) the consumer requires supervision because of a cognitive impairment, such as Alzheimer’s disease. In addition, benefit payments do not begin until the policyholder has met the benefit triggers for the length of the elimination period, such as 30 or 90 days. Determining whether a consumer has met the benefit triggers to begin receiving claimed benefits can be complex, and companies’ processes for doing so vary. Some companies rely on physician notes and claim forms. Others use a structured, in-person assessment conducted by a licensed health care practitioner, such as a registered nurse. 
To prove that the care received is covered and the consumer meets the eligibility criteria, consumers or those acting on their behalf must provide several types of documentation, such as a plan of care written by a licensed practitioner outlining the services that are appropriate and required to address the claimant’s conditions and an itemized bill for the care provided. Ensuring that services are covered and the consumer is eligible to receive benefits is important for LTCI companies, as the average claim amount for LTCI tends to be high given that benefits are for an extended period of time, often beyond a year. In the event that a consumer’s claim for benefits is denied, the consumer generally can appeal to the insurance company to reconsider the determination. If the company upholds the determination, the consumer can file a complaint with the state insurance department or can seek adjudication through the courts. Many factors affect LTCI premium rates, including the benefits covered and the age and health status of the applicant. For example, companies typically charge higher premiums for comprehensive coverage as compared to policies without such coverage, and consumers pay higher premiums the higher the daily benefit amount, the greater the inflation protection, and the shorter the elimination period. Similarly, premiums typically are more expensive the older the policyholder is at the time of purchase. For example, in California, a 55-year-old purchasing one company’s 3-year, $100 per day comprehensive coverage policy in 2007 would pay about $2,200 per year, whereas a 70-year-old purchasing the same policy would pay about $3,900 per year. Company assumptions about interest rates on invested assets, mortality rates, morbidity rates, and lapse rates—the number of people expected to drop their policies over time—also affect premium rates. A key feature of LTCI is that premium rates are designed—though not guaranteed—to remain level over time. 
Companies calculate premium rates to ensure that the total premiums paid by all consumers who bought a given policy and the interest earned on invested assets over the lifetime of the policy are sufficient to cover costs. While under most states’ laws insurance companies cannot increase premiums for a single consumer because of individual circumstances, such as age or health, companies can increase premiums for entire classes of individuals, such as all consumers with the same policy, if new data indicate that expected claims payments will exceed the class’s accumulated premiums and expected investment returns. Setting LTCI premium rates at an adequate level to cover future costs has been a challenge for some companies. Because LTCI is a relatively new product, companies lacked and may continue to lack sufficient data to accurately estimate the revenue needed to cover costs. For example, according to industry experts, lapse rates, which companies initially based on experience with other insurance products, have proven lower than companies anticipated in initial pricing, which increased the number of people likely to submit claims. As a result, many policies were priced too low and subsequently premiums had to be increased, leading some consumers to cancel coverage. As companies adjust their pricing assumptions, for example, lowering the lapse rates assumed in pricing, initial premiums may be higher but the likelihood of future rate increases may also be reduced. Oversight of the LTCI industry is largely the responsibility of states. Through laws and regulations, states establish standards governing LTCI and give state insurance departments the authority to enforce those standards. Many states’ laws and regulations reflect standards set out in model laws and regulations developed by NAIC. These models are intended to assist states in formulating their laws and policies to regulate insurance, but states can choose to adopt them or not. 
In 1986 NAIC adopted the Long-Term Care Insurance Model Act and, in 1987, the Long-Term Care Insurance Model Regulation; these models suggest the minimum standards states should adopt for regulating LTCI. In addition to the LTCI models, other NAIC insurance models, such as the Unfair Life, Accident, and Health Claims Settlement Practices Model Regulation, address unfair claims settlement practices across multiple lines of insurance, including LTCI. NAIC has revised its models over time to address emerging issues in the industry, including revisions made to its LTCI model regulation in 2000 designed to improve rate stability. Beyond implementing pertinent laws and regulations, state regulators perform a variety of oversight tasks that are intended to protect consumers from unfair practices. These activities include reviewing policy rates and forms, conducting market conduct examinations, and responding to consumer complaints. In reviewing rates and forms, state regulators examine a policy’s price, terms, and conditions to ensure that they are consistent with state laws and regulations. This includes reviewing the company’s pricing assumptions, such as lapse rates. Some states allow companies to begin selling policies before receiving approval for price and policy terms, while others require prior approval before policies can be sold. A small number of states do not require companies to submit rates for review. When conducting a market conduct examination, an examiner visits a company to evaluate practices and procedures, such as claims settlement practices, and checks those practices and procedures against information in the company’s files. Consumer complaints generally lead states to request information from the company in question. The state reviews the company’s response for consistency with the policy contract and for violations of insurance laws and regulations. 
Although oversight of the LTCI industry is largely the responsibility of states, the federal government also plays a role in the oversight of LTCI. HIPAA established federal standards that affect the LTCI industry as well as consumers purchasing policies by specifying conditions under which LTCI benefits and premiums would receive favorable federal income tax treatment. Under HIPAA, a tax-qualified policy must cover individuals certified as needing substantial assistance with at least two of the six ADLs for at least 90 days due to a loss of functional capacity, having a similar level of disability, or requiring substantial supervision because of a severe cognitive impairment. Tax-qualified policies under HIPAA must also comply with certain provisions of the NAIC LTCI model act and regulation in effect as of January 1993. For example, tax-qualified LTCI policies must include an offer of inflation protection. The Department of the Treasury, specifically IRS, issued regulations in 1998 implementing some of the HIPAA standards. Under the law and regulations, a policy is tax qualified if it complies with a state law that is the same or more stringent than the analogous federal requirement. According to IRS officials, the agency generally relies on states to ensure that policies marketed as tax qualified meet HIPAA requirements. In 2002, 90 percent of LTCI policies sold were marketed as tax qualified. The same consumer protections established under HIPAA for tax-qualified policies were included in DRA for Partnership policies. However, DRA provides for certain additional consumer protections to be included in Partnership policies. For example, states establishing Partnership programs must ensure that issuers of Partnership policies develop and use suitability standards consistent with the NAIC models. These standards are intended to determine whether LTCI is appropriate for each consumer considering purchasing a policy. 
Although CMS is responsible for approving the amendments to states’ Medicaid plans required to implement long-term care Partnership programs, state insurance departments are responsible for certifying that Partnership policies comply with DRA standards. As of February 2008, 18 states had received CMS approval to begin Partnership programs subject to DRA standards, of which 8 had begun certifying policies. Partnership policies must also comply with state laws and regulations. States are responsible for reviewing Partnership policy forms and rates and overseeing claims settlement practices for companies that issue these policies. In addition to the responsibilities of CMS and IRS in the federal government, OPM has oversight responsibility for the FLTCIP. As of March 2008, the federal program included nearly 220,000 enrollees. The contractor that administers the program must comply with provisions of the 2000 version of the NAIC LTCI models, such as the requirement that consumers be offered certain options in the event of a large rate increase. Policies sold under the federal program are not required to meet state insurance laws and regulations. In recent years, many states have made efforts to improve oversight of rate setting, though some consumers remain more likely to experience rate increases than others. NAIC estimates that since 2000 more than half of all states have adopted new rate setting standards. States that adopted new standards generally moved from a single standard focused on ensuring that rates were not set too high to more comprehensive standards designed primarily to enhance rate stability and provide increased protections for consumers. The more comprehensive standards were based on changes made to NAIC’s LTCI model regulation in 2000. While regulators in most of the 10 states we reviewed told us that they expect these more comprehensive standards will be successful, they noted that more time is needed to know how well the standards will work. 
Regulators from the states in our review also use other standards or practices to oversee rate setting, several of which are intended to keep premium rates more stable. Despite states’ implementation of more comprehensive standards and other oversight efforts intended to enhance rate stability, some consumers may remain more likely to experience rate increases than others. Specifically, consumers may face more risk of a rate increase depending on when they purchased their policy, which company issued it, and which state is reviewing a proposed rate increase on it. NAIC estimates that since 2000 more than half of states nationwide have adopted new rate setting standards for LTCI. States that adopted new standards generally moved from the use of a single standard designed to ensure that premiums were not set too high to the use of more comprehensive standards designed to enhance rate stability and provide other protections for consumers. Prior to 2000, most states used a single, numerical standard when reviewing premium rates. This standard—called the loss ratio—was included in NAIC’s LTCI model regulation. Specifically, NAIC’s pre-2000 model stated that insurance companies must demonstrate an expected loss ratio of at least 60 percent when setting premium rates, meaning that the companies could be expected to spend a minimum of 60 percent of the premium on paying claims. For all policies where initial rates were subject to this loss ratio standard, proposed rate increases are subject to the same standard. While the loss ratio standard was designed to ensure that premium rates were not set too high in relation to expected claims costs, over time NAIC identified two key weaknesses in the standard. First, the standard does not prevent premium rates from being set too low to cover the costs of claims over the life of the policy. 
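The loss ratio test described above amounts to a simple threshold on expected claims relative to expected premiums. The sketch below is illustrative only: the function name and dollar figures are hypothetical, while the 60 percent minimum comes from NAIC’s pre-2000 model regulation as described in the text:

```python
def meets_loss_ratio(expected_claims: float, expected_premiums: float,
                     minimum_ratio: float = 0.60) -> bool:
    """Return True if expected claims are at least the required share of premiums.

    Under NAIC's pre-2000 model, companies had to demonstrate an expected
    loss ratio of at least 60 percent: at least 60 cents of every premium
    dollar expected to go toward paying claims.
    """
    return expected_claims / expected_premiums >= minimum_ratio

# Hypothetical example: $70 of expected claims per $100 of premium passes
# the 60 percent test; $50 per $100 would not.
print(meets_loss_ratio(70, 100))  # True
print(meets_loss_ratio(50, 100))  # False
```

Note that, as the report observes, this test bounds rates only from above relative to claims; it does nothing to prevent premiums from being set too low to cover claims over the life of the policy.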
Second, the standard provides no disincentive for companies to raise rates, and leaves room for companies to gain financially from premium increases. In identifying these two weaknesses, NAIC noted that there have been cases where, under the loss ratio, initial premium rates proved inadequate, resulting in large rate increases and significant loss of LTCI coverage from consumers allowing their policies to lapse. To address the weaknesses in the loss ratio standard as well as to respond to the growing number of premium increases occurring for LTCI policies, NAIC developed new, more comprehensive model rate setting standards in 2000. These more comprehensive standards were designed to accomplish several goals, including improving rate stability. Among other things, the standards established more rigorous requirements companies must meet when setting initial LTCI rates and rate increases. For example, instead of a loss ratio requirement to demonstrate that a proposed premium is not too high, the standards require company actuaries to certify that a premium is adequate to cover anticipated costs over the life of a policy, even under “moderately adverse conditions,” with no future rate increases anticipated. Moderately adverse conditions could include, for example, below average returns on invested assets. To fulfill this requirement, company actuaries must include a margin for error in their pricing assumptions. Several regulators told us that allowing a margin for error may result in higher, but more stable, premium rates over the long term. In addition, while the more comprehensive standards no longer require companies to meet a loss ratio for initial premium rates, they establish a more stringent loss ratio—85 percent—for companies to meet when proposing premium increases. According to NAIC, this new loss ratio is intended to limit the financial benefits companies may gain from a rate increase. 
In addition to improving rate stability, the more comprehensive standards were also designed to inform consumers about the potential for rate increases and provide protections for consumers facing rate increases. To inform consumers about the potential for LTCI rate increases, the more comprehensive standards include, for example, a requirement for companies to disclose past rate increases to consumers applying for LTCI coverage. The standards also establish some additional protections for consumers facing rate increases, including providing certain consumers with the option of reducing their benefits. Table 1 describes selected rate setting standards added to NAIC’s LTCI model regulation in 2000 and the purpose of each standard in more detail. Although a growing number of consumers will be protected by the more comprehensive standards going forward, as of 2006 many consumers had policies that were not protected by these standards. Following the revisions to NAIC’s LTCI model in 2000, many states began to replace their loss ratio standard with more comprehensive rate setting standards based on NAIC’s changes. NAIC estimates that by 2006 more than half of states nationwide had adopted the more comprehensive standards. However, many consumers have policies not protected by the more comprehensive standards, either because they live in states that have not adopted these standards or because they bought policies issued prior to implementation of these standards. For example, as of December 2006, according to our analysis of NAIC and industry information, at least 30 percent of policies in force were issued in states that had not adopted the more comprehensive rate setting standards. Further, in states that have adopted the more comprehensive standards, many policies in force were likely to have been issued before states began adopting these standards in the early 2000s. The extent to which more states will adopt the more comprehensive standards is unclear. 
We found that of the 2 states in our 10-state review that had not adopted these standards as of January 2008, 1 state planned to adopt the standards. A regulator from the other state told us that the state had chosen not to adopt the standards, at least in part because its regulatory environment is already sufficiently rigorous. In states that have not adopted the more comprehensive standards for LTCI policies generally, federal standards for state Partnership programs provide additional protections for consumers purchasing Partnership policies in these states. In expanding authorization for Partnership programs, DRA required that Partnership policies adhere to certain of the rate setting standards added to NAIC’s LTCI model regulation in 2000, such as disclosure of past rate increases to consumers applying for coverage. Other standards, such as actuarial certification, were not required. As of February 2008, CMS reported that 24 states either had an approved Partnership program subject to DRA standards or a request to implement one pending. Of these 24 states, 7 had not implemented at least one of the more comprehensive rate setting standards required by DRA. Regulators from most of the states in our review said that they expect the rate setting standards added to NAIC’s model regulation in 2000 will improve rate stability and provide increased protections for consumers, though regulators also recognized that it is too soon to determine the effectiveness of the standards. Of the states in our review, regulators in all but one of the eight states that had adopted the more comprehensive standards told us that the standards would likely be successful. For example, regulators from one state emphasized that a significant amount of collaboration between regulators, insurance companies, and consumer advocates went into development of the standards. 
However, regulators in these eight states also said that not enough time has passed since implementation to know how well these standards will work, particularly in stabilizing LTCI rates. Some regulators explained that it might be as much as a decade before they are able to assess the effectiveness of these standards. Regulators from one state explained that rate increases on LTCI policies sold in the 1980s did not begin until the late 1990s, when consumers began claiming benefits and companies were faced with the costs of paying their claims. Further, though the more comprehensive standards aim to enhance rate stability, LTCI is still a relatively young product, and initial rates continue to be based on assumptions that may eventually require revision. For example, several company officials told us that estimates of lapse rates and other LTCI pricing assumptions have become more reliable over time. However, officials from some companies also told us that companies still face uncertainties in pricing LTCI, including forecasting investment returns and predicting the cost of long-term care in a delivery system that continues to evolve. State regulators from the 10 states in our review use other standards—beyond those included in NAIC’s LTCI model regulation—or practices to oversee rate setting, including several that are intended to enhance rate stability. Regulators from 3 of the states in our review told us that their state has standards intended to enhance the reliability of data used to justify rate increases. For example, 1 state has a standard that requires companies to justify rate increases using data combined or “pooled” from all policies that offer similar benefits—including data on the premium revenues and claims costs associated with these policies—rather than using only the data on the policy subject to the increase. 
The regulators from this state explained that such a standard improves reliability by normalizing data so that, for example, newer, more adequately priced policies offset older, underpriced policies. Regulators from 2 states in our review also told us that these standards are among their states’ most effective tools for improving rate stability. In addition to standards to enhance the reliability of data used to set rates, some states in our review have standards that limit the extent to which LTCI rates can increase. For example, one of the states we reviewed has a standard in place to cap premium rates at prevailing market rates for policies no longer being sold. Regulators from this state explained that capping premium rates on these policies sets an upper limit that companies can charge when requesting a rate increase. Regulators from another state told us that they have authority to fine companies for instituting cumulative rate increases that exceed a certain cap. Officials from one company confirmed that some states have standards to cap premium increase amounts. Beyond implementing rate setting standards, regulators from all 10 states in our review use their authority to review rates to reduce the size of rate increases or to phase in rate increases over multiple years. For example, state regulators told us that they may require companies to implement smaller increases than requested or negotiate with companies to reach an agreement on a smaller increase. In addition to working to reduce the size of the increases, regulators from some states said that to mitigate the effect of rate increases on consumers they may suggest that a company phase the increase in over multiple years. However, this approach only provides consumers with short-term relief. While state regulators work to reduce the effect of rate increases on consumers, regulators from six states explained that increases can be necessary to maintain companies’ financial solvency. 
Although some states are working to improve oversight of rate setting and to help ensure LTCI rate stability by adopting the more comprehensive standards and through other efforts, there are other reasons why some consumers may remain more likely to experience rate increases than others. In particular, consumers who purchased policies when there were more limited data available to inform pricing assumptions may continue to experience rate increases. Regulators from seven states in our review told us that rate increases are mainly affecting consumers with older policies. For example, regulators from one state told us that there are not as many rate increases proposed for policies issued after the mid-1990s. Regulators in five states explained that incorrect pricing assumptions on older policies are largely responsible for rate increases. Specifically, regulators explained that inaccurate assumptions about the number of consumers who would allow their policies to lapse led to rate increases. Officials from more than one company confirmed that mistakes in pricing older LTCI policies, including overestimating lapse rates, have played a significant role in the rate increases that have occurred. However, officials from one company told us that there are now more data available, including claims data compiled by the industry, increasing the company’s confidence in pricing LTCI. Consumers’ likelihood of experiencing a rate increase also may depend on the company from which they bought their policy. In our review of national data on rate increases by four judgmentally selected companies that together represented 36 percent of the LTCI market in 2006, we found variation in the extent to which they have implemented increases. For example, one company that has been selling LTCI for 30 years has increased rates on multiple policies since 1995, with many of the increases ranging from 30 to 50 percent. 
Another company that has been in the market since the mid-1980s has increased rates on multiple policies since 1991, with increases approved on one policy totaling 70 percent. In contrast, officials from a third company that has been selling LTCI since 1975 told us that the company was implementing its first increase as of February 2008. The company reported that this increase, affecting a number of policies, will range from a more modest 8 to 12 percent. Another company that also instituted only one rate increase explained that in cases where initial pricing assumptions were wrong, the company has been willing to accept lower profit margins rather than increase rates. While past rate increases do not necessarily increase the likelihood of future rate increases, they do provide consumers with information on a company’s record of maintaining stable premiums. Finally, consumers in some states may be more likely to experience rate increases than those in other states, which company officials noted may raise equity concerns. Of the six companies we spoke with, officials from every company that has instituted a rate increase told us that there is variation in the extent to which states approve proposed rate increases. For example, officials from one company told us that when requesting rate increases they have seen some states deny a request and other states approve an 80 percent increase on the same rate request with the same data supporting it. Officials from another company told us that if they filed for a 25 percent increase in all states, they would expect to have varying amounts approved and have some states deny the proposed increase. Officials from two companies noted that such differences across states raise an equity issue for consumers. 
While some company officials told us that initial LTCI premiums are largely the same across states, variation in state approval of rate increases may mean that consumers with the same LTCI policy could face very different premium rates depending on where they live. Though some consumers may face higher increases than others, company officials also told us that they provide options to all consumers facing a rate increase, such as the option to reduce their benefits to avoid all or part of a rate increase. Our review of data on state approvals of rate increases requested by one LTCI company operating nationwide also indicated that consumers in some states may be more likely to experience rate increases. Specifically, since 1995 one company has requested over 30 increases, each of which affected consumers in 30 or more states. While the majority of states approved the full amounts requested in these cases, there was notable variation across states in 18 of the 20 cases in which the request was for an increase of over 15 percent. For example, for one policy, the company requested a 50 percent increase in 46 states, including the District of Columbia. Of those 46 states, over one quarter (14 states) either did not approve the rate increase request (2 states) or approved less than the 50 percent requested (12 states), with amounts approved ranging from 15 to 45 percent. The remaining 32 states approved the full amount requested, though at least 4 of these states phased in the amount by approving smaller rate increases over 2 years. (See fig. 1.) Variation in state approval of rate increase requests may have significant implications for consumers. 
In the above example, if the initial annual premium for the policy was $2,000, for example, consumers would see their annual premium rise by $1,000 in Colorado, a state that approved the full increase requested; increase by only $300 in New York, where a 15 percent increase was approved; and stay level in Connecticut, where the increase was not approved. Although the 14 states approving less than the requested amount were far outnumbered by the 32 states approving the full increase, 3 of those 14 states together represented nearly 20 percent of all active LTCI policies in 2006. To the extent that states with a large share of the LTCI market regularly approve lower rate increases than the amounts requested, more LTCI consumers could experience smaller rate increases. Although state regulators in our 10-state review told us that most rate increases have occurred for policies subject to the loss ratio standard, variation in state approval of proposed rate increases may continue for policies protected by the more comprehensive standards. States may implement the standards differently, and other oversight efforts, such as the extent to which states work with companies, also affect approval of increases. States in our review oversee claims settlement practices by monitoring consumer complaints and conducting market conduct examinations in an effort to ensure that companies are complying with claims settlement standards. Claims settlement standards in these states largely focus on timeliness, but there is notable variation in which standards states adopted and how states define timeliness. To identify violations of these standards, regulators from all 10 states in our review told us that they review consumer complaints and conduct examinations of companies’ claims settlement practices, with regulators from 7 states reporting one or more examinations under way as of March 2008. 
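The dollar effects in the example above are simple percentage arithmetic. The sketch below uses the report’s illustrative $2,000 base premium and the approval percentages cited for the three states:

```python
# Effect of differing state approvals on the same hypothetical $2,000 annual premium.
BASE_PREMIUM = 2000  # illustrative initial annual premium from the report's example

# Approved increase, as a fraction of the current premium, per the report's example.
approved_increases = {
    "Colorado": 0.50,     # full 50 percent increase approved
    "New York": 0.15,     # only a 15 percent increase approved
    "Connecticut": 0.00,  # increase not approved
}

for state, pct in approved_increases.items():
    rise = BASE_PREMIUM * pct
    print(f"{state}: premium rises by ${rise:,.0f} to ${BASE_PREMIUM + rise:,.0f}")
# Colorado: premium rises by $1,000 to $3,000
# New York: premium rises by $300 to $2,300
# Connecticut: premium rises by $0 to $2,000
```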
State regulators in several states told us that they are considering additional protections related to claims settlement, with some states awaiting the outcomes of ongoing examinations to determine what additions may be necessary. For example, regulators from 4 states told us that their state is considering an independent review process for consumer appeals of claims denials. The 10 states in our review have standards established by law and regulations for governing claims settlement practices. The majority of the standards, some of which apply specifically to LTCI and others of which apply more broadly to various insurance products, are designed to ensure that claims settlement practices are conducted in a timely manner. Specifically, the standards are designed to ensure the timely investigation and payment of claims and prompt communication with consumers about claims. In addition to these timeliness standards, states have established other standards, such as requirements for how companies are to make benefit determinations. While the 10 states we reviewed all have standards governing claims settlement practices, the states vary in the specific standards they have adopted as well as in how they define timeliness. For example, 1 state does not have a standard that requires companies to pay claims in a timely manner. For the 9 states that do have such a standard, the definition of “timely” varies notably—from 5 days to 45 days, with 2 states not specifying a time frame. In addition, 2 of the 10 states do not require companies to provide an explanation of delays in resolving claims, and the 8 that do vary in how many days a delay may go unexplained. Federal laws governing tax-qualified and Partnership policies do not address the timely investigation and payment of claims or prompt communication with consumers about claims. 
The absence of certain standards and the variation in states’ definitions of “timely” may leave consumers in some states less protected from, for example, delays in payment than consumers in other states. (See table 2 for key claims settlement standards adopted by the 10 states in our review and examples of the variation in standards.) Given state variation, officials from four companies, which together represented 26 percent of the LTCI market in 2006, told us that they tailor their claims settlement practices nationwide to adhere to the most rigorous state standards. For example, officials from one company noted that they have adopted nationwide the most stringent state standard for timely payment of claims. Several officials added that they monitor changes in state standards in order to adjust their claims settlement practices. By tailoring their practices to adhere to the most rigorous state standards, companies may provide more uniform protection for consumers than would be provided under varying state standards. The states in our review primarily use two ways to monitor companies’ compliance with claims settlement standards: (1) reviewing consumer complaints and (2) conducting market conduct examinations. The first way the states monitor compliance is by reviewing consumer complaints on a case-by-case basis and in the aggregate to identify trends in company practices. Regulators in all 10 of the states we reviewed said that monitoring LTCI complaints is one of the primary methods for overseeing compliance with claims settlement standards. When responding to complaints on a case-by-case basis, regulators in some states told us that they determine whether they can work with the consumer and the company to resolve the complaint or whether there has been a violation of claims settlement standards that requires further action. State regulators frequently resolve individual complaints by assisting consumers in obtaining payment. 
Regulators from 6 states told us that in response to complaints related to LTCI claims, state staff works with the company in question, for example, to determine if the consumer needs to provide additional documentation for a claim to be paid. In reviewing information on complaints related to LTCI from 3 states, we found that in 2006, about 50 percent of the 116 complaints related to either delays or denials eventually resulted in consumers receiving payment, with amounts in 1 state ranging from $954 to $29,910 per complaint. Regulators in some states also resolve consumer complaints by providing an explanation to consumers or their family members for why a claim was denied. Regulators from 6 states told us that consumers sometimes do not understand or are not aware of the terms of their policies. For example, although most policies include an elimination period, state regulators in 1 state noted that consumers often do not understand it and submit claims for services received during this period, which are subsequently denied by the company. Regulators from 4 states also told us that they regularly review complaint data to identify trends in company practices over time or across companies, including practices that may violate claims settlement standards. Three of these states review these data as part of broader analyses of the LTCI market during which they also review, for example, financial data and information on companies’ claims settlement practices. However, regulators in 3 states noted that a challenge in using complaint data to identify trends is the small number of LTCI consumer complaints that their state receives. For example, information on complaints provided by one state shows that the state received only 54 LTCI complaints in 2007, and only 20 were related to claims settlement issues. State regulators told us that they expect the number of complaints to increase in the future as more consumers begin claiming benefits. 
In our review of complaint information from 5 states, we did not find that an upward trend in the number of complaints has begun, though the information indicates that the proportion of complaints related to claims settlement issues has increased over time. Specifically, we found that from 2001 to 2007, the percentage of all complaints about LTCI that were related to claims settlement issues increased from about 25 percent (215 of 846) to 44 percent (318 of 721) (see table 3). In addition to reviewing consumer complaints, the second way that states monitor company compliance with claims settlement standards is through market conduct examinations. These examinations may be regularly scheduled, or regulators may initiate one if they find patterns in consumer complaints about a company; an examination generally includes a review of the company’s files for evidence of violations of claims settlement standards. For example, one state initiated an examination of a company’s consumer complaint files for 2005 through 2007 on the basis of three LTCI complaints made to the state. These complaints indicated a number of potential problems with the company’s claims settlement practices, including delays in payment and improper claims denials. Some states also coordinate market conduct examinations with other states—efforts known as multistate examinations—during which all participating states examine the claims settlement practices of designated companies. If state regulators identify violations of claims settlement standards during market conduct examinations, they may take enforcement actions, such as imposing fines or suspending the company’s license. As of March 2008, 4 of the 10 states in our review reported taking enforcement actions against LTCI companies for violating claims settlement standards. Regulators from one state, for example, told us that they fined one company $100,000 for failure to promptly and properly pay LTCI claims. 
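The 2001-to-2007 shift in complaint composition cited above can be verified from the raw counts the report provides. A brief sketch of that calculation, assuming simple rounding to whole percentages (the helper function is our own, not part of GAO's analysis):

```python
# Recomputing the complaint-trend percentages cited in the text: the share of
# all LTCI complaints in five states that related to claims settlement issues.
# Counts (215 of 846 in 2001; 318 of 721 in 2007) are from the report.

def claims_settlement_share(claims_related, total):
    """Percentage of all LTCI complaints that related to claims settlement."""
    return round(100 * claims_related / total)

print(claims_settlement_share(215, 846))  # 2001: prints 25
print(claims_settlement_share(318, 721))  # 2007: prints 44
```

Note that the share rose even though the total number of complaints fell from 846 to 721, which is why the text reports a growing proportion without an upward trend in volume.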
As of March 2008, regulators from 7 of the 10 states reported having ongoing examinations into companies’ claims settlement practices. Specifically, regulators from 2 states reported having an ongoing examination focused on a company’s practices in their state, regulators from 2 states reported participating in ongoing multistate examinations, and regulators from 3 states reported having both types of examinations under way. In addition to ongoing examinations, regulators in 1 state told us that the state is analyzing trends in claims settlement practices among the 14 companies with the largest LTCI market share in the state. If concerns are identified, regulators told us that this analysis may lead to a market conduct examination. Company officials that we spoke with noted that states have increased their scrutiny of claims settlement practices since mid-2007, after media reports of consumers experiencing problems receiving payments for claims. Officials from four companies we interviewed told us that their companies had received requests for information about claims settlement practices from several states. In addition, officials from three companies noted that states are examining companies’ claims settlement practices in more detail than they had previously. For example, officials from one company said that the rigor of states’ market conduct examinations has increased, both in terms of the number of case files state regulators examine and in terms of the scope of the information that regulators collect. Regulators from 6 of the states in our review reported that their state is considering or may consider adopting additional consumer protections related to claims settlement, such as additional standards. Of these 6 states, 4 have completed or expect to complete in-depth reviews of LTCI in their states, and two of the completed reviews have resulted in recommendations for additional claims settlement standards. 
For example, a report completed by Iowa in 2007 included a recommendation for adopting a standard requiring timely payment of claims by companies selling LTCI policies. As of March 2008, regulators from 2 of the 6 states told us that they were awaiting the results of ongoing NAIC data collection efforts or ongoing market conduct examinations before considering specific protections. The additional protection most frequently considered by the state regulators we interviewed is the inclusion of an independent review process for consumers appealing LTCI claims denials. Regulators from 4 of the states in our review told us that their states were considering establishing a means for consumers to have their claims issues reviewed by a third party independent from their insurance company without having to engage in legal action. Further, a group of representatives from NAIC member states was formed in March 2008 to consider whether to recommend developing provisions to include an independent review process in the NAIC LTCI models. Such an addition may be useful, as regulators from 3 states told us that they lack the authority to resolve complaints involving a question of fact, for example, when the consumer and company disagree on a factual matter regarding a consumer’s eligibility for benefits. Further, there is some evidence to suggest that, due to errors or incomplete information, companies frequently overturn LTCI claims denials. Specifically, data provided by four companies we contacted indicate that denials are frequently overturned by companies during the appeals process, with the percentage of denials overturned averaging 20 percent in 2006 among the four companies and ranging from 7 percent in one company to 34 percent in another. There is precedent for an independent review process for denied claims. For example, one state reported that an independent review process is available under its state law for appeals of denials of health insurance claims. 
Further, officials from one company in our review told us that the company had started implementing an independent review option for its LTCI consumers, though it had not selected the third-party reviewer as of February 2008. Finally, the FLTCIP includes an independent review process. However, the FLTCIP process remains largely untested as, according to OPM officials, only three consumers had made appeals as of April 2008. We received comments on a draft of this report from NAIC. NAIC compiled and summarized comments from its member states, and NAIC officials stated that member states found the report to be an accurate reflection of the current LTCI marketplace. However, NAIC officials also reported that states were concerned that the report seemed to critique certain aspects of state regulation without a balanced discussion and seemed to be making an argument for certain reforms. In particular, NAIC officials noted that states said the draft report highlighted the differences in state regulation of rates and the fact that new regulations are not typically made retroactive. NAIC officials also noted that as in every other area of state regulation, state laws differ based on markets, consumer needs, and political realities. NAIC officials added that state lawmakers and regulators must balance many different factors when developing rules and one size often does not fit all. Our draft reported differences in states’ oversight of rate setting and claims settlement practices without making any conclusions or recommendations. We reported both the extent to which NAIC model standards have been adopted and other standards and practices states have in place. Further, NAIC officials noted that states expend considerable resources to educate consumers so that they make informed decisions. While this may be the case, our review was focused on the oversight of rate setting and claims settlement practices because of recent concerns in these areas. 
We did not review states’ broader consumer education efforts related to long-term care insurance. Finally, certain NAIC member states provided technical comments, which we incorporated into the report as appropriate. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to NAIC and other interested parties. We will also make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or dickenj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. To conduct case studies on oversight of long-term care insurance (LTCI), we selected a judgmental sample of 10 states on the basis of several criteria. First, we selected states that together accounted for at least 40 percent of all policies in force in 2006 and represented variation in terms of the number of policies in force. In addition, we selected states that were both congruent and not congruent with the National Association of Insurance Commissioners (NAIC) LTCI model act and regulation to reflect the variation in state oversight of the product. We also selected states that represented geographic variation. Finally, we considered the number of complaints the state reported receiving related to LTCI in 2006. (See table 4 for the list of selected states.) In addition to the contact named above, Kristi Peterson, Assistant Director; Susan Barnidge; Krister Friday; Julian Klazkin; Rachel Moskowitz; and Sara Pelton made key contributions to this report.
As the baby boom generation ages, the demand for long-term care services, which include nursing home care, is likely to grow and could strain state and federal resources. The increased use of long-term care insurance (LTCI) may be a way of reducing the share of long-term care paid by state and federal governments. Oversight of LTCI is primarily the responsibility of states, but over the past 12 years, there have been federal efforts to increase the use of LTCI while also ensuring that consumers purchasing LTCI are adequately protected. Despite this oversight, concerns have been raised about both premium increases and denials of claims that may leave consumers without LTCI coverage when they begin needing care. GAO was asked to review the consumer protection standards governing LTCI policies and how those standards are being enforced. Specifically, GAO examined oversight of the LTCI industry's (1) rate setting practices and (2) claims settlement practices. GAO reviewed information from the National Association of Insurance Commissioners (NAIC) on all states' rate setting standards. GAO also completed 10 state case studies on oversight of rate setting and claims settlement practices, which included structured reviews of state laws and regulations, interviews with state regulators, and reviews of state complaint information. GAO also reviewed national data on rate increases implemented by companies. Many states have made efforts to improve oversight of rate setting, though some consumers remain more likely to experience rate increases than others. NAIC estimates that since 2000 more than half of states nationwide have adopted new rate setting standards. States that adopted new standards generally moved from a single standard that was intended to prevent premium rates from being set too high to more comprehensive standards designed to enhance rate stability and provide other protections for consumers. 
Although a growing number of consumers will be protected by the more comprehensive standards going forward, as of 2006 many consumers had policies not protected by these standards. Regulators in most of the 10 states GAO reviewed said that they expect these more comprehensive standards will be effective, but also recognized that more time is needed to know how well the standards will work in stabilizing premium rates. State regulators in GAO's review also use other standards or practices to oversee rate setting, several of which are intended to help keep premium rates more stable. Despite state oversight efforts, some consumers remain more likely to experience rate increases than others. Specifically, consumers may face more risk of a rate increase depending on when they purchased their policy or which state is reviewing a proposed rate increase on their policy. The 10 states in GAO's review oversee claims settlement practices by monitoring consumer complaints and completing examinations in an effort to ensure that companies are complying with claims settlement standards. Claims settlement standards in these states largely focus on timely investigation and payment of claims and prompt communication with consumers, but the standards adopted and how states define timeliness vary notably across the states. Regulators told GAO that they use consumer complaints to identify trends in companies' claims settlement practices, including whether they comply with state standards, and to assist consumers in obtaining payment for claims. In addition to monitoring complaints, these regulators also said that they use examinations of company practices to identify any violations in standards that may require further action. Finally, state regulators in 6 of the 10 states in GAO's review are considering additional protections related to claims settlement. 
For example, regulators from 4 states said that their states were considering an independent review process for consumers appealing claims denials. Such an addition may be useful, as some regulators said that they lack authority to resolve complaints where, for example, the company and consumer disagree on a factual matter regarding a consumer's eligibility for benefits. In commenting on a draft of this report, NAIC compiled comments from its member states, which said that the report was accurate but seemed to critique certain aspects of state regulation, including differences among states, and make an argument for certain reforms. The draft reported differences in states' oversight without making any conclusions or recommendations.
Since 2004, Congress has authorized over $8 billion for medical countermeasure procurement. The Project BioShield Act of 2004 authorized the appropriation of $5.6 billion from fiscal year 2004 through fiscal year 2013 for the Project BioShield Special Reserve Fund, and funds totaling this amount were appropriated. The act facilitated the creation of a government countermeasure market by authorizing the government to commit to making the Special Reserve Fund available to purchase certain medical countermeasures, including those countermeasures that may not be FDA-approved, cleared, or licensed. In 2013, PAHPRA authorized an additional $2.8 billion to be available from fiscal year 2014 through fiscal year 2018 for these activities, but funding has not yet been appropriated for these years. In addition to the Special Reserve Fund, Congress has also made funding available through annual and supplemental appropriations to respond to influenza pandemics, including developing vaccines and other drugs. HHS is the primary federal department responsible for public health emergency planning and response, including medical countermeasure development, procurement, and distribution. HHS also coordinates with other federal departments, such as DHS, through PHEMCE. Within HHS, several offices and agencies have specific responsibilities for public health preparedness and response. HHS’s ASPR leads PHEMCE and the federal medical and public health response to public health emergencies, including strategic planning, medical countermeasure prioritization, and support for developing, procuring, and planning for the effective use of medical countermeasures. Within ASPR, BARDA—established by the Pandemic and All-Hazards Preparedness Act of 2006—oversees and supports advanced development and procurement of some medical countermeasures into the SNS. 
NIH conducts and funds basic and applied research and early development needed to develop new or enhanced medical countermeasures and related medical tools for CBRN and infectious disease threats. CDC maintains the SNS, including purchasing commercially available products as necessary, and supports state and local public health departments’ efforts to detect and respond to public health emergencies, including providing guidance and recommendations for the mass dispensing and use of medical countermeasures from the SNS. FDA assesses the safety and effectiveness of medical countermeasures; regulates their development; approves, clears, or licenses them; and conducts postmarket surveillance as part of its overall role to assess the safety and effectiveness of medical products. FDA also provides technical assistance to help ensure that product development meets FDA’s regulatory requirements and provides technical support for the development of regulatory science tools. FDA may authorize the emergency use of medical products that have not yet been approved, cleared, or licensed or were approved, cleared, or licensed only for other uses. DHS develops material threat assessments (MTA), in coordination with HHS, to assess the threat posed by given CBRN agents or classes of agents and the potential number of human exposures in plausible, high-consequence scenarios. DHS uses the MTAs to determine which CBRN agents pose a material threat sufficient to affect national security and to provide HHS with a basis for determining needed countermeasures for those agents. DHS also develops terrorism risk assessments (TRA) to assess the relative risks posed by CBRN agents based on variable threats, vulnerabilities, and consequences. 
HHS’s PHEMCE is responsible for establishing civilian medical countermeasure priorities for CBRN and emerging infectious disease threats, including influenza; coordinating federal efforts to research, develop, and procure medical countermeasures to enhance preparedness and response for public health threats; and developing policies, plans, and guidance for the use of countermeasure products in a public health emergency. PHEMCE is composed of officials from ASPR, including BARDA; CDC; FDA; NIH; and other federal departments, including the Departments of Agriculture, Defense, Homeland Security, and Veterans Affairs. HHS and PHEMCE establish federal medical countermeasure development and procurement priorities through a multistep process. This process includes assessing the threat posed by CBRN agents and the potential consequences they pose to public health, determining medical countermeasure requirements—the type of countermeasure (vaccines, drugs, or medical devices such as diagnostics), the amount needed, and characteristics of the countermeasures (such as formulations, dosing, and packaging)—for these agents, evaluating public health response capability, and developing and procuring countermeasures against these CBRN agents. (See fig. 1.) The 2012 PHEMCE Strategy lays out the four PHEMCE strategic goals and their underlying objectives for building HHS’s countermeasure capabilities to respond to a public health emergency. The 2012 PHEMCE Implementation Plan updates the 2007 implementation plan and describes the activities that HHS and its interagency partners plan to conduct to achieve the four strategic goals and their associated objectives, the medical countermeasures HHS wants to develop and procure, and the capabilities HHS wants to build to support countermeasure development and procurement. 
The plan also includes 72 items that HHS selected as key priorities for fulfilling PHEMCE’s strategic goals within the next 5 years, which the agency placed into three categories. For the purposes of this report we refer to the items in these categories as “priority activities,” “priority threat-based approaches,” and “priority capabilities.” The 33 priority activities reflect activities that support PHEMCE’s overall mission and include pursuits such as developing systems to track countermeasure activities across all PHEMCE partners, enhancing national laboratory capabilities, and developing guidance documents and information for the public on using medical countermeasures in an emergency. (See table 1 for examples of PHEMCE priority activities by strategic goal.) In addition to the 33 priority activities, the 25 items identified as priorities for threat-based approaches are intended to directly address threats such as anthrax or smallpox. These priorities include pursuits such as publishing updated clinical guidance for anthrax countermeasures; developing and qualifying with FDA animal models to test the safety and efficacy of medical countermeasures for certain biological, radiological, and nuclear threats; and developing new plans for the distribution and dispensing of pandemic influenza antivirals. The remaining 14 items identified as priority capabilities reflect what HHS refers to as crosscutting capabilities. The priority capabilities are a mix of programs or technological applications that may, for example, support the development of countermeasures for a range of existing CBRN threats or for any new threats that may emerge in the future, or build infrastructure to provide countermeasure developers assistance with advanced development and manufacturing services. 
The priority capabilities include such pursuits as initiating a research program to fill gaps in knowledge in the area of patient decontamination in a chemical incident and establishing a network of facilities to support the filling and finishing of vaccines and other countermeasures. In addition to the 72 items HHS selected as key priorities for fulfilling PHEMCE’s strategic goals, the implementation plan also identifies the medical countermeasures that constitute HHS’s priorities for development and procurement to fulfill strategic goal 1, which we refer to as “priority countermeasures” for the purposes of this report. (See table 2.) Many of the threat-specific countermeasures for which PHEMCE set procurement priorities in 2007 continue to be priorities for development and procurement in the 2012 plan, such as anthrax vaccine, smallpox antivirals, chemical agent antidotes, and diagnostic devices for radiological and nuclear agents. The 2012 plan also includes pandemic influenza countermeasures and nonpharmaceutical countermeasures, such as ventilators, as priorities, whereas the 2007 plan focused on CBRN medical countermeasures only. HHS has established timelines and milestones for the 72 priority activities, threat-based approaches, and capabilities identified in the 2012 PHEMCE Implementation Plan as key to fulfilling PHEMCE’s strategic goals. However, while HHS has developed spending estimates for its priority medical countermeasures for internal planning purposes, it has not made these estimates publicly available, as we previously recommended in 2011. HHS has established timelines and milestones for the 72 items it selected as key priorities for fulfilling PHEMCE’s strategic goals. 
Leading practices for program management call for establishing time frames and milestones as part of a plan to ensure that organizations achieve intended results. In the implementation plan, HHS has assigned each of the 33 priority activities, the 25 priority threat-based approaches, and the 14 priority capabilities to one of three time frames for completion—near-term (fiscal years 2012 through 2014), midterm (fiscal years 2015 through 2017), and long-term (fiscal year 2018 and beyond). In addition, HHS has placed PHEMCE’s priority countermeasures into these time frames. All but 2 of the 33 priority activities, and all of the priority threat-based approaches and capabilities, are slated for completion in either the near term or the midterm. HHS has also identified deliverables and milestones for some of the priority activities, threat-based approaches, and capabilities, and assigned them more specific timelines. For 21 of the 33 priority activities, 10 of the 25 priority threat-based approaches, and 8 of the 14 priority capabilities, HHS and the PHEMCE agency or office responsible for carrying out the activity have identified specific deliverables intended to complete them. PHEMCE partners have tied each deliverable to a specific milestone or set of milestones, which delineate the steps necessary to complete the deliverable. In addition, the deliverables and milestones may have more specific timelines, such as an actual month or year of expected completion within the broader multiyear near- or midterm time frame. 
Examples of deliverables, milestones, and more specific timelines for PHEMCE priorities include the following: For the priority activity that states that ASPR is to lead PHEMCE in developing or updating medical countermeasure requirements for certain CBRN threats by the end of fiscal year 2014, ASPR has identified the requirements for each specific threat—such as requirements for countermeasures for mustard gas and other blister agents—as the individual deliverables for this activity. The blister agents requirement deliverable has four associated milestones that reflect the various activities of a PHEMCE working group to develop the requirements and the levels of PHEMCE and HHS approval needed, culminating in the approval by the ASPR Assistant Secretary by September 2013. For the priority threat-based approach of qualifying animal models for biological threats, the deliverable is FDA qualification of the animal model, and the three milestones are the development of animal models for anthrax, plague, and tularemia in fiscal year 2015. For the priority capability of initiating funding for the development of diagnostic systems for biological and chemical threat agents, and systems to identify and characterize unknown threats, the deliverable is NIH’s awarding of funds to eligible applicants; the set of milestones for this deliverable are obtaining NIH approval to publish a solicitation for proposals for development of the diagnostics, publishing the solicitation in July 2013, and making awards in fiscal year 2014. NIH also plans to award additional funds in fiscal year 2015 for the development of multiplex diagnostic platforms for multiple threats. 
For the priority countermeasures, HHS officials told us that the department includes specific milestones in the contracts it awards to developers; these milestones reflect the expected course for research and development, such as holding and completing clinical trials to test the efficacy of a countermeasure or submitting inventory and storage plans, and have associated completion dates. For the remaining 12 priority activities, 15 priority threat-based approaches, and 6 priority capabilities, HHS has not established specific deliverables with milestones and timelines other than the overall completion of the priority within the specified near- or midterm time frame. HHS officials told us that some activities do not have specific timelines because HHS considers them to be ongoing activities that PHEMCE conducts regularly. For example, at least every 18 months, ASPR conducts formal reviews across participating PHEMCE agencies of medical countermeasure portfolios for specific threats in order to monitor progress in developing and procuring medical countermeasures for those threats, identify remaining gaps and challenges to developing and procuring countermeasures, and develop potential solutions. For activities in the implementation plan that are slated for completion in the long term, HHS officials said that they intend to develop more specific timelines as the near- and midterm activities are completed. ASPR tracks the progress of participating PHEMCE partners in implementing the priority activities, threat-based approaches, and capabilities by holding monthly meetings to collect information on progress. According to HHS officials, during these monthly meetings, PHEMCE participants discuss their progress in completing deliverables, potential barriers to completion, and any options to help mitigate these barriers. 
ASPR officials told us they rely on the PHEMCE partner responsible for the activity to have adequate project management controls in place to determine the amount of progress that the partner agency has made. If an agency anticipates delays in or barriers to completing and meeting certain milestones, ASPR officials may assist in identifying additional support within PHEMCE partner agencies or within other federal agencies. For example, HHS officials told us that for one priority activity’s deliverable—developing requirements for anthrax antitoxins—CDC and FDA officials differed in their professional opinions on guidance for clinicians to administer the drug. PHEMCE senior management worked with the agencies to develop consensus wording for the guidance document to complete that deliverable. ASPR officials told us that they enter information collected in the meetings into a spreadsheet that contains descriptions of the PHEMCE priority activities, threat-based approaches, and capabilities; their associated deliverables, milestones, and timelines; and information on current progress, barriers to completion, and mitigation options. ASPR follows up with PHEMCE partners after the meetings to obtain any additional information, if necessary. ASPR distributes the finalized spreadsheet to PHEMCE partners about 1 week in advance of the next monthly meeting for them to use as reference for that meeting. ASPR officials told us they developed the tracking spreadsheet in response to the recommendation in our 2011 report that HHS develop a written strategy to monitor the implementation of recommendations from HHS’s 2010 PHEMCE review and incorporated the PHEMCE priorities into the spreadsheet when HHS updated the implementation plan. At the completion of our review, PHEMCE was halfway through its near-term period of fiscal year 2012 through fiscal year 2014. 
As of September 2013 (the most recent information available):
PHEMCE partners reported completing five deliverables for the 21 priority activities. For example, for the priority activity that specifies that HHS, DHS, and other federal partners are to formalize roles, responsibilities, policies, and procedures for conducting the next generation of MTAs and TRAs, HHS and DHS completed one of two deliverables by developing and cosigning a strategic implementation plan to conduct MTAs.
PHEMCE partners reported completing three deliverables for the 10 priority threat-based approaches. For example, for one of the threat-based approaches, PHEMCE partners report completing the sole deliverable of developing guidance that establishes the order in which different groups of affected individuals would receive anthrax vaccination in a public health emergency. The completion of the three deliverables resulted in the completion of three priority threat-based approaches.
PHEMCE partners reported completing two deliverables for the eight priority capabilities. For example, for one of the priority capabilities, PHEMCE partners have reported completing the sole deliverable that specifies that BARDA will initiate a research program to address knowledge gaps in chemical decontamination of exposed individuals by awarding a contract to a university to gather data and develop decontamination procedures. The completion of the two deliverables resulted in the completion of two priority capabilities.
HHS has not provided publicly available spending estimates for research, development, or procurement for the countermeasures it identified as priorities in the 2012 implementation plan. We previously recommended that HHS provide more specific information on anticipated countermeasure spending when it updated its 2007 plan. Additionally, PAHPRA directs HHS to include anticipated funding allocations for each countermeasure priority in the PHEMCE strategy and implementation plan. 
The implementation plan contains information on the source of the funds for research, development, and procurement, such as the Special Reserve Fund. However, the plan does not include any estimates of how much of these funds HHS may spend to develop or procure specific priority countermeasures. HHS officials told us that while PHEMCE has developed spending estimates for internal planning, they are hesitant to provide these estimates to manufacturers because they do not want to create the expectation that the estimates would reflect any final contract amounts. In addition, anticipated spending estimates for future years may be unreliable because, according to HHS officials, the Special Reserve Fund will be appropriated annually after fiscal year 2014, unlike the fiscal year 2004 appropriation, which provided funds for a 10-year period. Additionally, officials stated that HHS published the PHEMCE Implementation Plan before PAHPRA was enacted and therefore did not include any spending estimates in the plan, as the department was unaware that PAHPRA would include that requirement. HHS officials said that they plan to include estimates in the next iteration of the plan, which they anticipate publishing in September 2014, based on the time frames laid out in PAHPRA. However, the nature and format of the spending estimates that would be included in the plan had not been determined. As we stated in our previous recommendation, information on anticipated spending would allow HHS’s industry partners to suitably target research and development to fulfill PHEMCE’s countermeasure priorities, especially in tighter budget climates. 
While HHS officials expressed concerns regarding sharing internal spending estimates and the short-term nature of annual appropriations, these concerns could be addressed by agency communications with manufacturers when providing the spending estimates to make clear that spending estimates may not reflect final contract amounts, which depend on enacted appropriations levels, among other factors. Developing and procuring medical countermeasures is a complex process that requires engagement across the federal government and with countermeasure developers in private industry. HHS has strengthened PHEMCE planning and oversight and has made progress in developing and procuring some medical countermeasures. However, given its almost 10-year efforts and the continuing lack of available countermeasures to fulfill PHEMCE’s many priorities, HHS would benefit from sharing information on its anticipated spending estimates with industry, to assist countermeasure developers with long-term business planning. PAHPRA’s requirement for HHS to include spending estimates for each medical countermeasure priority in future PHEMCE implementation plans is consistent with our 2011 recommendation. HHS’s plans to include more specific spending estimates in future plan updates could help implement both this requirement and our 2011 recommendation, provided the department makes meaningful estimates of spending for countermeasure research, development, and procurement available to industry. These estimates—or ranges of estimates—will provide HHS’s industry partners with more transparency on anticipated returns on investment in the face of competing priorities for developing other drugs with a commercial market. 
We believe the value of making this information available outweighs HHS’s concerns, especially those related to uncertainty over future appropriations; anticipated countermeasure spending would provide industry with the information it needs to determine whether and how to suitably target its research and development programs in tight budget climates. We provided a draft of this report to HHS, and its comments are reprinted in appendix II. In its comments, HHS acknowledged the effort we have taken to document HHS’s tracking processes for the activities in the 2012 PHEMCE Implementation Plan. HHS commented that the 72 activities we focused on in this review—which were described in the implementation plan as key to HHS’s efforts in the near and midterm—were a subset of 255 near- and midterm activities delineated in the implementation plan and that these 72 items were meant to be an illustrative but not comprehensive list of priorities. Further, HHS stated that it considered all 255 near- and midterm activities as priorities. HHS provided information on its efforts to track its progress on the remainder of these items that we did not discuss in the report and to establish deliverables and interim milestones for the activities slated for the midterm (fiscal years 2015 through 2017) as that period approaches. Finally, HHS provided information on its efforts to quantify its resource needs and provide more transparent anticipated spending information for its medical countermeasure development efforts while maintaining the integrity of the federal contracting process. HHS stated that it is working to find a compromise solution that will provide this transparency in light of statutory requirements and GAO’s 2011 recommendation. HHS also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Health and Human Services. 
In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or dsouzav@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. The Department of Health and Human Services (HHS) spent approximately $3.6 billion in advanced research, development, and procurement of chemical, biological, radiological, and nuclear (CBRN) and pandemic influenza medical countermeasures from fiscal year 2010 through fiscal year 2013. Of this amount, HHS spent 30 percent for countermeasures against influenza, 20 percent for smallpox countermeasures, and 19 percent for anthrax countermeasures. (See fig. 2.) The spending on influenza countermeasures reflects, in part, HHS’s response to the 2009 H1N1 influenza pandemic using annual and supplemental funds appropriated for that response. Of HHS’s total medical countermeasure spending of $3.6 billion, from fiscal year 2010 through fiscal year 2013, HHS spent almost $2.1 billion on contracts dedicated to advanced research and development, of which HHS’s Biomedical Advanced Research and Development Authority (BARDA) spent nearly $700 million (almost 34 percent) for influenza antivirals, diagnostics, and vaccines. (See table 3.) Of the remaining $1.5 billion, HHS spent nearly $403 million on contracts dedicated to the procurement of pandemic influenza antivirals and vaccines. (See table 4.) BARDA also spent almost $1.2 billion on contracts dedicated to both advanced research and development and procurement of CBRN medical countermeasures. (See table 5.) 
In addition to the contracts that have already been awarded, HHS issues annual announcements for additional funding opportunities in the areas of advanced research and development of CBRN medical countermeasures; advanced development of medical countermeasures for pandemic influenza; and innovative science and technology platforms for medical countermeasure development. The announcements state anticipated funding for the overall program. For example, the announcement for CBRN countermeasure advanced research and development states that anticipated funding for the overall effort—not per award—ranges from an estimated $2 million to an estimated $415 million, subject to congressional appropriations, and does not reflect a contractual obligation for funding. In addition to the contact named above, Karen Doran, Assistant Director; Shana R. Deitch; Carolyn Feis Korman; Tracey King; and Roseanne Price made significant contributions to this report.
National Preparedness: Efforts to Address the Medical Needs of Children in a Chemical, Biological, Radiological, or Nuclear Incident. GAO-13-438. Washington, D.C.: April 30, 2013.
National Preparedness: Improvements Needed for Measuring Awardee Performance in Meeting Medical and Public Health Preparedness Goals. GAO-13-278. Washington, D.C.: March 22, 2013.
High-Containment Laboratories: Assessment of the Nation’s Need Is Missing. GAO-13-466R. Washington, D.C.: February 25, 2013.
National Preparedness: Countermeasures for Thermal Burns. GAO-12-304R. Washington, D.C.: February 22, 2012.
Chemical, Biological, Radiological, and Nuclear Risk Assessments: DHS Should Establish More Specific Guidance for Their Use. GAO-12-272. Washington, D.C.: January 25, 2012.
National Preparedness: Improvements Needed for Acquiring Medical Countermeasures to Threats from Terrorism and Other Sources. GAO-12-121. Washington, D.C.: October 26, 2011.
Influenza Pandemic: Lessons from the H1N1 Pandemic Should Be Incorporated into Future Planning. GAO-11-632. Washington, D.C.: June 27, 2011.
Influenza Vaccine: Federal Investments in Alternative Technologies and Challenges to Development and Licensure. GAO-11-435. Washington, D.C.: June 27, 2011.
National Preparedness: DHS and HHS Can Further Strengthen Coordination for Chemical, Biological, Radiological, and Nuclear Risk Assessments. GAO-11-606. Washington, D.C.: June 21, 2011.
Public Health Preparedness: Developing and Acquiring Medical Countermeasures Against Chemical, Biological, Radiological, and Nuclear Agents. GAO-11-567T. Washington, D.C.: April 13, 2011.
Combating Nuclear Terrorism: Actions Needed to Better Prepare to Recover from Possible Attacks Using Radiological or Nuclear Materials. GAO-10-204. Washington, D.C.: January 29, 2010.
Public health emergencies--the 2001 anthrax attacks, the 2009 H1N1 influenza pandemic, and others--have raised concerns about national vulnerability to threats from chemical, biological, radiological, and nuclear agents and new infectious diseases. There are some medical countermeasures--drugs, vaccines, and medical devices such as diagnostics--available to prevent, diagnose, or mitigate the public health impact of these agents and diseases, and development continues. HHS leads federal efforts to develop and procure countermeasures through the interagency PHEMCE. The Pandemic and All-Hazards Preparedness Reauthorization Act of 2013 mandated GAO to examine HHS's and PHEMCE's planning documents for medical countermeasure development and procurement needs and priorities. This report examines the extent to which HHS developed timelines, milestones, and spending estimates for PHEMCE priorities. GAO reviewed relevant laws; analyzed HHS's 2012 PHEMCE Strategy and Implementation Plan, HHS's tools for tracking the implementation of PHEMCE activities, and data on countermeasure spending from fiscal years 2010 through 2013; and interviewed HHS officials. The Department of Health and Human Services (HHS) has established timelines and milestones for the 72 Public Health Emergency Medical Countermeasures Enterprise (PHEMCE) priorities--33 activities, 25 threat-based approaches, and 14 capabilities--that HHS selected as key to fulfilling PHEMCE strategic goals. However, HHS has not made spending estimates for its medical countermeasure development or procurement priorities (priority countermeasures) publicly available. In the PHEMCE implementation plan, HHS has grouped the 72 PHEMCE priorities into three time frames for completion--near-term (fiscal years 2012 through 2014), midterm (fiscal years 2015 through 2017), and long-term (fiscal year 2018 and beyond). 
For 21 priority activities, 10 priority threat-based approaches, and 8 priority capabilities, HHS and PHEMCE have identified specific deliverables, each tied to a milestone or set of milestones that delineate the steps necessary to complete deliverables, and established more specific timelines for completion of deliverables and milestones. For example, HHS's Office of the Assistant Secretary for Preparedness and Response (ASPR) is to lead the development of medical countermeasure requirements, which outline countermeasure quantity, type, and desired characteristics. Deliverables are the threat-specific requirements, such as for antidotes for mustard gas and other blister agents. Milestones for mustard gas antidote requirements reflect the PHEMCE activities to develop the requirements and the necessary approvals; the milestones are tied to interim timelines and culminate in approval by the ASPR Assistant Secretary by September 2013. HHS has not established specific deliverables, milestones, or timelines for the remaining 12 priority activities, 15 priority threat-based approaches, and 6 priority capabilities other than their overall completion within the specified near- or midterm time frame. HHS monitors progress in completing deliverables and milestones for the priorities monthly, with PHEMCE partners meeting to discuss potential barriers to completing deliverables or meeting milestones and possible options to mitigate these barriers. As of September 2013 (the most recent information available), HHS reported that PHEMCE partners have completed 10 deliverables for the 72 priorities, resulting in completion of 5 priorities. GAO did not examine the status of the priorities that did not have specific deliverables, timelines, and milestones. HHS has developed spending estimates for priority countermeasures for internal planning purposes but has not made them publicly available. 
In 2011, GAO recommended that HHS provide more specific anticipated spending information in an updated plan to assist with long-term planning. HHS's 2012 plan contains information on how countermeasures may be funded, such as through advanced development funds, but does not include estimates of how much PHEMCE may spend to develop specific countermeasures. HHS officials said they are hesitant to provide estimates because they do not want to create the expectation that estimates would reflect final contract amounts. However, consistent with GAO's prior recommendation and Pandemic and All-Hazards Preparedness Reauthorization Act requirements, HHS plans to include spending estimates in the next iteration of the plan, anticipated in September 2014, but has not determined the nature and format of the estimates that would be included. Providing estimates would allow HHS's industry partners to suitably target research and development to fulfill countermeasure priorities, especially in tighter budget climates. Although GAO is not making any new recommendations, based on prior work GAO is continuing to emphasize its 2011 recommendation that HHS make more specific anticipated spending information available to countermeasure developers. In its comments, HHS discussed its efforts to develop spending estimates.
This section discusses the organization of FWS, the provisions of the ESA, and the life cycle of the ABB. Among its duties, FWS is responsible for administering the ESA for certain species, including terrestrial species, such as the ABB. FWS headquarters, regions, and field offices are responsible for implementing the ESA within their area of responsibility. Since 2008, FWS’s Oklahoma Ecological Services Field Office within FWS’s Southwest Region has served as the lead field office for the ABB. Figure 1 shows a map of FWS regions and the Ecological Services field offices in areas where ABBs are known or believed to be present. The purposes of the ESA include providing a means whereby the ecosystems upon which endangered species and threatened species depend may be conserved and providing a program for the conservation of such endangered species and threatened species. Section 4 of the ESA contains the requirements and processes for listing or delisting a species as endangered or threatened, designating critical habitat, and developing a recovery plan for a listed species. Sections 9 and 10 of the ESA generally prohibit the “take” of endangered species unless the take is incidental to, and not the purpose of, carrying out an otherwise lawful activity. Section 7 of the ESA and its implementing regulations direct a federal agency to consult with FWS when the agency determines that an action it authorizes, funds, or carries out may affect a listed species or critical habitat. Federal actions requiring consultation under section 7 include issuing nonfederal entities a permit or license for their activities. For example, oil and gas companies are required to get a permit from the Bureau of Land Management before drilling into a federally owned mineral estate. 
If the agency, with FWS’s concurrence through informal consultation, determines that the proposed action is not likely to adversely affect the listed species or its critical habitat, then formal consultation is not required. Formal consultation usually ends with FWS issuing a biological opinion for the proposed action, which may include an incidental take statement containing provisions that the project proponent must comply with to minimize the project’s impact on the species. Under section 10 of the ESA, for actions by project proponents that might take a listed species and that do not have a federal nexus—such as federal funding, approval, or permit—the Secretary of the Interior may issue permits to allow “incidental take” of listed species. Table 1 summarizes key provisions of the ESA. At about 1-1/2 inches long, the ABB is the largest of the North American carrion beetles, known for its orange-red markings and named for its unique behavior of burying animal carcasses—such as birds and small mammals—to provide a source of nourishment for its developing young. ABBs depend on dead animals for food and reproduction. The ABB is an annual species that lives underground and emerges nocturnally when surface temperatures consistently exceed 60 degrees Fahrenheit. Once emerged, the ABB seeks a suitable carcass and competes for a mate. The mated pair then buries the carcass, which the ABB uses to sustain its young. The ABB is a winged insect, and, according to FWS’s evaluation of available research, it can travel up to 18 miles in one night. The ABB pair raises its young underground using chemical secretions to preserve the carcass for its offspring. Figure 2 shows two ABB specimens. Through its actions to find and bury carcasses in the soil, ABBs are beneficial in controlling pests, converting carcasses into soil nutrients, and aerating the soil. 
To preserve the carcasses of their prey, ABBs secrete chemicals that researchers are studying for applicability in treating bacterial infections, preventing fungal growth, and preserving meat at room temperature for human consumption. According to scientists, ABBs also benefit human health and agriculture by reducing disease vectors. Specifically, ABBs limit outbreaks of flies and other animals that could affect livestock production. ABBs are one of the few insects that provide parental care for their offspring. In addition, ABBs are considered an indicator species that is useful in evaluating the overall health of the environment. Figure 3 shows the life cycle of the typical ABB. According to FWS documentation, the ABB, which was once found in more than 30 states, had disappeared from over 90 percent of its historical range by March 2008. The exact reasons for the decline of the ABB are unknown. However, according to FWS documentation, biologists have identified some potential reasons for the decline, such as the elimination or decline of appropriate-sized carcasses, habitat loss and fragmentation due to widespread agriculture and development, and increased competition from other animals and invasive species. According to officials at FWS, the agency improves its knowledge about the ABB through research and scientific surveys to detect and record the presence of the ABB in specific locations, which must be conducted during the ABB’s limited active season. Appendix II provides more detail about the species’ current and historical range. FWS has sought to avoid and minimize potential adverse impacts on the ABB from construction and other projects by discussing mitigation options with project proponents and by requiring mitigation activities. These discussions can result in project proponents choosing to incorporate mitigation options into project proposals. 
FWS may also require project proponents to take mitigation actions specified in incidental take statements included in biological opinions it issues or habitat conservation plans it approves. To help monitor project proponents’ mitigation actions, FWS records information about certain types of these discussions in its TAILS database. In their discussions with project proponents about protecting endangered and threatened species, FWS officials said they use the principles contained in the agency’s 1981 mitigation policy, which outlines a hierarchy of actions to address potential harm to fish and wildlife resources that can occur as a result of construction and other projects. The first step in this hierarchy of actions is to avoid any impact on the listed species, such as by relocating the construction site outside the species’ habitat. The second step is to minimize the impact on the species, such as by placing restrictions on when construction or other activities can occur. According to FWS documentation, FWS can recommend a combination of avoidance and minimization measures to protect listed species. FWS has worked with project proponents to develop avoidance measures for the ABB in an effort to help ensure that projects do not have a direct or indirect adverse impact on the ABB. For example, some Ecological Services field offices recommend that project proponents conduct presence surveys for the ABB if the project is located where ABBs may be present, and, according to FWS officials, project proponents often choose to conduct such surveys. According to FWS documentation, if surveys indicate that no ABB are present within the project area, the project proponent may conduct its project at the proposed location with concurrence from FWS that the project is not likely to adversely affect the ABB. 
Other examples of avoidance efforts for the ABB include the following:
Officials in the South Dakota Ecological Services Field Office said that sometimes project proponents have relocated their projects to avoid potential harm to the ABB. For example, in 2009 a project proponent selected a site for a wind development project outside the ABB’s range in South Dakota, and officials said that ABB presence was likely one factor that influenced the selection of that site. According to these officials, project proponents in South Dakota are able to select alternative project sites to avoid potential impacts on the ABB, in general, because ABB are known to be present in only a few counties in the state, where little development occurs.
In 2011, the New England Ecological Services Field Office and a project proponent agreed that the project proponent could avoid harming the ABB for an airport lighting project on Block Island in Rhode Island by eliminating certain activities that could cause ground disturbance. For example, the project proponent agreed, among other things, to leave buried cable in place and decided not to excavate existing lighting poles in areas where ABB could be living.
Officials in the Oklahoma Ecological Services Field Office stated that they provide project proponents in Oklahoma with the option to avoid take of the ABB by locating projects in habitat unfavorable to the ABB or where surveys indicate no ABB presence in the area. For example, FWS considers land to be unfavorable to the ABB if it is tilled on a regular basis or located in urban areas with paved surfaces or roadways.
FWS has also worked with project proponents in other cases to minimize potential impacts on the species when avoidance was not feasible. 
For example, some incidental take statements included in FWS biological opinions discuss reducing disturbances to soil in areas considered suitable habitat for the ABB and require restoration of any soil that is disturbed in these areas to its natural state after construction as ways for project proponents to minimize their projects’ potential impacts on the ABB. Other examples of minimization efforts for the ABB include the following:
In 2009, the Kansas Ecological Services Field Office required a project proponent to mow vegetation in areas that would be directly disturbed during the installation of a water pipeline. According to FWS’s biological opinion for this project, mowing vegetation on at least a monthly basis during the ABB’s active period would make the area less attractive to the ABB and therefore help minimize potential adverse impacts on the species by reducing the likelihood that ABB would be present in the areas where the project proponent would be disturbing the ground.
In 2010, the South Dakota Ecological Services Field Office issued a biological opinion to the Federal Highway Administration for stream crossing projects. The incidental take statement included in the biological opinion required that the agency use construction practices that would minimally impact suitable habitat for the ABB adjacent to the project area.
In 2014, the Oklahoma Ecological Services Field Office developed the Oil and Gas Industry Conservation Plan. According to that plan, its purpose is to provide a voluntary process for oil and gas project proponents to obtain permits for incidental take of the ABB from their projects that are not funded, authorized, or carried out by federal agencies. In order to be eligible for a permit under the plan, project proponents must agree to implement certain measures to minimize potential impacts on the ABB from their projects. 
For example, project proponents must agree to reduce the use of motor vehicles, machinery, or heavy equipment, which can result in take of the ABB.
In 2015, the Nebraska Ecological Services Field Office conducted a section 7 consultation with the Western Area Power Administration for a wind energy development project in Nebraska. In the course of this consultation, the Nebraska Ecological Services Field Office and the project proponent agreed to several measures, including minimizing, to the extent possible, the use of artificial lighting that can attract insects like the ABB and result in take of the species. The project proponent also agreed to minimize the use of pesticides and avoid using them during the ABB’s active season, and incorporated both of these measures into its project proposal.
If a federal agency is involved in authorizing, funding, or carrying out a proposed project, FWS’s discussions with project proponents about options to avoid and minimize potential adverse impacts on the ABB can occur within the context of consultations under section 7 of the ESA. To help monitor actions, FWS tracks information on its formal section 7 consultation activities using its TAILS database. Table 2 shows the number of ABB-related formal section 7 consultation activities by FWS regions and field offices from fiscal years 2008 through 2015. The Oklahoma Ecological Services Field Office conducted 46 of the 118 formal consultations on the ABB during this period—more than one-third of those conducted nationwide. FWS also uses TAILS to track informal activities conducted under section 7 of the ESA, such as informal consultations and technical assistance, but FWS officials said that Ecological Services field offices differ in how they interpret and record these informal activities. 
According to FWS officials, FWS field offices also work with project proponents to avoid and minimize potential adverse impacts on the ABB and other listed species for projects that are not funded, authorized, or carried out by a federal agency, but the details of the technical assistance through such discussions are not always included in TAILS. According to FWS officials, FWS is planning to develop standard operating procedures for using TAILS to improve the reliability of the data. According to these officials, FWS anticipates completing these standard operating procedures for TAILS in 3 years. While avoiding and minimizing are FWS’s preferred alternatives, they may not always be practical for project proponents. For example, it may not be practical to relocate a road project or an oil and gas well to avoid ABB habitat or to wait for the ABB’s active period to conduct a presence survey. In these cases, FWS has other options—compensatory mitigation strategies—that project proponents may choose to use to compensate for the impact of their projects. We discuss compensatory mitigation in detail in the following section. FWS uses several compensatory mitigation strategies, such as in-lieu fee programs and conservation banks operated by third parties, to provide project proponents the option to compensate for remaining unavoidable impacts to endangered or threatened species after project proponents have implemented all appropriate and practicable avoidance and minimization measures, and FWS has used these strategies to conserve the ABB. In September 2016, FWS issued a draft policy that would set standards for all of its ESA compensatory mitigation strategies to achieve greater consistency, predictability, and transparency in the implementation of the law. FWS tracks information about its conservation banks but has not fully implemented its plan to track its use of in-lieu fee programs across regions and field offices. 
Since listing the ABB as endangered in 1989, FWS has used three in-lieu fee programs in several states and two conservation banks in Oklahoma to conserve the ABB. In addition to discussing and, in some cases, requiring measures to avoid and minimize potential adverse impacts to listed species, FWS may also discuss compensatory mitigation strategies with the project proponents so that they can compensate for any remaining unavoidable impacts on listed species from their projects after implementing all appropriate and practicable avoidance and minimization measures. As a result of these discussions, a project proponent may incorporate compensatory mitigation strategies into its project, and FWS may also require them in the nondiscretionary terms and conditions of incidental take statements included in biological opinions or in habitat conservation plans and incidental take permits, according to FWS officials. FWS uses several compensatory mitigation strategies, including conservation banks and in-lieu fee programs. These strategies may involve project proponents providing financial support for mitigation in other locations (i.e., outside the boundaries of the proposed project) to offset some or all of the project's impacts. For conservation banks and in-lieu fee programs, project proponents provide money to a third party to conduct the conservation activities. Responsibility for conducting the conservation activities is transferred to the third party. In addition, FWS uses other strategies, including habitat credit exchanges, permittee-responsible mitigation, and other third-party mitigation (see table 3). FWS gives project proponents options to conduct mitigation on their own or through other arrangements, such as purchasing conservation bank credits or contributing to an in-lieu fee program, if those options exist and the project proponent is eligible under the terms and conditions of those arrangements. 
FWS officials said that when conservation banks or in-lieu fee programs have been available as mitigation options, nearly all project proponents choose to use these options over other compensatory mitigation strategies, such as permittee-responsible mitigation, because of concerns about mitigation costs and liability. FWS has issued guidance to its regional offices for the establishment and use of conservation banks but has not finalized guidance that addresses operational considerations for in-lieu fee programs and other types of compensatory mitigation strategies. According to the guidance FWS issued in 2003 for establishing and using conservation banks as a compensatory mitigation strategy, conservation banks must conduct conservation for species in advance of any project development. Conservation banks should also obtain a permanent conservation easement on the mitigation lands, establish a management endowment that will support the perpetual management of the mitigation land, and establish a time limit for fully funding the endowment. Furthermore, according to this guidance, conservation bank proposals that are submitted for FWS approval must contain a conservation bank agreement that establishes a monitoring program, such as an annual reporting requirement; a long-term management plan; and a dispute resolution process to be used if the banks' owners fail to meet their obligations. According to an FWS official, the lack of guidance addressing the establishment or use of in-lieu fee programs and other types of compensatory mitigation strategies has led to differences in the structure, monitoring, and oversight of these strategies across FWS. In September 2016, however, FWS issued a draft policy for public comment in the Federal Register on ESA compensatory mitigation that covers all of these strategies. 
This draft policy is intended to align with departmental directives and a 2015 presidential memorandum on mitigating impacts on natural resources from development. It establishes standards for compensatory mitigation and minimum criteria for achieving these standards. The draft policy stresses the need to hold all compensatory mitigation strategies to equivalent and effective standards, but it would not apply to mitigation arrangements that have already been approved unless the in-lieu fee program, conservation bank, or other arrangement is modified or amended. In addition, according to FWS’s website, the draft policy seeks to improve collaboration and coordination among all interested parties when FWS is engaged in the planning and implementation of compensatory mitigation strategies. Once finalized, the draft ESA compensatory mitigation policy would revise and replace FWS’s 2003 conservation banking guidance. According to FWS officials, the agency intends to finalize the policy after the public comment period ends in October 2016. FWS tracks key information about the conservation banks it approves, such as the location and credits available, but it does not track in-lieu fee programs. FWS has identified system modifications that are needed to the U.S. Army Corps of Engineers RIBITS database to track in-lieu fee programs, but it has not fully implemented its plan to make these modifications and improve monitoring and oversight of its in-lieu fee programs. FWS has posted information about the conservation banks it approves to the U.S. Army Corps of Engineers’ RIBITS website since 2011, according to an agency official. FWS monitors the number and location of conservation banks through the RIBITS database. 
According to FWS officials, the agency uses this information for a variety of management activities, such as providing project proponents with information on available mitigation options and facilitating incidental take authorizations and permit compliance when conservation banks are used. Table 4 shows the distribution of FWS-related conservation banks across regions. Information about the number of credits available and sold is accessible to the public through the RIBITS website. As of September 2016, there were also 21 conservation banks that had sold all of their credits and, therefore, were no longer a mitigation option for project proponents. In addition, FWS can suspend a conservation bank if the sponsors of that bank fail to comply with agreed-upon parameters, such as how land will be managed for a species. As of September 2016, two conservation banks had been suspended and could not sell credits. According to FWS's National Conservation Banking Coordinator, the agency's regional and field offices do not consistently enter certain information about conservation banks into RIBITS. Specifically, that official stated that some offices do not consistently upload parts of the conservation bank instruments, such as financial assurances or annual monitoring reports. In addition, some offices do not enter information on "pending" conservation banks—those close to receiving FWS approval—while others do. Furthermore, this official stated that the agency has not issued standard operating procedures on the information FWS field offices are to enter into RIBITS. This official acknowledged the need to improve how the agency's regional and field offices enter data in RIBITS to make it more consistent in order to assist with monitoring and oversight of conservation banks. According to FWS officials, the agency intends to develop standard operating procedures after it finalizes its ESA compensatory mitigation policy. 
In contrast to conservation banks, according to FWS headquarters officials, the agency has not tracked the use of in-lieu fee programs across regions and field offices because its focus has been on the conservation banks. Without data on these programs, FWS may be unable to respond to inquiries from the public and private sectors, or to track administrative and ecological compliance by in-lieu fee program sponsors, among other things. In addition, FWS is limited in its ability to evaluate whether in-lieu fee programs are an effective strategy for conservation. However, FWS headquarters officials we interviewed said that FWS recognizes the need to track and monitor its in-lieu fee programs to provide better oversight. In February 2016, FWS signed an interagency agreement, which will be in effect for 5 years, with the U.S. Army Corps of Engineers to, among other things, modify its RIBITS database so FWS can track all in-lieu fee programs across regions and field offices. According to an FWS official, although making modifications in RIBITS to track in-lieu fee programs is an identified need, the agency has not obligated funds for these modifications and does not have a timeline for doing so. As a result, it is not clear when FWS will be able to use the RIBITS database to track its in-lieu fee programs. Federal government standards for internal control provide that management should design control activities to achieve objectives and respond to risks. To accomplish this, according to federal internal control standards, management should define the time frames for achieving the objectives. However, because it is unclear when modifications will be made to RIBITS, it is also unclear when regions and field offices will be able to enter information on in-lieu fee programs so that FWS can use the RIBITS database to track these programs. 
Until FWS establishes a timetable with milestones for modifying the RIBITS database to incorporate in-lieu fee program information, the agency will not have reasonable assurance that it will obtain relevant and reliable data on its in-lieu fee programs, which will limit its ability to effectively evaluate its in-lieu fee programs and determine the most effective strategy for conservation. Since listing the ABB as endangered in 1989, FWS has used three in-lieu fee programs to provide compensatory mitigation to help conserve the ABB in Nebraska, Oklahoma, and elsewhere in its Midwest Region. The Nebraska Ecological Services Field Office and FWS's Midwest Regional Office began using in-lieu fee programs for the ABB in 2012 and 2013, respectively. Both of these programs were operating at the time of our review. The Oklahoma Ecological Services Field Office established an in-lieu fee program in 2009 but terminated the program in 2012. In January 2012, FWS's Nebraska Ecological Services Field Office and the Nebraska Game and Parks Commission, a state agency, worked with two organizations—the Rainwater Basin Joint Venture and the Nebraska Community Foundation—to establish the Nebraska Habitat Projects Fund. The fund is an in-lieu fee program that uses the funding it receives to conserve the ABB and other species, such as migratory birds, and their habitats. In their discussions on mitigation options, FWS and project proponents may discuss making voluntary contributions to this fund as one option to mitigate potential adverse impacts on the ABB from projects in Nebraska. If FWS and the project proponent agree that making a contribution to the fund is an appropriate mitigation strategy, FWS and the project proponent prepare a written agreement that outlines what contributions will be made and for what purpose. These written agreements are then used to develop contracts between the project proponent and the Nebraska Community Foundation, which manages the funds. 
The Rainwater Basin Joint Venture works in partnership with the Nebraska Community Foundation to complete the planning, design, and implementation of conservation activities and to conduct research and monitoring activities. FWS does not have oversight responsibility for the Nebraska Habitat Projects Fund, and FWS does not determine how mitigation funds will be used for the ABB under this in-lieu fee program. However, FWS is a member of a work group composed of representatives from state agencies, such as the Nebraska Game and Parks Commission, and nongovernmental organizations, such as The Nature Conservancy and Audubon Nebraska, who work with an ABB species work group to establish criteria that are used to evaluate proposals for funding. According to FWS officials and Rainwater Basin Joint Venture documentation, in an effort to develop landscape-scale mitigation, a minimum threshold of $150,000 must be met before the Nebraska Community Foundation can expend funds for ABB conservation activities. As of October 2016, the $150,000 threshold had not been met, and therefore no expenditures have been made from the fund, according to FWS officials. In July 2013, FWS’s Midwest Regional Office entered into a memorandum of understanding with Enbridge Pipelines to implement conservation measures, including compensatory mitigation, to minimize or offset the impacts to the ABB and other species resulting from construction of the Flanagan South Pipeline, which runs through parts of Illinois, Missouri, Kansas, and Oklahoma. In July 2013, FWS also entered into a memorandum of agreement with The Conservation Fund, a nonprofit conservation organization, to manage the Enbridge Pipelines’ funds for species conservation and habitat restoration. The Conservation Fund is to use the funds to undertake mitigation projects approved by FWS or award the funds to others to undertake FWS-approved mitigation projects. 
Once FWS approves a project, The Conservation Fund is to enter into a funding agreement with those entities or undertake the project to conserve the ABB and other endangered species, such as the Indiana bat, as well as to protect migratory birds. Under this program, The Conservation Fund has provided funds to The Nature Conservancy, a nonprofit organization involved in ABB conservation efforts, to purchase land at its Tallgrass Prairie Preserve in Oklahoma to conserve ABB habitat. In February 2009, the Oklahoma Ecological Services Field Office entered into a memorandum of agreement with The Nature Conservancy creating an in-lieu fee program called the American Burying Beetle Conservation Fund. Under this program, funds from multiple sources, such as federal agencies and private companies, were to be used to acquire lands or easements within priority conservation areas, restore or manage potential ABB habitat, and support research to monitor conservation areas for the ABB. With FWS approval, The Nature Conservancy used contributions to the American Burying Beetle Conservation Fund to conduct conservation activities at its Tallgrass Prairie Preserve in Oklahoma, purchase land to expand the Preserve, and support ABB monitoring at the Preserve. According to representatives of The Nature Conservancy, it preserves the Tallgrass Prairie Preserve as a native tallgrass prairie habitat through the management of a bison herd that serves as a keystone species to restore the ecosystem. The Nature Conservancy manages the preserve to conserve all native species, but the representatives stated that the ABB benefits from this protected, heterogeneous grassland habitat. According to representatives from The Nature Conservancy, it used the American Burying Beetle Conservation Fund for various habitat management activities, including conducting prescribed burns to manage the grasslands and controlling invasive species, such as feral pigs. 
Representatives from The Nature Conservancy said that ABB populations have increased in both size and distribution across the preserve since ABBs were first found at the Tallgrass Prairie Preserve in 1999. Figure 4 shows a map of The Nature Conservancy’s Tallgrass Prairie Preserve, including land purchased with funding from FWS-approved in-lieu fee programs for the ABB. According to FWS officials, conservation banks offer a more consistent conservation effort for the ABB than the American Burying Beetle Conservation Fund because conservation bank sponsors must agree to both conduct conservation in advance of a project’s potential adverse effect on a species and conserve the land in perpetuity. However, conservation banks can take several years for bank sponsors to develop and for FWS to approve. FWS officials told us they used the American Burying Beetle Conservation Fund as an interim compensatory mitigation measure and that it contributed more to ABB conservation than previous minimization measures. The American Burying Beetle Conservation Fund operated from 2009 until July 2012, when FWS terminated it for various reasons, including concerns that the 2009 memorandum of agreement was not the appropriate mechanism to ensure effective oversight and adequate documentation of the conservation activities and fund expenditures. The total contributions and expenditures from the fund were about $1 million each. For additional information about the cash receipts and expenses of the American Burying Beetle Conservation Fund, see appendix III. The Oklahoma Ecological Services Field Office approved two conservation banks for the ABB in an effort to allow project proponents to compensate for the potential adverse impacts of their projects on the ABB and to provide long-term conservation for the species, according to agency officials. FWS approved the Muddy Boggy Conservation Bank in 2014. 
Bank representatives told us they began working with FWS to establish the bank in 2012, purchased the land in 2013, and received approval to sell credits in 2014. The Muddy Boggy Conservation Bank is approximately 3,300 acres. FWS also approved the American Burying Beetle Conservation Bank in 2014. This bank manages approximately 3,300 acres, including approximately 900 acres that it manages as part of a permittee-responsible mitigation arrangement. Customers for the banks include oil and gas companies and the Oklahoma Department of Transportation. Since they began operating in 2014, these conservation banks have submitted annual performance reports to FWS, and the agency has conducted annual on-site inspections, according to agency documentation. Representatives for both of the banks told us that they undertake activities to manage ABB habitat, rather than specifically managing for the species. For example, they said they use prescribed burns and control invasive species, such as eastern red cedar and red imported fire ants, which, if not managed, can reduce quality habitat for the ABB. According to FWS officials, the process of approving and establishing a conservation bank can take several years and requires up-front investments from bank sponsors. These officials told us FWS only reviews and approves applications and does not propose or initiate conservation banks. Conservation banks are private enterprises that are proposed by potential conservation bankers. Officials also told us that conservation banks are used in places where there is a strong demand for compensatory mitigation. FWS requires that offsets occur within a designated service area. Therefore, project proponents use conservation banks within the approved service area for the project or impact site. However, according to FWS officials, in Oklahoma, if there are no conservation banks in a project’s service area, project proponents can purchase credits at other conservation banks. 
As of November 2016, Oklahoma was the only state where conservation banks existed for the ABB. According to FWS officials, this is due to the market created by the relatively high density of the ABB population; large areas of suitable habitat; and numerous development projects, such as oil and gas well drilling and pipelines. According to representatives of the oil and gas industry in Oklahoma, companies prefer to purchase conservation bank credits rather than conduct their own permittee-responsible mitigation to avoid project delays and minimize long-term liabilities. However, representatives of the oil and gas industry we interviewed said they are concerned about the high costs of the conservation bank credits, which the conservation banks set on the basis of changing market conditions. Oil and gas industry representatives noted that from 2009 to 2012, FWS recommended contributions to the American Burying Beetle Conservation Fund that were significantly less than the current cost of credits from either of the two approved conservation banks. According to FWS officials, FWS has no control over the cost of conservation bank credits. Those costs are negotiated between the conservation bank and the purchasers. In contrast, the contributions to the American Burying Beetle Conservation Fund were based on expected survey costs, which were not always commensurate with the cost of mitigating impacts of a proposed project. According to oil and gas industry officials, the high cost of bank credits is one major reason they supported a petition to delist the ABB. According to conservation bank officials, the petition to delist the ABB has created uncertainty regarding the market for bank credits, since project proponents would no longer have a need to mitigate potential harm to the ABB to move forward with their projects if FWS delisted the ABB. FWS does not track information about its in-lieu fee programs across regions and field offices. 
As a result, FWS has limited ability to evaluate the effectiveness of these programs. FWS acknowledges this issue, and making modifications in RIBITS to track in-lieu fee programs is an identified need for the agency. However, it has not yet obligated funds to make the necessary modifications or established a timetable with milestones for modifying the RIBITS database to incorporate in-lieu fee program information. Until FWS collects relevant and reliable data on its in-lieu fee programs, the agency will not be able to evaluate the effectiveness of its programs and determine the most effective strategy for conservation. To help improve FWS’s ability to evaluate the effectiveness of its compensatory mitigation strategies and ensure that the agency appropriately plans the obligations necessary for this purpose, we recommend that the Director of the U.S. Fish and Wildlife Service establish a timetable with milestones for modifying the RIBITS database to incorporate FWS’s in-lieu fee program information. We provided a draft of this report for review and comment to the Department of the Interior. The GAO audit liaison for the Department of the Interior responded via e-mail that the U.S. Fish and Wildlife Service concurred with our recommendation. In addition, the agency provided technical comments on our draft report, which we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of the Interior, the Director of the U.S. Fish and Wildlife Service, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or fennella@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. Our objectives were to examine (1) how the Department of the Interior’s U.S. Fish and Wildlife Service (FWS) has sought to avoid and minimize potential adverse impacts on the American burying beetle (ABB) from construction and other projects and (2) what is known about FWS’s compensatory mitigation strategies and how FWS has used two of these strategies, in-lieu fee programs and conservation banks, for the ABB. In addition, you asked us to review the contributions and disbursements for a specific in-lieu fee program for the ABB. We briefed your office on the results of that review on August 30, 2016 (see the briefing slides in app. III). To conduct our work, we reviewed and analyzed relevant laws, agency policies, guidance, and other documentation related to the Endangered Species Act (ESA), compensatory mitigation strategies, and conservation efforts for the ABB. We also reviewed our prior reports on endangered species issues and the use of compensatory mitigation strategies. We interviewed FWS officials from headquarters, the Office of Law Enforcement, and the regional offices and Ecological Services field offices in states with an ABB presence, including Region 2 and the Oklahoma Ecological Services Field Office; Region 3 and the Columbia and Ohio Ecological Services Field Offices; Region 4 and the Arkansas Ecological Services Field Office; Region 5 and the New England Ecological Services Field Office; and Region 6 and the Kansas, Nebraska, and South Dakota Ecological Services Field Offices. We also interviewed officials from other relevant federal agencies, including the Department of the Interior’s Bureau of Land Management, the U.S. 
Army Corps of Engineers, the Federal Energy Regulatory Commission, and the Environmental Protection Agency; representatives from The Nature Conservancy, a nonprofit conservation organization involved in ABB conservation efforts; as well as representatives from the oil and gas industry, including representatives from private oil and gas companies, the Oklahoma Oil and Gas Association, and the Oklahoma Independent Petroleum Association. To determine how FWS has sought to avoid and minimize potential adverse impacts on the ABB, we reviewed FWS biological opinions and other official correspondence with federal and nonfederal project proponents. In addition, we reviewed the draft ESA compensatory mitigation policy that FWS issued in September 2016. We also analyzed data from FWS’s Tracking and Integrated Logging System (TAILS) on the number of consultations with FWS that have occurred about the ABB across FWS regions for fiscal years 2008 through 2015. To assess the reliability of the data in TAILS, we reviewed agency documentation about TAILS and interviewed agency officials, discussing limitations related to how specific consultation types are reported. We determined that the TAILS data on formal consultations were sufficiently reliable for our purposes. To determine what is known about FWS’s compensatory mitigation strategies and how FWS has used two of these strategies, in-lieu fee programs and conservation banks, for the ABB, we reviewed agency documentation related to compensatory mitigation, including agency guidance and policies. We conducted a site visit in April 2016 at FWS’s Oklahoma Ecological Services Field Office, which is FWS’s lead field office for the ABB, and the Tallgrass Prairie Preserve in Oklahoma, where The Nature Conservancy conserved ABB habitat. 
We requested data from FWS regarding all current FWS in-lieu fee programs for endangered and threatened species, but we determined that the data FWS provided were not sufficiently reliable for our purposes because of missing information and other errors. For example, we determined that the data FWS provided included information on some in-lieu fee programs that had been terminated, included information on some compensatory mitigation strategies that are not in-lieu fee programs, and excluded at least one in-lieu fee program that is currently in operation. We also reviewed related documentation from FWS and other federal agencies, including the Bureau of Land Management, the U.S. Army Corps of Engineers, the Federal Energy Regulatory Commission, and the Environmental Protection Agency; the Oklahoma Department of Transportation; and conservation organizations involved in the in-lieu fee programs, such as The Nature Conservancy and The Conservation Fund. FWS officials said that when conservation banks or in-lieu fee programs have been available, nearly all project proponents choose these arrangements over other compensatory mitigation strategies. Therefore, we focused on in-lieu fee programs and conservation banks for this report. We interviewed representatives from the American Burying Beetle Conservation Bank and the Muddy Boggy Conservation Bank, both of which operate for the conservation of the ABB. We also analyzed FWS data on the use of conservation banks for all species listed under the ESA, which is reported in the U.S. Army Corps of Engineers' Regulatory In-lieu fee and Bank Information Tracking System (RIBITS). To assess the reliability of data in RIBITS, we interviewed agency officials and reviewed agency documentation about RIBITS, such as user manuals, and determined that the data in RIBITS were sufficiently reliable for our purposes. 
We conducted this performance audit from November 2015 to December 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The Department of the Interior's U.S. Fish and Wildlife Service (FWS) has taken several steps to conserve and recover the American burying beetle (ABB) since it proposed listing the ABB as endangered in 1988. These steps range from developing a recovery plan for the ABB in 1991 to planning to determine, in 2017, whether FWS will (1) keep the ABB on the endangered species list; (2) reclassify the species' status from endangered to threatened, also known as downlisting; or (3) delist the ABB. Figure 5 provides a timeline of key activities related to ABB conservation. As of October 2016, the ABB was known or believed to occur naturally in seven states: Arkansas, Kansas, Nebraska, Oklahoma, Rhode Island, South Dakota, and Texas. In addition, FWS has attempted to reintroduce the ABB into three states where it was found historically: Massachusetts, Missouri, and Ohio. Table 5 provides information about the current status of the ABB in states with known ABB presence, by FWS region and field office. To reintroduce the ABB to locations where it once occurred, FWS either breeds the species in captivity or transports ABB from locations with naturally occurring populations and releases them in other states. In conjunction with state and nonprofit partners, FWS began efforts to reintroduce the ABB to Nantucket Island in Massachusetts in 1994. FWS used captive-bred ABB for this effort, and FWS officials said that they currently consider the Nantucket ABB population to be stable. 
In Ohio, FWS has attempted to reintroduce the ABB since 1998, when it began transporting naturally occurring ABB from Arkansas for release in Ohio. FWS officials said that their reintroduction efforts in Ohio have been unsuccessful, in part, because the ABBs from Arkansas that were used in the reintroduction program may not be adapted to conditions that occur farther north in Ohio. FWS is now using ABB from Nebraska instead of Arkansas to test whether these ABB are better adapted to colder winters, according to FWS officials. In Missouri, FWS has transported and released ABB to establish a nonessential, experimental population in the southwestern part of the state since 2012. FWS has documented an increasing number of ABB in Missouri each year since the reintroduction program began. FWS has not designated critical habitat for the ABB, in part, because the species is a habitat generalist, and it is still unknown what habitat type is essential for ABB conservation, according to FWS officials. FWS officials said that the agency improves its knowledge about the ABB's current range when FWS biologists, researchers from universities and nonprofit organizations, project proponents, or others conduct surveys to detect or monitor the presence of ABBs in locations where they are known or believed to occur. Consequently, the ABB's range in the United States changes over time. Figure 6 depicts the ABB's range as of October 2016 in relation to its known range at the time of its listing in 1989 and its reported historical range. In addition to the contact named above, Jeffery D. Malcolm (Assistant Director), Maria C. Belaval, Martin (Greg) Campbell, Joseph M. Capuano, Caitlin E. Cusati, Armetha (Mae) Liles, Edward (Ned) Malone, Elizabeth Martinez, Genna Mastellone, Steven R. Putansu, Anne K. Rhodes-Kline, Dan C. Royer, Jeanette M. Soares, and David A. Watsula made important contributions to this report.
The ABB is a large scavenger insect that FWS listed as endangered in 1989 under the Endangered Species Act (ESA). FWS uses various strategies to address potential adverse impacts on protected species from construction and other projects. In some cases, FWS has required project proponents to take specific steps to avoid, minimize, or compensate for a project's potential impacts on the ABB or its habitat. When these proponents make financial contributions to compensate for the impacts of these projects, FWS generally refers to it as compensatory mitigation. GAO was asked to provide information on how FWS uses different compensatory mitigation strategies. This report examines (1) how FWS has sought to avoid and minimize potential adverse impacts on the ABB from projects and (2) what is known about FWS's compensatory mitigation strategies and how FWS has used two of them, in-lieu fee programs and conservation banks, for the ABB. GAO reviewed relevant laws, policies, guidance, and conservation efforts for the ABB; analyzed FWS data on ESA consultations and the use of conservation banks; and interviewed officials from FWS, project proponents, and organizations involved in ABB conservation. To address the potential adverse impacts of construction and other projects on the American burying beetle (ABB) and its habitat, the U.S. Fish and Wildlife Service (FWS), within the Department of the Interior, first focuses on avoidance and minimization approaches. For example, to avoid impacts on ABB habitat, FWS may suggest that project proponents—public and private entities—relocate the project or part of the project to another location. If complete avoidance is not possible, FWS may suggest ways to minimize the potential impacts, such as reducing soil disturbance during construction or limiting the use of pesticides.
If avoidance and minimization actions are impractical or inadequate, then FWS may suggest compensatory mitigation strategies, which allow project proponents to choose to compensate for the potential adverse impacts of their projects. FWS uses several types of compensatory mitigation strategies, including (1) conservation banks, in which third parties invest up front in protected lands that are conserved and managed for a species, and then sell mitigation credits to project proponents, and (2) in-lieu fee programs, in which third parties generally collect money from several project proponents and conduct conservation activities for the species in a location away from the project site after the project's potential impacts have occurred. FWS has used two conservation banks in Oklahoma and three in-lieu fee programs in several states specifically to conserve the ABB. FWS tracks key information about its conservation banks, such as the location and mitigation credits available, and uses this information to help manage activities. However, FWS has not fully implemented its plan to track in-lieu fee programs. FWS signed an interagency agreement with the U.S. Army Corps of Engineers in February 2016 to modify its Regulatory In-lieu fee and Bank Information Tracking System (RIBITS) to enable FWS to track its in-lieu fee programs. However, FWS has not obligated funds for the necessary modifications or developed a timetable for doing so. Federal internal control standards provide that management should design control activities to achieve objectives and respond to risks. To accomplish this, federal internal control standards recommend that management define the time frames for how objectives will be achieved. Until FWS collects relevant and reliable data on its in-lieu fee programs, the agency will not be able to evaluate the effectiveness of its programs and determine the most effective strategy for conservation. 
To ensure that appropriate plans are made to obligate funds, GAO recommends that FWS establish a timetable with milestones for modifying RIBITS to incorporate FWS's in-lieu fee program information. FWS concurred with this recommendation.
In September 2012, we found that DHS employees reported having lower average morale than the average for the rest of the federal government, but morale varied across components and employee groups within the department. Specifically, we found that DHS employees as a whole reported lower satisfaction and engagement—the extent to which employees are immersed in their work and spending extra effort on job performance—than the rest of the federal government according to several measures. In particular, the 2011 FEVS showed that DHS employees had 4.5 percentage points lower job satisfaction and 7.0 percentage points lower engagement. Although DHS employees generally reported improvements in Job Satisfaction Index levels from 2006 to 2011 that narrowed the gap between DHS and the government average, employees continued to indicate less satisfaction than the government-wide average. For example, DHS employees reported satisfaction increased by 5 percentage points, from 59 percent in 2006 to 64 percent in 2011, but scores in both years were below the government- wide averages of 66 percent and 68 percent, respectively. As we reported in September 2012, the Partnership for Public Service analysis of FEVS data also indicated consistent levels of low employee satisfaction for DHS relative to those of other federal agencies. As with DHS’s 2011 ranking, 31st of 33 large federal agencies, the Partnership for Public Service ranked DHS 28th of 32 in 2010, 28th of 30 in 2009, and 29th of 30 in 2007 in the Best Places to Work ranking on overall scores for employee satisfaction and commitment. As we reported in September 2012, our analyses of 2011 FEVS results further indicated that average DHS-wide employee satisfaction and engagement scores were consistently lower when compared with average non-DHS employee scores in the same demographic groups, including supervisory status, pay, and agency tenure groups. 
For example, within most pay categories, DHS employees reported lower satisfaction and engagement than non-DHS employees in the same pay groups. In addition, we reported that DHS was not more likely than other agencies to employ the types of staff who tended to have lower morale across all agencies. Instead, employees in the various groups we analyzed had lower morale at DHS than the same types of employees at other agencies. We concluded that the gap between DHS and government-wide scores may be explained by factors unique to DHS, such as management practices and the nature of the agency’s work, or by differences among employees we could not analyze. In September 2012, we also found that levels of satisfaction and engagement varied across components, with some components reporting scores above the non-DHS averages. For example, employees from CBP and the Coast Guard were 1 and 1.5 percentage points more satisfied than the rest of the government, respectively, according to the 2011 FEVS Job Satisfaction Index. We further reported that several components with lower morale, such as TSA and ICE, made up a substantial share of FEVS respondents at DHS, and accounted for a significant portion of the overall difference between the department and other agencies. For example, survey respondents representing the approximately 55,000 employees at TSA and approximately 20,000 employees at ICE were on average 11.6 and 7.9 percentage points less satisfied than the rest of the government, respectively. Job satisfaction and engagement varied within components as well. For example, employees in TSA’s Federal Security Director staff reported higher satisfaction (by 13 percentage points) and engagement (by 14 percentage points) than TSA’s airport security screeners. 
Within CBP, Border Patrol employees were 8 percentage points more satisfied and 12 percentage points more engaged than CBP field operations employees. On the basis of our findings, we concluded that given this variation across and within components, it was imperative that DHS understand and address employee morale problems through targeted actions that address employees' underlying concerns. In our September 2012 report, we also found that DHS and the selected components had taken steps to determine the root causes of employee morale problems and implemented corrective actions, but that the department could strengthen its survey analyses and metrics for action plan success. To understand morale problems, DHS and selected components took steps, such as implementing an exit survey and routinely analyzing FEVS results. The components we selected for review—ICE, TSA, the Coast Guard, and CBP—conducted varying levels of analyses regarding the root causes of morale to understand leading issues that may relate to morale. DHS and the selected components planned actions to improve FEVS scores based on analyses of survey results, but we found that these efforts could be enhanced. Specifically, 2011 DHS-wide survey analyses did not include evaluations of demographic group differences on morale-related issues, the Coast Guard did not perform benchmarking analyses, and it was not evident from documentation the extent to which DHS and its components used root cause analyses in their action planning to address morale problems. As we reported in September 2012, without these elements, DHS risked not being able to address the underlying concerns of its varied employee population. We therefore recommended that DHS's OCHCO and component human capital officials examine their root cause analysis efforts and, where absent, add the following: comparisons of demographic groups, benchmarking against similar organizations, and linkage of root cause findings to action plans.
In addition, in September 2012, we found that despite having broad performance metrics in place to track and assess DHS employee morale on an agency-wide level, DHS did not have specific metrics within the action plans that were consistently clear and measurable. For example, one way the Coast Guard intended to address low-scoring FEVS topics was through improving employee training options, which it sought to measure by whether it developed e-learning courses for new employees. However, we found that this measure lacked key information that would make it more clear—namely, the course content or the specific training being provided—and did not list quantifiable or other measure values to determine when the goal had been reached, such as a target number of new employees who would receive training. As a result, we concluded that DHS’s ability to assess its efforts to address employee morale problems and determine if changes should be made to ensure progress toward achieving its goals was limited. To help address this concern, we recommended that DHS components establish metrics of success within their action plans that are clear and measurable. DHS concurred with our two recommendations and has taken steps since September 2012 to address them. However, as of December 2013, DHS has not yet fully implemented these recommendations. Enhancing root cause analysis: As of December 2013, DHS OCHCO had created a checklist for components to consult when creating action plans to address employee survey results. The checklist includes instructions to clearly identify the root cause associated with each action item and to indicate whether the action addresses the root cause. In addition, according to DHS OCHCO officials, OCHCO, CBP, ICE and TSA completed demographic analysis of the 2012 FEVS results, but were not certain of the extent to which other components had completed analyses. 
However, according to these officials, difficulties in identifying comparable organizations limited components’ benchmarking efforts. For example, while CBP identified a Canadian border security organization with which CBP officials intend to benchmark employee survey results, other DHS components did not find organizations, such as airport security organizations, against which to benchmark. OCHCO officials did not elaborate, however, on why it was difficult to find organizations against which to benchmark. We recognize that there can be some challenges associated with identifying organizations against which to benchmark. However, we continue to believe that DHS components could benefit from doing so as, according to the Partnership for Public Service, benchmarking agency survey results against those of similar organizations can provide a point of reference for improvements. DHS components and DHS-wide efforts have not yet fully examined their root cause analysis efforts and, where absent, added comparisons of demographic groups, benchmarking against similar organizations, and linkage of root cause findings to action plans, as we recommended in September 2012. Establishing metrics of success: OCHCO officials stated that, as of December 2013, they had directed component human capital officials to reevaluate their action plans to ensure that metrics of success were clear and measurable. However, in December 2013 we reviewed the 2013 action plans produced by the four DHS components we selected for our September 2012 report—ICE, CBP, TSA, and the Coast Guard—and found that their measures of success did not contain clear and measurable targets. 
Of the 53 measures of success reviewed across the four components, 16 were unclear and 35 lacked measurable targets. For example, one action item, to create a clear and compelling direction for ICE, is to be implemented by creating a work group consisting of the top six leaders in the agency together with the heads of ICE's policy and public affairs offices to create a clear and compelling mission and priorities to drive the agency's efforts. To determine whether ICE succeeds in implementing this action item, ICE's measures of success include: (1) the agency creates a mission statement and priorities that guide employee focus and behaviors; (2) ICE's first several layers of leadership indicate full support for the hard choices the direction-setting causes; (3) test focus group results; and (4) a pulse survey. However, it is not clear, for example, what the "test focus group results" and "pulse survey" measures of success are measuring, and there are no measurable targets against which to assess success. By ensuring that DHS and component action plans contain measures of success that are clear and include measurable targets, DHS can better position itself to determine if its action plans are effective. Despite DHS's efforts, since publication of our September 2012 report, DHS employee morale has declined, and the gap between DHS and government-wide scores has widened in key areas. Specifically, FEVS fiscal year 2012 and 2013 survey results released since our 2012 report indicate that DHS employees continue to report lower average satisfaction than the average for the rest of the federal government. For example, as shown in figure 1, 2013 FEVS data show that DHS employee satisfaction decreased 7 percentage points since 2011, which is more than the government-wide decrease of 4 percentage points over that same period of time. As a result, DHS employee satisfaction in 2013 is 7 percentage points lower than the government-wide average, a difference not seen since 2006.
Moreover, consistent with our reporting in September 2012, morale varied across components, as shown in table 1. For example, while the Federal Law Enforcement Training Center and U.S. Citizenship and Immigration Services scored above the government-wide average with respect to employee satisfaction, TSA and the National Protection and Programs Directorate scored below the government-wide average. In addition, DHS has also consistently scored lower than the government-wide average on the FEVS Leadership and Knowledge Management Index, which indicates the extent to which employees hold their leadership in high regard, both overall and on specific facets of leadership. For example, the index includes questions such as whether leaders generate high levels of motivation and commitment in the workforce, and whether employees have a high level of respect for their organization's senior leaders. From fiscal years 2006 through 2013, DHS scored lower than the government-wide average each year for which survey data are available. While government-wide scores for this index have declined 3 percentage points since 2011, DHS's scores have decreased 5 percentage points, widening the gap between DHS and the government-wide average to 9 percentage points. See figure 2 for additional detail. In December 2013, DHS senior officials provided a recent analysis they performed of 2012 FEVS results that indicated DHS low morale issues may persist because of employee concerns about senior leadership and supervisors, among other things, such as whether their talents are being well-used. DHS's analysis of the 2012 FEVS results identified survey questions that correlated most strongly with index measures, such as the Job Satisfaction and Employee Engagement indexes. As noted in DHS's analysis, the evaluation assessed the correlations among survey items, but did not attempt to identify the root cause for the survey results.
For example, DHS found that the survey question, “How satisfied are you with the policies and practices of your senior leaders?” was more strongly correlated with the Job Satisfaction Index. However, DHS did not do further research to determine the specific senior leader policies and practices that affected satisfaction or explain why this effect occurred. According to DHS senior officials, on the basis of the results of this analysis and the Acting Secretary of Homeland Security’s review of the 2013 FEVS results, the department plans to launch additional employee surveys to probe perspectives on departmental leadership. As we have previously reported, given the critical nature of DHS’s mission to protect the security and economy of our nation, it is important that DHS employees be satisfied with their jobs so that DHS can retain and attract the talent required to complete its work. Accordingly, it is important for DHS to continue efforts to understand the root causes behind employee survey results. In February 2012, we reported that DHS SES vacancy rates, while reaching a peak of 25 percent in 2006, had generally declined since that time—from 25 percent in fiscal year 2006 to 10 percent at the end of fiscal year 2011, as shown in figure 3. Since February 2012, DHS data indicate that SES vacancy percentages have remained relatively stable. In particular, according to DHS data, at the end of fiscal year 2012 the SES vacancy rate was approximately 9 percent, and approximately 11 percent at the end of fiscal year 2013. Although there is no generally agreed-upon standard for acceptable vacancy rates, to provide perspective, in our February 2012 report we compared DHS’s rates with those of other agencies subject to the Chief Financial Officers (CFO) Act of 1990, as amended. 
From fiscal years 2006 through 2010—the most recent year for which federal-wide vacancy-rate data were available at the time of our February 2012 report—DHS vacancy rates were at times statistically higher than those at other CFO Act agencies. For example, in fiscal year 2010, the DHS SES vacancy rate at the end of the year was 17 percent and ranged from a low of 8.4 percent to a high of 20.7 percent during the course of the year. This compares with an average vacancy rate across other CFO Act agencies of 9.0 percent at the end of fiscal year 2010. Further, as we reported in February 2012, vacancy rates varied widely across DHS components. For example, at the end of fiscal year 2011, 20 percent of SES positions at the Federal Emergency Management Agency (FEMA) and 19.5 percent of SES-equivalent positions at TSA were vacant, compared with 5 percent at the Coast Guard and zero percent at the U.S. Secret Service. Vacancy rates at components generally declined from 2006 through 2011. (See 31 U.S.C. § 901, identifying the 24 agencies subject to requirements of the CFO Act; as of 2009, CFO Act agencies employed 98 percent of all federal employees.) Component officials identified a number of factors that may have contributed to component SES vacancy rates during that period, including increases in SES allocations, events like presidential transitions, and organizational factors such as reorganizations. We also found that in fiscal year 2010, DHS's senior leadership attrition rate was 11.4 percent, and that from fiscal years 2006 through 2010, the most frequent separation types were retirements and resignations. DHS's attrition rates were statistically higher than the average of other CFO Act agencies in 2006, 2007, and 2009, but not statistically different in 2008 and 2010. OCHCO officials told us in December 2013 that while they no longer identify increases in allocations or organizational factors as significant to SES vacancy rates, budgetary constraints can present challenges. For example, these officials stated that budgetary constraints make it difficult for the department to fund allocated positions.
In addition, DHS data provided in December 2013 indicate that the number of vacant DHS political positions, including positions that do and do not require Senate confirmation, doubled from 13 in fiscal year 2012 to 26 in fiscal year 2013. From fiscal year 2012 to 2013, the total number of filled political positions decreased from 73 to 56. In addition, some political positions were filled temporarily through employees serving in “acting” positions. In particular, DHS data provided in December 2013 indicate that 3 of 13 vacated positions were filled with personnel in acting positions at the end of fiscal year 2012 and 10 of 26 positions were filled in this manner at the end of fiscal year 2013. DHS has efforts under way to enhance senior leadership training and hiring, but it is too early to assess their effectiveness at reducing vacancy rates. In February 2012, we reported that DHS had (1) implemented a simplified pilot hiring process aimed at attracting additional qualified applicants and planned to expand the method for all SES, and (2) implemented a centralized SES candidate development program aimed at providing a consistent approach to leadership training. According to DHS officials, as of December 2013, the pilot hiring process had been made available to all DHS components, but the department had not performed analysis to assess the process’ impact on hiring. In addition, officials stated that in 2013, the first class of SES candidates had completed the candidate development program; however, the program’s impact on leadership training could not yet be determined. Chairman McCaul, Ranking Member Thompson, and members of the committee, this completes my prepared statement. I would be happy to respond to any questions you may have at this time. For questions about this statement, please contact David C. Maurer at (202) 512-9627 or maurerd@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include Joseph P. Cruz (Assistant Director), Ben Atwater, Katherine Davis, Tracey King, Thomas Lombardi, Taylor Matheson, Jeff Tessin, Julia Vieweg, and Yee Wong. Key contributors for the previous work that this testimony is based on are listed in each product. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
DHS is the third-largest cabinet-level department in the federal government, with more than 240,000 employees situated throughout the nation. Employees engage in a broad range of jobs to support its missions, including aviation and border security, emergency response, cybersecurity, and critical infrastructure protection, among others. Since it began operations in 2003, DHS has faced challenges in implementing human capital functions, and its employees have reported having low job satisfaction. In addition, Congress has raised questions about DHS's ability to hire and retain senior executives. This testimony addresses (1) how DHS's employees' workforce satisfaction compares with that of other federal government employees and the extent to which DHS is taking steps to improve employee morale, and (2) vacancies in DHS senior leadership positions. This statement is based on products GAO issued in February 2012 and September 2012 and selected updates conducted in December 2013. GAO analyzed FEVS results and DHS vacancy data for fiscal years 2012 and 2013 and interviewed DHS officials. In September 2012, GAO reported that Department of Homeland Security (DHS) employees reported having lower average morale than the average for the rest of the federal government, but morale varied across components. Specifically, GAO found that, according to the Office of Personnel Management's 2011 Federal Employee Viewpoint Survey (FEVS), DHS employees had 4.5 percentage points lower job satisfaction and 7.0 percentage points lower engagement--the extent to which employees are immersed in their work and spending extra effort on job performance. Several components with lower morale, such as the Transportation Security Administration, made up a substantial share of FEVS respondents at DHS and accounted for a significant portion of the overall difference between the department and other agencies.
In September 2012, GAO recommended that DHS take action to better determine the root cause of low employee morale, and where absent, add benchmarking against similar organizations, among other things. Since September 2012, DHS has taken a number of actions intended to improve employee morale, such as directing component human capital officials to reevaluate their action plans to ensure that metrics of success are clear and measurable. In December 2013, GAO found that DHS has actions underway to address GAO's recommendations but DHS has not fully implemented them. It will be important to do so, as DHS employee job satisfaction declined in fiscal years 2012 and 2013 FEVS results. Specifically, 2013 FEVS data show that DHS employee satisfaction decreased 7 percentage points since 2011, which is more than the government-wide decrease of 4 percentage points over the same time period. As a result, the gap between average DHS employee satisfaction and the government-wide average widened to 7 percentage points. DHS has also consistently scored lower than the government-wide average on the FEVS Leadership and Knowledge Management index, which indicates the extent to which employees hold their leadership in high regard. Since 2011, DHS's scores for this index have decreased 5 percentage points, widening the gap between the DHS average and the government-wide average to 9 percentage points. In February 2012, GAO reported that DHS Senior Executive Service (SES) vacancy rates, while reaching a peak of 25 percent in 2006, had generally declined, reaching 10 percent at the end of fiscal year 2011. GAO also reported that component officials identified a number of factors that may have contributed to component SES vacancy rates during that time period, including increases in SES allocations, events like presidential transitions, and organizational factors such as reorganizations. 
To help reduce SES vacancy rates, DHS has (1) implemented a simplified pilot hiring process aimed at attracting additional qualified applicants and planned to expand the method for all SES, and (2) implemented a centralized SES candidate development program aimed at providing a consistent approach to leadership training. As of December 2013, DHS had made the pilot process available to all components, but had not yet performed analysis of these efforts' effectiveness at reducing SES vacancy rates which, according to DHS data, have remained relatively steady since GAO's February 2012 report--11 percent at the end of fiscal year 2013. GAO has made recommendations in prior reports for DHS to strengthen its analysis of low employee morale, and identify clear and measurable metrics for action plan success. DHS concurred with these recommendations and has reported actions under way to address them. GAO provided a copy of new information in this statement to DHS for review. DHS confirmed the accuracy of this information.
In March 2014 and April 2015, we reported that CBP had made progress in deploying programs under the Arizona Border Surveillance Technology Plan, but that CBP could take additional action to strengthen its management of the Plan and the Plan's various programs. The Plan's seven acquisition programs include fixed and mobile surveillance systems, agent portable devices, and ground sensors. Its three highest-cost programs, which represent 97 percent of the Plan's estimated cost, are the Integrated Fixed Tower (IFT), Remote Video Surveillance System (RVSS), and Mobile Surveillance Capability (MSC). In March 2014, we found that CBP had a schedule for each of the Plan's seven programs, and that four of the programs would not meet their originally planned completion dates. We also found that some of the programs had experienced delays relative to their baseline schedules, as of March 2013. More recently, in our April 2015 assessment of DHS's major acquisition programs, we reported on the status of the IFT program in particular, noting that from March 2012 to September 2014, the program's initial operational capability date had slipped from the end of September 2013 to the end of September 2015. CBP officials said that this slip occurred because the program released its request for proposals behind schedule, and then received more proposals than anticipated. The subsequent bid protest extended the slip. CBP officials said these delays contributed to the IFT's full operational capability slip, but funding shortfalls are the major contributor to the delay. Originally, full operational capability was scheduled to occur by September 2015, but as of December 2014, it was scheduled for March 2022. The IFT program anticipated it would receive less than half the fiscal year 2015 funding it needed to remain on track, and it anticipated its funding plan would be reduced further in the future.
As a result of this expected funding shortage, the program anticipated it would be able to deliver 24 of 52 planned IFT units with funding through 2020, and that it planned to deploy the IFT units to three of the six original Border Patrol Station areas of responsibility. Furthermore, the Chief of the Border Patrol had informed the program that 12 of the 28 remaining IFT units are not needed given changing threats. With regard to schedules, scheduling best practices are summarized into four characteristics of reliable schedules—comprehensive, well constructed, credible, and controlled (i.e., schedules are periodically updated and progress is monitored). We assessed CBP’s schedules as of March 2013 for the three highest-cost programs and found in March 2014 that schedules for two of the programs at least partially met each characteristic (i.e., satisfied about half of the criterion), and the schedule for the other program at least minimally met each characteristic (i.e., satisfied a small portion of the criterion). For example, the schedule for the IFT program partially met the characteristic of being credible in that CBP had performed a schedule risk analysis for the program, but the risk analysis was not based on any connection between risks and specific activities. For the MSC program, the schedule minimally met the characteristic of being controlled in that it did not have valid baseline dates for activities or milestones by which CBP could track progress. We recommended that CBP ensure that scheduling best practices are applied to the IFT, RVSS, and MSC schedules. DHS concurred with the recommendation and stated that CBP planned to ensure that scheduling best practices would be applied as far as practical when updating the three programs’ schedules. In May 2015, CBP provided us a summary of its completed and planned milestones for the IFT, RVSS, and MSC programs. 
However, CBP has not provided us with a complete program schedule for the IFT, RVSS, and MSC, and, therefore, we cannot determine the extent to which the agency has followed best practices when updating the respective schedules. In March 2014, we also found that CBP had not developed an Integrated Master Schedule for the Plan in accordance with best practices. Rather, CBP had used separate schedules for each program to manage implementation of the Plan, as CBP officials stated that the Plan contains individual acquisition programs rather than integrated programs. However, collectively these programs are intended to provide CBP with a combination of surveillance capabilities to be used along the Arizona border with Mexico, and resources are shared among the programs. According to scheduling best practices, an Integrated Master Schedule is a critical management tool for complex systems that involve a number of different projects, such as the Plan, to allow managers to monitor all work activities, how long activities will take, and how the activities are related to one another. We concluded that developing and maintaining an integrated master schedule for the Plan could help provide CBP a comprehensive view of the Plan and help CBP better understand how schedule changes in each individual program could affect implementation of the overall plan. We recommended that CBP develop an integrated master schedule for the Plan. CBP did not concur with this recommendation and maintained that an integrated master schedule for the Plan in one file undermines the DHS-approved implementation strategy for the individual programs making up the Plan, and that the implementation of this recommendation would essentially create a large, aggregated program, and effectively create an aggregated “system of systems.” DHS further stated that a key element of the Plan has been the disaggregation of technology procurements. 
However, as we noted in the report, collectively these programs are intended to provide CBP with a combination of surveillance capabilities to be used along the Arizona border with Mexico. Moreover, while the programs themselves may be independent of one another, the Plan’s resources are being shared among the programs. We continue to believe that developing an integrated master schedule for the Plan is needed. Developing and maintaining an integrated master schedule for the Plan could allow CBP insight into current or programmed allocation of resources for all programs as opposed to attempting to resolve any resource constraints for each program individually. In addition, in March 2014, we reported that the life-cycle cost estimates for the Plan reflected some, but not all, best practices. Cost-estimating best practices are summarized into four characteristics—well documented, comprehensive, accurate, and credible. Our analysis of CBP’s estimate for the Plan and estimates completed at the time of our review for the two highest-cost programs—the IFT and RVSS programs— showed that these estimates at least partially met three of these characteristics: well documented, comprehensive, and accurate. In terms of being credible, these estimates had not been verified with independent cost estimates in accordance with best practices. We concluded that ensuring that scheduling best practices were applied to the programs’ schedules and verifying life-cycle cost estimates with independent estimates could help better ensure the reliability of the schedules and estimates, and we recommended that CBP verify the life-cycle cost estimates for the IFT and RVSS programs with independent cost estimates and reconcile any differences. 
DHS concurred with this recommendation but stated that, at this point, it does not believe there would be a benefit in expending funds to obtain independent cost estimates, and that if the costs realized to date continue to hold, there may be no requirement or value added in conducting full-blown updates with independent cost estimates. We recognize the need to balance the cost and time to verify the life-cycle cost estimates with the benefits to be gained from verification with independent cost estimates. However, we continue to believe that independently verifying the life-cycle cost estimates for the IFT and RVSS programs and reconciling any differences, consistent with best practices, could help CBP better ensure the reliability of the estimates. As of May 2015, CBP officials stated that the agency plans to update the life-cycle cost estimates for its three highest-cost programs under the Plan, including IFT and RVSS, by the end of calendar year 2015. We reported in March 2014 that CBP identified the mission benefits of its surveillance technologies, as we recommended in November 2011. More specifically, CBP had identified mission benefits of surveillance technologies to be deployed under the Plan, such as improved situational awareness and agent safety. However, we also reported that the agency had not developed key attributes for performance metrics for all surveillance technology to be deployed as part of the Plan, as we recommended in November 2011. As of May 2015, CBP had identified a set of potential key attributes for performance metrics for all technologies to be deployed under the Plan; however, CBP officials stated that this set of measures was under review as the agency continues to refine the measures to better inform the nature of the contributions and impacts of surveillance technology on its border security mission. 
While CBP has yet to apply these measures, CBP established a time line for developing performance measures for each technology. CBP officials stated that by the end of fiscal year 2015, baselines for each performance measure will be developed, at which time the agency plans to begin using the data to evaluate the individual and collective contributions of specific technology assets deployed under the Plan. Moreover, CBP plans to establish a tool by the end of fiscal year 2016 that explains the qualitative and quantitative impacts of technology and tactical infrastructure on situational awareness in specific areas of the border environment. While these are positive steps, until CBP completes its efforts to fully develop and apply key attributes for performance metrics for all technologies to be deployed under the Plan, it will not be able to fully assess its progress in implementing the Plan and determine when mission benefits have been fully realized. Moreover, in March 2014, we found that CBP does not capture complete data on the contributions of these technologies, which in combination with other relevant performance metrics or indicators could be used to better determine the contributions of CBP’s surveillance technologies and inform resource allocation decisions. Although CBP has a field within its Enforcement Integrated Database for maintaining data on whether technological assets, such as SBInet surveillance towers, and nontechnological assets, such as canine teams, assisted or contributed to the apprehension of illegal entrants and seizure of drugs and other contraband, according to CBP officials, Border Patrol agents were not required to record these data. This limited CBP’s ability to collect, track, and analyze available data on asset assists to help monitor the contribution of surveillance technologies, including its SBInet system, to Border Patrol apprehensions and seizures and inform resource allocation decisions. 
We recommended that CBP require data on asset assists to be recorded and tracked within its database and that, once these data were required to be recorded and tracked, CBP analyze available data on apprehensions and technological assists, in combination with other relevant performance metrics or indicators, as appropriate, to determine the contribution of surveillance technologies to CBP’s border security efforts. CBP concurred with our recommendations and has taken steps to address them. In June 2014, in response to our recommendation, CBP issued guidance informing Border Patrol agents that the asset assist data field within its database was now a mandatory data field. Agents are required to enter any assisting surveillance technology or other equipment before proceeding. While this is a positive step, to fully address our recommendations, CBP needs to analyze data on apprehensions and seizures, in combination with other relevant performance metrics, to determine the contribution of surveillance technologies to its border security mission. In addition, with regard to fencing and tactical infrastructure, CBP reported that from fiscal year 2005 through May 2015, the total miles of vehicle and pedestrian fencing along the 2,000-mile U.S.-Mexico border increased from approximately 120 miles to 652 miles. With the completion of the new fencing and other tactical infrastructure, DHS is now responsible for maintaining this infrastructure, including repairing breached sections of fencing, which cost the department at least $7.2 million in 2010, as reported by CBP. Moreover, we have previously reported on CBP’s efforts to assess the impact of fencing and tactical infrastructure on border security. Specifically, in our May 2010 and September 2009 reports, we found that CBP had not accounted for the impact of its investment in border fencing and infrastructure on border security. 
CBP had reported an increase in control of southwest border miles, but could not account separately for the impact of the border fencing and other infrastructure. In September 2009, we recommended that CBP determine the contribution of border fencing and other infrastructure to border security. DHS concurred with our recommendation, and in response, CBP contracted with the Homeland Security Studies and Analysis Institute to conduct an analysis of the impact of tactical infrastructure on border security. To effectively carry out their respective border security missions, CBP and ICE agents and officers require interoperable communications—the capability of different electronic communications systems to readily connect with one another to enable timely communications—with one another and with state and local agencies, as we reported in March 2015. In 2008, DHS components, including CBP and ICE, initiated individual TACCOM modernization programs to upgrade radio systems that were past expected service life to improve the performance of these systems and to help achieve interoperability across federal, state, and local agencies that are responsible for securing the border. In March 2015, we reported that from 2009 through 2013, CBP completed full modernization projects in 4 of the 9 sectors that constitute the southwest border. In these 4 sectors, Yuma, Tucson, Rio Grande Valley, and El Paso, CBP has (1) upgraded outdated analog tactical communications equipment and infrastructure to digital systems and (2) expanded coverage and provided capacity enhancements by procuring additional equipment and building out new tower sites in areas where CBP agents operate that were not previously covered with existing infrastructure. In 2009, CBP also revised its modernization approach for all remaining sectors, halting the addition of any new tower sites, and adding a project known as Digital in Place (DIP) as a capstone to this program. 
The scope of the DIP project entails one-for-one replacements of analog systems with digital systems and does not provide additional coverage or capacity enhancements. CBP plans to implement DIP in the remaining 5 sectors along the southwest border that did not receive full modernization upgrades. As of May 2015, DIP projects had been completed in 3 of the 5 sectors along the southwest border—Big Bend, Laredo, and Del Rio—and were under way in other locations across the nation. According to CBP, because DIP does not include new site build-outs, among other things, this approach will greatly reduce the costs associated with the full modernization approach and is expected to be completed in a relatively shorter time period. In March 2015, however, we found that CBP had not established a plan to monitor the performance of its modernized radio systems; such information could help CBP better identify any challenges with use of the system and assess system performance. For example, although CBP collects information on radio system availability and maintenance, CBP officials stated that they have not used this information to assess overall system performance to determine the extent to which upgraded radio systems are meeting user needs or to identify areas in need of corrective action. According to CBP officials, the agency had not yet analyzed available data to determine the extent to which upgraded radio systems are meeting user needs or to identify areas in need of corrective action because complete operational data have not been collected for all sites to which radio systems were deployed and because these data are maintained across different repositories that are not currently linked together. CBP officials recognized the need to collect sufficient data to monitor radio system performance and, at the time of our report, stated that the agency was taking steps to address this need by collecting data in recently modernized sites. They further stated that once the data had been collected, the agency planned to consolidate these data in a central repository. 
Moreover, in March 2015 we found that most of the groups of CBP radio users we met with reported experiencing challenges relating to operational performance. For example, 7 of the 10 groups of CBP radio users we met with in the Tucson, Rio Grande Valley, and El Paso sectors stated that coverage gaps continued to affect their ability to communicate, even after the upgrades were completed. Specifically, 2 groups stated that coverage in some areas seemed to be worse after the upgrades were completed, 4 groups stated that coverage gaps had been reduced but continued to exist after the upgrades, and 1 group stated that while coverage had improved in some areas, the group did not receive the coverage enhancements it expected to receive, especially in critical areas. We recommended in March 2015 that CBP develop a plan to monitor the performance of its deployed radio systems. DHS concurred with this recommendation and stated that it will work to complete a CBP Land Mobile Radio System Performance Monitoring Plan by December 31, 2015. We also found in March 2015 that ICE does not have complete information to effectively manage its TACCOM modernization program. Specifically, we reported that ICE has 58 completed, ongoing, or planned projects under its TACCOM modernization program and has taken some actions to modernize its TACCOM radio systems, including along the southwest border. Specifically, according to ICE officials, the agency has replaced individual analog TACCOM radios and equipment with digital systems across all 26 ICE regions, including the southwest border regions. In addition, while ICE has completed full modernization projects—which entail expanding coverage and capacity by building new sites—in other regions across the United States, it had not developed plans to modernize any southwest border regions. 
Instead, to meet the needs of ICE radio users in the southwest border regions, ICE officials stated that the agency’s strategy focused on leveraging other agency infrastructure in areas where ICE does not have infrastructure until funding is approved to initiate modernization projects in these regions. For example, in Yuma and Tucson, ICE officials stated that the agency primarily uses CBP’s radio system. Further, we found that while ICE has developed some documentation for the individual projects, such as individual project plans, and provided us with an integrated master schedule for the 58 ongoing, planned, and completed projects, the agency had not documented an overall plan to manage its TACCOM modernization program and provide oversight across all projects. For example, ICE officials were unable to provide documentation that all TACCOM equipment had been upgraded to digital systems. Additionally, our interviews with groups of ICE radio users showed that agency efforts to upgrade its TACCOM technology— including leveraging other agency infrastructure in areas where ICE does not have infrastructure—may not be supporting ICE radio user needs along the southwest border. For example, 2 of the 3 groups of ICE radio users we met with in Tucson, Rio Grande Valley, and El Paso that operate on CBP land-mobile radio networks stated that coverage was worse after the upgrades or did not meet ICE radio user needs because the new system did not provide the capabilities the agency promised to deliver. The third group stated that CBP’s modernization project upgrades enhanced coverage in a limited capacity but created new challenges for ICE because of the increase in communication traffic. 
Specifically, ICE radio users in this location stated that since they are using CBP channels, Border Patrol has priority of use, so when there is too much traffic on a channel, ICE radio users are unable to access the channel or get kicked off the system and hear a busy signal when attempting to use their radios. All 4 groups of ICE radio users we met with stated that operability and interoperability challenges frequently compromised their investigations and resulted in unacceptable risks to officer safety. We reported that ICE officials agreed that ICE radio user coverage needs had not been met in the southwest border areas and at the time of our report stated that the agency was taking steps to assess radio user needs in these locations. Specifically, ICE officials stated that they were soliciting information from radio users on their operational needs and briefing ICE management to inform future decisions about ICE coverage and funding needs. However, at that time ICE officials also stated that there were no plans for creating a program plan to guide and document these efforts. We recommended that ICE develop a program plan to ensure that the agency establishes the appropriate documentation of resource needs, program goals, and measures to monitor the performance of its deployed radio systems. DHS concurred with this recommendation. In response to our recommendation, DHS stated that ICE’s Office of the Chief Information Officer will develop a program to facilitate, coordinate, and maintain ICE’s deployed radio systems, and will ensure that the agency establishes the proper documentation of resource needs, defines program goals, and establishes measures to monitor performance by January 31, 2016. We also concluded in March 2015 that CBP and ICE could do more to ensure the agencies are meeting the training needs of all CBP and ICE radio users. 
We reported that CBP provided training to its agents and officers on upgraded radio systems in each southwest border location that received upgrades. However, 8 of 14 CBP radio user groups we met with suggested that radio users be provided with additional radio training to enhance their proficiency in using radio systems. Further, we found that CBP does not know how many radio users are in need of training. We recommended in March 2015 that CBP (1) develop and implement a plan to address any skills gaps for CBP agents and officers related to understanding the new digital radio systems and interagency radio use protocols, and (2) develop a mechanism to verify that all Border Patrol and Office of Field Operations radio users receive radio training. DHS concurred with these recommendations and estimated a completion date of March 31, 2016. We also found that ICE provided training on the upgraded radio systems in one location, but 3 of the 4 ICE radio user groups we met with in field locations stated that additional training would help address challenges experienced by radio users. Further, ICE officials stated that they did not track the training that the agency provided. We recommended in March 2015 that ICE (1) develop and implement a plan to address any skills gaps for ICE agents related to understanding the new digital radio systems and interagency radio use protocols, and (2) develop a mechanism to verify that all ICE radio users receive radio training. DHS concurred with these recommendations. In response to these recommendations, DHS stated that ICE will propose an increase in training for new agents and will develop a mechanism to verify that all ICE radio users receive radio training by March 31, 2016. Our March 2012 report on OAM assets highlighted several areas the agency could address to better ensure the mix and placement of assets is effective and efficient. 
These areas included: (1) documentation clearly linking deployment decisions to mission needs and threats, (2) documentation on the assessments and analysis used to support decisions on the mix and placement of assets, and (3) consideration of how deployment of border technology will affect customer requirements for air and marine assets across locations. Specifically, our March 2012 report found that OAM had not documented significant events, such as its analyses to support its asset mix and placement across locations, and as a result, lacked a record to help demonstrate that its decisions to allocate assets were the most effective ones in fulfilling customer needs and addressing threats, among other things. While OAM’s Fiscal Year 2010 Aircraft Deployment Plan stated that OAM deployed aircraft and maritime vessels to ensure its forces were positioned to best meet the needs of CBP field commanders and respond to the latest intelligence on emerging threats, OAM did not have documentation that clearly linked the deployment decisions in the plan to mission needs or threats. We also found that OAM did not provide higher rates of support to locations Border Patrol identified as high priority, indicating that a reassessment of OAM’s resource mix and placement could help ensure that it meets mission needs, addresses threats, and mitigates risk. OAM officials stated that while they deployed a majority of assets to high-priority sectors, budgetary constraints, other national priorities, and the need to maintain presence across border locations limited overall increases in assets or the amount of assets they could redeploy from lower-priority sectors. While we recognized OAM’s resource constraints, the agency did not have documentation of analyses assessing the impact of these constraints and whether actions could be taken to improve the mix and placement of assets within them. 
Thus, the extent to which the deployment of OAM assets and personnel, including those assigned to the southwest border, most effectively utilized OAM’s constrained assets to meet mission needs and address threats was unclear. We also found in March 2012 that OAM did not document assessments and analyses to support the agency’s decisions on the mix and placement of assets. DHS’s 2005 aviation management directive requires operating entities to use their aircraft in the most cost-effective way to meet requirements. Although OAM officials stated that they factored cost-effectiveness considerations, such as efforts to move similar types of aircraft to the same locations to help reduce maintenance and training costs, into deployment decisions, OAM did not have documentation of analyses it performed to make these decisions. OAM headquarters officials stated that they made deployment decisions during formal discussions and ongoing meetings in close collaboration with Border Patrol, and considered a range of factors such as operational capability, mission priorities, and threats. OAM officials said that while they generally documented final decisions affecting the mix and placement of assets, they did not document assessments and analyses to support these decisions. In addition, we reported that CBP and DHS had ongoing interagency efforts under way to increase air and marine domain awareness across U.S. borders through deployment of technology that may decrease Border Patrol’s use of OAM assets for air and marine domain awareness. However, at the time of our review, OAM was not planning to assess how technology capabilities could affect the mix and placement of air and marine assets until the technology has been deployed. Specifically, we concluded that Border Patrol, CBP, and DHS had strategic and technological initiatives under way that would likely affect customer requirements for air and marine support and the mix and placement of assets across locations. 
OAM officials stated that they would consider how technology capabilities affect the mix and placement of air and marine assets once such technology has been deployed. To address the findings of our March 2012 report, we recommended that CBP, to the extent that benefits outweigh the costs, reassess the mix and placement of OAM’s air and marine assets to include mission requirements, performance results, and anticipated CBP strategic and technological changes. DHS concurred with this recommendation and responded that it planned to address some of these actions as part of the Fiscal Year 2012-2013 Aircraft Deployment Plan. In September 2014, CBP provided this Plan, approved in May 2012, and updated information on its subsequent efforts to address this recommendation, including a description of actions taken to reassess the mix and placement of OAM’s assets. In particular, CBP noted that in late 2012, it initiated some actions based on its analysis of CBP data and assessment of OAM statistical information, such as the priority for flight hours by location based on Border Patrol and OAM data on arrests; apprehensions; and seizures of cocaine, marijuana, currency, weapons, vehicles, aircraft, and vessels. According to OAM, after consulting with DHS and CBP officials and approval from the DHS Secretary in May 2013, the office began a realignment of personnel, aircraft, and vessels from the northern border to the southern border based on its evaluation of the utilization and efficiency of current assets and available funding to accomplish the transfers. CBP’s actions are a positive step toward more effectively allocating scarce assets. 
As of April 2015, OAM officials said that they were in the process of providing GAO with the data and analysis used to support this realignment of assets in order to fully document implementation of the recommendation. Chairman Johnson, Ranking Member Carper, and members of the committee, this concludes my prepared statement. I will be happy to answer any questions you may have. For further information about this testimony, please contact Rebecca Gambler at (202) 512-8777 or gamblerr@gao.gov. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement included Kirk Kiester (Assistant Director), as well as Carissa Bryant, Adam Gomez, Yvette Gutierrez, Jon Najmi, Meg Ullengren, and Michelle Woods. Other contributors to the work on which this statement is based included Cindy Ayers, Jeanette Espinola, and Nancy Kawahara. Homeland Security Acquisitions: Major Program Assessments Reveal Actions Needed to Improve Accountability. GAO-15-171SP. Washington, D.C.: April 22, 2015. 2015 Annual Report: Additional Opportunities to Reduce Fragmentation, Overlap, and Duplication and Achieve Other Financial Benefits. GAO-15-404SP. Washington, D.C.: April 14, 2015. Border Security: Additional Efforts Needed to Address Persistent Challenges in Achieving Radio Interoperability. GAO-15-201. Washington, D.C.: March 23, 2015. Arizona Border Surveillance Technology Plan: Additional Actions Needed to Strengthen Management and Assess Effectiveness. GAO-14-411T. Washington, D.C.: March 12, 2014. Arizona Border Surveillance Technology Plan: Additional Actions Needed to Strengthen Management and Assess Effectiveness. GAO-14-368. Washington, D.C.: March 3, 2014. Border Security: Progress and Challenges in DHS Implementation and Assessment Efforts. GAO-13-653T. Washington, D.C.: June 27, 2013. Border Security: DHS’s Progress and Challenges in Securing U.S. Borders. GAO-13-414T. Washington, D.C.: March 14, 2013. GAO Schedule Assessment Guide: Best Practices for Project Schedules. GAO-12-120G (exposure draft). Washington, D.C.: May 2012. Border Security: Opportunities Exist to Ensure More Effective Use of DHS’s Air and Marine Assets. GAO-12-518. Washington, D.C.: March 30, 2012. U.S. Customs and Border Protection’s Border Security Fencing, Infrastructure and Technology Fiscal Year 2011 Expenditure Plan. GAO-12-106R. Washington, D.C.: November 17, 2011. Arizona Border Surveillance Technology: More Information on Plans and Costs Is Needed before Proceeding. GAO-12-22. Washington, D.C.: November 4, 2011. Secure Border Initiative: Technology Deployment Delays Persist and the Impact of Border Fencing Has Not Been Assessed. GAO-09-896. Washington, D.C.: September 9, 2009. GAO Cost Estimating and Assessment Guide: Best Practices for Developing and Managing Capital Program Costs. GAO-09-3SP. Washington, D.C.: March 2009. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
DHS has employed a variety of technology, infrastructure, and other assets to help secure the border. For example, in January 2011, CBP developed the Arizona Border Surveillance Technology Plan, which includes seven acquisition programs related to fixed and mobile surveillance systems, agent-portable devices, and ground sensors. CBP has also deployed tactical infrastructure--fencing, roads, and lights--and tactical communications (radio systems) and uses air and marine assets to secure the border. In recent years, GAO has reported on a variety of DHS border security programs and operations. This statement addresses some of the key issues and recommendations GAO has made in the following areas: (1) DHS's efforts to implement the Arizona Border Surveillance Technology Plan and deploy tactical infrastructure, (2) CBP's and ICE's efforts to modernize radio systems, and (3) OAM mix and placement of assets. This statement is based on prior products GAO issued from September 2009 through April 2015, along with selected updates conducted in April and May 2015 to obtain information from DHS on actions it has taken to address prior GAO recommendations. GAO reported in March 2014 that U.S. Customs and Border Protection (CBP), within the Department of Homeland Security (DHS), had made progress in deploying programs under the Arizona Border Surveillance Technology Plan (the Plan), but that CBP could strengthen its management and assessment of the Plan's programs. Specifically, GAO reported that CBP's schedules and life-cycle cost estimates for the Plan and its three highest-cost programs met some but not all best practices and recommended that CBP ensure that its schedules and estimates more fully address best practices, such as validating its cost estimates with independent estimates. CBP concurred and is taking steps toward addressing GAO's recommendations, such as planning to update cost estimates by the end of calendar year 2015. 
Further, in March 2014, GAO reported that while CBP had identified mission benefits of technologies to be deployed under the Plan, such as improved situational awareness, the agency had not developed key attributes for performance metrics for all technologies, and GAO recommended that it do so. In April 2015, GAO reported that CBP had identified a set of potential key attributes for performance metrics for deployed technologies, and CBP officials stated that by the end of fiscal year 2015, baselines for each performance measure would be developed and the agency would begin using the data to evaluate the contributions of specific technology assets. In March 2015, GAO reported that DHS, CBP, and U.S. Immigration and Customs Enforcement (ICE) had taken steps to upgrade tactical communications equipment and infrastructure, such as completing full modernization projects in four of the nine southwest border sectors, but could benefit from developing performance and program plans. Since rolling out upgrades--which include replacing and updating equipment and expanding infrastructure--CBP had not established an ongoing performance monitoring plan to determine whether the systems were working as intended. CBP agreed to develop such a plan, as GAO recommended, and is working to complete the plan by the end of 2015. Further, GAO reported in March 2015 that ICE did not have a program plan to manage its portfolio of modernization projects. DHS concurred with GAO's recommendation to develop a plan and stated that ICE would develop a program to facilitate, coordinate, and maintain ICE's radio systems, and document resource needs, define program goals, and establish performance measures by January 2016. In March 2012, GAO reported that the Office of Air and Marine (OAM) within CBP could benefit from reassessing its mix and placement of assets to better address mission needs and threats.
GAO reported that OAM should clearly document the linkage of deployment decisions to mission needs and threats, along with the analyses and assessments used to support its decisions on the mix and placement of assets. GAO also reported that OAM could consider how border technology deployment would affect customer requirements for OAM assets. GAO recommended that CBP reassess the mix and placement of OAM's assets to include mission requirements, among other things. CBP concurred, and after May 2013, OAM began a realignment of personnel, aircraft, and vessels from the northern border to the southern border based on its evaluation of the utilization and efficiency of current assets and available funding to accomplish the transfers. In April 2015, OAM officials stated that they were working to provide GAO with the data and analysis used to support the realignment of assets. In its prior work, GAO made recommendations to DHS to strengthen its management of plans and programs, tactical communications, and the mix and placement of OAM assets. DHS generally agreed and plans to address the recommendations. Consequently, GAO is not making any new recommendations in this testimony.
The National Defense Stockpile is a reserve of strategic and critical materials that may be unavailable in the United States in sufficient quantities to meet unanticipated national security requirements. The Defense Logistics Agency’s Defense National Stockpile Center (DNSC) has managed the stockpile since 1988. Zinc is one of 92 strategic and critical materials stored in the stockpile. It is commonly used for galvanizing, die-casting, manufacturing brass and bronze, and making the U.S. penny. It is produced in various grades—special high grade, high grade, continuous galvanizing, controlled lead, and prime western—that are distinguishable by the amount of impurities they contain, such as lead, cadmium, and iron. Special high grade is the most pure, prime western the least. As of March 30, 1996, DNSC had nearly 300,000 tons of slab zinc, valued at $300 million, stored at 15 facilities in 9 states. (See app. I.) About 91 percent was either high grade (48 percent) or prime western grade (43 percent). Under the Stock Piling Act, “to the maximum extent feasible . . . efforts shall be made . . . to avoid undue disruption of the usual markets of producers, processors, and consumers of such materials and to protect the United States against avoidable loss.” DNSC has been authorized to sell up to 50,000 tons of zinc in fiscal year 1996 and 50,000 tons in fiscal year 1997. It is conducting monthly sales using sealed bidding procedures. Bids for a minimum of 20 tons are accepted from producers, processors, traders, and consumers on an “as-is, where-is” basis. Between 1993 and March 1996, DNSC sold approximately 77,000 tons of zinc for about $60 million. DNSC’s plans, as provided to the Congress, indicate that, if authorized, it intends to sell up to 50,000 tons annually until the inventory is depleted. Money generated from sales is put into the National Defense Stockpile Transaction Fund and used for stockpile operations or, as authorized and appropriated by the Congress, for other defense purposes.
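The sales totals above support a quick arithmetic check. The sketch below is illustrative only: it assumes short tons of 2,000 pounds (the report does not specify the ton used) and works from the report's rounded totals of roughly 77,000 tons sold for about $60 million.

```python
# Illustrative back-of-the-envelope check of DNSC's reported zinc sales totals.
# Assumptions (not stated in the report): short tons of 2,000 pounds, and the
# rounded totals of ~77,000 tons sold for ~$60 million between 1993 and March 1996.

tons_sold = 77_000            # approximate tons sold, per the report
revenue_dollars = 60_000_000  # approximate total revenue, per the report
pounds_per_ton = 2_000        # assumed short ton

avg_price_per_ton = revenue_dollars / tons_sold
avg_price_per_lb = avg_price_per_ton / pounds_per_ton

print(f"Implied average price: ${avg_price_per_ton:,.0f} per ton "
      f"(about {avg_price_per_lb * 100:.0f} cents per pound)")
# prints: Implied average price: $779 per ton (about 39 cents per pound)
```

The result is only a rough average across grades and sale dates; the prices actually accepted varied, as the report's comparison of DNSC bids with London Metal Exchange and spot market prices makes clear.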
When evaluating the potential for undue market disruption, DNSC and the Market Impact Committee consider the usual market for zinc to be the total U.S. market for all grades of the commodity. AZA contends, however, that the statute requires an evaluation based only on the markets for the grades of zinc the stockpile plans to sell. We find that the statute does not specify the market the government is to examine and that the government’s determination to consider the entire zinc market has a sound basis. The Stock Piling Act authorizes the acquisition, management, and disposal of “strategic and critical materials” and requires efforts by the stockpile managers, to the maximum extent feasible, “to avoid undue disruption of the usual markets of producers, processors, and consumers of such materials.” AZA argues that the phrase “such materials” refers only to the specific grades of zinc being disposed of from the stockpile and that the phrase “usual markets” refers only to producers, processors, and consumers of those specific grades. The government, on the other hand, believes that “material” refers to the commodity of zinc, regardless of grades; therefore, the usual markets to which the statute refers mean the total market for the commodity, not just the markets for the specific grades being sold from the stockpile. Although it is clear from the Stock Piling Act that the phrase such materials refers to the strategic and critical materials disposed of under the act, the statute does not require a market analysis based on specific grades of stockpile commodities. In addition, while the act requires efforts to avoid undue disruption of the usual markets for materials sold from the stockpile, it does not define the phrase usual markets or otherwise specify what markets the government is to examine to determine whether stockpile sales could be unduly disruptive.
Furthermore, while it is clear from the act’s legislative history that the Congress was concerned with the market effects of stockpile sales, there is no indication that the Congress envisioned an evaluation at any particular market level. Generally, without a statutory definition or clear indication of congressional intent, an agency charged with implementing a statute has the discretion to define a phrase such as usual markets. The courts have said that an agency’s determination in such circumstances will not be overturned, provided it has a reasonable basis. We believe the determination by DNSC and the Market Impact Committee concerning the usual markets for zinc has a sound basis. According to DNSC officials, their determinations are based on the practices for each industry and commodity. Some commodities consist of grades that have separate industry uses and generally cannot be substituted for one another, according to DNSC. For example, the mineral fluorspar, another stockpile material being disposed of, is divided into grades having distinct end uses—a metallurgical grade used in the manufacture of certain metals and an acid grade used by the glass industry. In contrast, in some cases, different grades of zinc may be used for the same purpose, such as certain types of galvanizing. Annual legislation authorizing sales from the stockpile reflects these differences between commodities. Disposals of certain commodities, such as zinc and lead, are authorized on a generic basis; authorization for disposing of other commodities, such as fluorspar, is given by separate grades and amounts. DNSC and the Market Impact Committee’s view of the zinc market as an entire market is a long-standing one shared by previous managers of the stockpile. Specifically, the General Services Administration and the Federal Emergency Management Agency, both prior managers of the stockpile, have defined the usual market for zinc as the entire market.
Our discussions with zinc market participants—that is, companies producing or processing zinc, those buying and selling zinc as traders or brokers, those that consume zinc in their manufacturing processes, and individuals who study or report on the zinc markets—support this view of the larger market. Some of these discussions were with AZA members. The consensus was that some zinc consumers adjust their purchases of different grades of zinc according to changing market factors. Some producers adjust their production of different grades according to supply and demand for each grade. According to the participants, the impact of market events, such as an increased supply because of stockpile sales, could affect not only the market of the particular grade sold, but also the overall market because a significant decline in the price of one grade would be expected to depress the prices of other grades. Pricing data we reviewed show that prices of different grades tend to follow similar patterns. Although some zinc consumers may not purchase materials sold from the stockpile, we do not believe that the Stock Piling Act requires the government to limit its review of the usual markets to only those consumers likely to buy zinc from the stockpile. According to DNSC, a company may not buy stockpile zinc for a number of reasons. For example, even if a company could use the grade of zinc being sold, the material may not be available in sufficient quantity or quality, or at low enough prices, to justify changing suppliers. Even though such a company may not buy zinc from the stockpile, that company could be affected by the increase in supply resulting from stockpile sales. The government recognizes that sales from the stockpile can affect some participants in the market more than others. Stockpile sales increase supplies that can drive down prices and cause a particular producer or processor to lose business. The stockpile is in effect an additional zinc producer. One major U.S. 
zinc producer, for example, produces only one grade of zinc, which is one of those DNSC has offered for sale. This producer stated that it had lost sales because of the stockpile sales. However, the Market Impact Committee stated that the loss of business by one producer, in and of itself, does not necessarily unduly disrupt the overall market. Some customers taking advantage of lower prices from a new supplier is a normal commercial activity. One factor that may limit the impact of stockpile sales on U.S. zinc producers is the international character of the zinc market. Zinc is an internationally traded commodity. In 1994, the latest year for which data were available, U.S. zinc consumption (all grades) was about 17 percent of the world’s consumption, and the United States had to rely on imports for about 67 percent of the 1.2 million tons of slab zinc consumed. According to zinc market participants and analysts, although prices and market conditions for zinc can differ by country, international trade tends to spread the effects of changing market conditions across countries. For example, if U.S. prices fell, then suppliers would decrease their sales to the U.S. market and increase their sales to other markets, thus distributing the price effects to those other markets. DNSC has established policies and procedures to avoid unduly disrupting the zinc markets. Specifically, it has publicized its sales and price policy and solicited public comments; sold less zinc than it was authorized to sell; and tried to sell zinc close to market prices. DNSC’s policy for disposing of zinc is to (1) dispose of those quantities of materials as authorized by the Congress; (2) maximize revenues, though not necessarily maximize sales; and (3) be responsive to industry and congressional concerns. In addition, a policy statement was published in the October 17, 1994, Federal Register. DNSC also works closely with the Market Impact Committee. 
The Committee reviews a range of data and analysis compiled by DNSC and other agencies, and it may also review DNSC’s proposed sales methods. It is the Committee’s policy to solicit industry views concerning the proposed disposals. The Committee is particularly interested in any information that would indicate a potential market disruption if DNSC sold any zinc. Based on this evidence, the Committee can recommend reductions in the proposed commodity disposal levels. If DNSC refuses to accept the Committee’s recommendations, it must provide written justification with its submission of the annual materials plan to the Congress. According to the Committee, a steady, well-publicized disposal program helps increase market certainty, whereas irregular sales contribute to market uncertainty. DNSC must submit an annual materials plan to the Congress to show the quantity of materials to be disposed of, the views of the Market Impact Committee on the projected domestic and foreign economic effects of such disposals, the recommendations submitted by the Committee relative to the disposals, and justification for the disposal. Table 1 provides a summary of the amounts requested and approved. The most recent plan, submitted on February 15, 1996, requested authority to dispose of up to 50,000 tons for fiscal year 1997. The plan also included DNSC’s proposal to sell up to 50,000 tons annually until the inventory is depleted. DNSC has sold less zinc than it was authorized to sell over the last several years. Between March 1993, when DNSC began selling zinc, and March 1996, DNSC sold approximately 77,000 tons, although it was authorized to sell 209,000 tons. Figure 1 provides a yearly comparison of the amounts sold and amounts authorized. Industry members and metals analysts told us that the stockpile’s sales prices are as important as quantity when it comes to market disruption.
AZA officials stated that DNSC was selling stockpile zinc at fire-sale prices, well below the London Metal Exchange and other market prices. Even though DNSC’s policy is that all excess materials will be sold as close to market prices as possible, its sales of zinc in 1993 and part of 1994 were at prices below London Metal Exchange prices. Both the Market Impact Committee and AZA urged DNSC to raise its minimum price level, which it did, beginning in late 1994. Since late 1994, the prices DNSC has accepted for zinc have been above the London Metal Exchange’s prices. The London Metal Exchange sets the world price for special high grade zinc daily. Producers add an additional charge, referred to as a premium, to the Exchange price to set their selling prices. A premium can vary by producer, sales contract, and customer, and covers such things as transportation, quality guarantees, and financing terms. As figure 2 shows, through the second quarter of fiscal year 1994, the stockpile made all sales at prices below the London Metal Exchange prices. From the fourth quarter of fiscal year 1994 to the present, all sales prices have been above the London Metal Exchange price. The relation of DNSC’s sales prices to the London Metal Exchange prices is only one measure of how closely DNSC is selling to market prices. Figure 3 compares the DNSC sales prices to both the London Metal Exchange and spot market prices from April 1995 to August 1996. The data show that the prices for high grade and prime western grades sold by DNSC and those for spot sales in the commercial market are roughly 2 to 3 cents apart, a difference that DNSC and the Market Impact Committee believe is reasonable given that the government does not provide transportation, financing, or certification of product quality. DNSC’s terms require buyers to pay for transportation, pay for the product prior to delivery, and accept the product on an “as is” (quality not certified) basis.
Commercial terms typically require the seller or producer to pay for transportation, provide for financing (often 30 to 40 days), and certify the quality of the product. The DNSC data in figure 3 represent the average sales prices for high grade and prime western zinc sold at the regular DNSC sales on the third Tuesday of every month. The spot market prices are the commercial prices, averaged, for high grade and prime western zinc, as reported by the American Metal Market for the date of each DNSC sale. The London Metal Exchange data are the prices set by the London Metal Exchange for special high grade zinc on the same day as the DNSC sales. Although the London Metal Exchange price is based on special high grade, the premium for other grades is typically marked against the special high grade price. DNSC receives bids within a wide range of prices, both above and below the London Metal Exchange. Sometimes, it receives multiple bids from a single bidder at prices above, at, and below the London Metal Exchange. DNSC must decide which ones to accept and which ones to reject. DNSC has rejected more bids than it has accepted in every year it has offered zinc for sale. (See fig. 4.) In fiscal year 1996, for example, it accepted only one of every four bids received. (App. III lists DNSC’s sales activities, including the bids accepted and bids rejected.) DNSC plans to continue to closely monitor prices when accepting bids to ensure that the market is not unduly disrupted. DNSC’s actions, we believe, demonstrate that it is paying attention to the market and is committed to avoiding an undue disruption. It is important that DNSC accept prices for its zinc that are as close to market prices as possible. We asked DOD, the Market Impact Committee, AZA, U.S.-based AZA members, and a number of other companies and organizations with whom we discussed this matter to comment on a draft of this report. DOD and the Market Impact Committee fully concurred with the report. 
Their comments are included as appendix IV. AZA disagreed with the report’s conclusions, stating that we reached those conclusions based on our accepting certain inaccurate government data, avoiding certain AZA facts, and introducing irrelevant material. First, while AZA agreed that the phrase “usual markets” is not defined in the act, it said that we did not properly consider congressional intent in reviewing the government’s interpretation of the phrase. It stated that because the legislative history indicates that the Congress was particularly concerned about the effect stockpile sales might have on the markets, those charged with construing the phrase must choose the construction that results in the minimum amount of market impact. It is our view, however, that the legislative history does not require such an interpretation of the statute. In this regard, the legislative history, including the Senate report cited by AZA (S. Rpt. No. 804, 79th Cong., 1st Sess. 1945), shows that while the Congress was concerned about market impact, the concern was that “sudden disposals” of stockpile materials “might break the market,” not that all market disruption must be avoided. Some additional language was included in the body of the report to clarify our position. Next, AZA stated that certain materials we cited in the report were not relevant as justification for the government’s action to avoid unduly disrupting the usual zinc market. We believe the materials are relevant, but have added a figure and text comparing DNSC sales prices to spot market prices to clarify our position. Finally, AZA stated that we had not reported certain facts it believed were relevant to the dispute between the government and itself about the size of what AZA views as the usual market for high grade and prime western zinc. We have provided additional information for clarification in appendix II.
The complete response of AZA and our specific comments to the points raised are included as appendix V. Of the AZA members commenting on our draft report, one fully agreed with our conclusions and another generally agreed but believed certain statements relating to uses of different grades of zinc and market factors were misleading. We have clarified the discussion on this in the final report to address these concerns. A third member said it was disappointed with our interpretation that the government’s view of the usual market has a sound basis. The members’ comments are included as appendix VI. Four other respondents—an association of zinc consumers, a zinc broker, a zinc trader, and a metals trade publication official—concurred with our findings and conclusions. Their comments are included in appendix VII. The focus of our work was on the dispute between the government and AZA as it related to the government’s interpretation of the statutory phrase “usual markets” as applied to the zinc sales program, and DOD’s efforts to not unduly disrupt the zinc market. To assess the merits of each side’s position on the government’s interpretation and its efforts not to disrupt the zinc market, we met with the Executive Director of AZA and reviewed data AZA provided us. We met with the Administrator, Deputy Administrator, General Counsel, and zinc commodity specialists at DNSC and reviewed the data they provided us. We also met with the cochairs of the Market Impact Committee and each of the Committee members and reviewed the minutes of each meeting where zinc disposals were considered during the last 3 years. And, we met with industry and metals analysts for the Department of Commerce and the Bureau of Mines (now part of the U.S. Geological Survey) to determine how they calculated the size of the zinc markets. We reviewed the applicable statute, its legislative history, and relevant court cases. 
We discussed the statute and its interpretation with DNSC’s counsel and with the executive director of AZA. To complement our discussions with AZA and to obtain views on the government’s interpretation of usual markets and its efforts not to disrupt the markets, we met with each of the various groups represented in the zinc market—that is, companies producing or processing zinc, those buying and selling zinc as traders or brokers, those that consume zinc in their manufacturing processes, and individuals who study or report on the zinc markets. We reviewed various documents these companies and organizations had submitted to DNSC or the Market Impact Committee and contacted them about the government/AZA dispute and/or their particular operations. We also asked each company or organization whose correspondence we reviewed or whom we contacted to comment on a draft of this report. We have included copies of the responses in the appendixes. The companies and organizations we contacted or whose documents we reviewed were the following:

Big River Zinc Corp., Sauget, Illinois
Huron Valley Steel, Belleville, Michigan
Savage Zinc, Inc., Clarksville, Tennessee
Zinc Corporation of America, Monaca, Pennsylvania
Parks-Pioneer Metals Co., Milwaukee, Wisconsin
Trademet, Inc., Scarsdale, New York

zinc consumers or their associations
American Galvanizers Association, Aurora, Colorado
Frontier Hot-Dip Galvanizing, Inc., Buffalo, New York
Galvan Industries, Inc., Harrisburg, North Carolina
Independent Zinc Alloyers Association, Washington, D.C.
Rogers Galvanizing Company, Tulsa, Oklahoma
Tennessee Galvanizing, Jasper, Tennessee
U.S. Zinc, Houston, Texas

metals analysts and others
CRU International Ltd., London, United Kingdom
International Lead/Zinc Study Group, London, United Kingdom
Ryan’s Notes, Pelham, New York

We visited the DNSC storage site at Letterkenny Army Depot, near Chambersburg, Pennsylvania, to examine how DNSC stores zinc and prepares it for sale.
We did not assess DNSC’s sales methods—that is, its selling on the “spot” market, as opposed to selling under long-term contracts—or the impact of congressionally imposed sales price constraints. The fiscal years 1995, 1996, and 1997 DOD appropriations acts have prohibited DNSC from accepting prices from prospective bidders if zinc prices decline more than 5 percent below the London Metal Exchange market price reported on the date the act was enacted. We performed our review from December 1995 to August 1996 in accordance with generally accepted government auditing standards. We are providing copies of this report to the Chairmen and Ranking Minority Members of the Senate Committee on Appropriations, Subcommittee on Defense; Senate Committee on Armed Services; House Committee on Appropriations, Subcommittee on National Security; House Committee on National Security; the Director, Office of Management and Budget; the Secretary of Defense; the Director, Defense Logistics Agency; the Administrator, DNSC; the cochairs of the Market Impact Committee; AZA; and all parties that assisted us in this review. We will also make copies available to other interested parties upon request. Please contact me on (202) 512-8412 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix VIII.

DNSC storage locations for stockpile zinc (app. I) include Scotia, N.Y.; Voorheesville, N.Y.; Somerville, N.J.; Marietta, Pa.; Mechanicsburg, Pa.; Chambersburg, Pa.; Point Pleasant, W.Va.; and Huntsville, Ala.

The American Zinc Association (AZA) and the government have long disputed the size of the usual market for high grade and prime western zinc. According to AZA’s definition of the usual markets for high grade and prime western grade slab zinc, using 1994 data, the usual market is 250,000 tons of actual consumption a year.
Officials of the Department of Commerce—members of the Market Impact Committee—estimate the market of these grades to be about 350,000 tons a year, counting both slab and hot metal. AZA’s estimates are based on high grade and prime western consumption, as reported by its members, and U.S. Bureau of the Census data on imports from all countries not represented in AZA and adjusted to include stockpile sales and changes in stocks. Commerce’s estimates are based on Bureau of Mines survey data, Commerce and Census import data, and discussions with zinc importers—many of whom are AZA members. The government has revised its estimate of this market from over 600,000 tons to 446,000 tons to its current estimate of 350,000 tons. The latest revision was due primarily to revised estimates of large steel mill consumption of high grade and prime western grade and of the amount of high grade and prime western grade tonnage imported. A major factor underlying the remaining 100,000-ton difference between the two estimates is the treatment of internal hot prime western metal produced by one prime western processor and used in its zinc oxide production facility (about 62,000 tons). AZA did not include this amount in its estimate of the production of slab prime western grade zinc, stating that this is hot metal, not slab. The government agreed that this tonnage should not be reported as slab and revised the reporting of it under the heading of “zinc metal.” The government nevertheless maintains that although this prime western zinc is not converted to slab, it should be included in the estimates of the size of the high grade and prime western zinc market because prime western zinc is being consumed. An additional difference (38,000 tons) between AZA and the government is that the government’s estimates of potential domestic consumption of high grade and prime western zinc include tonnage that “hot-dip” galvanizers use but that is currently being supplied by special high grade zinc.
The government believes that high grade or prime western can be used for this purpose and should be used in the market size estimates. AZA, however, stated that “potential” consumption should not be considered in any discussion of usual markets. In summary, the two sides now agree with each other’s numbers, but not on how those numbers are to be used. In any event, the government’s determination of undue disruption of the usual market does not depend on the specific size of the high grade and prime western market alone, but rather on the larger market for all grades of zinc.

Prices accepted as measured against the London Metal Exchange price (range in percent)

The following are GAO’s comments on the American Zinc Association’s letter dated September 6, 1996. 1. The final report (app. II) reflects the numbers used by the Market Impact Committee. 2. The final report (app. II) shows that the government has revised its reporting. 3. Neither we nor the Market Impact Committee has asserted that the stockpile slab could substitute for the hot metal in the particular company’s production of zinc oxide. Zinc oxide producers use slab zinc or zinc recovered from recycled materials as their feed. This particular company, as AZA pointed out, does not use slab as its feed. It uses hot metal that has not been converted into slab. Whether the prime western zinc refined by this company is first converted into slab or is kept as hot metal is not relevant to whether it is part of the high grade/prime western zinc market. 4. The final report (app. II) reflects that while the two sides now agree with each other’s numbers, they do not agree on how those numbers are to be used. In any event, the government’s determination of undue disruption of the usual market does not depend on the specific size of the high grade and prime western market alone, but rather on the larger market for all grades of zinc. Also, we revised the text to clarify the source of the numbers. 5.
It is not our position that all zinc is the same, that all grades have the same uses, or that there is perfect substitution among the grades. Rather, our position is that the different grades of zinc can be considered to be in the same market because most producers can switch from one grade to another, some consumers (galvanizers) can use different grades for the same purpose, and prices of the different grades of zinc move in similar patterns. 6. As AZA points out, bids are rejected for many reasons. Some bids are “low-ball” and are rejected. However, we disagree with AZA’s comment that DNSC rejects bids because there are sometimes more bids than tonnage available for sale. Under DNSC’s current sales arrangements, there is no monthly limit as to the amount that can be sold, except as dictated by the yearly limit set forth in the annual congressional authorization. At the start of the current sales program for zinc, DNSC’s solicitation publicized that the government was soliciting bids for approximately 8 million pounds, or 4,000 tons, a month. In October 1995, the amount per month was raised to 100 million pounds, or 50,000 tons, which was the entire authorization for the year. Despite AZA’s assertion, DNSC said that it had not rejected bids because it had received more bids than the amount available for sale. DNSC indicated that the primary reason bids were rejected was that the price offered was too low and would not have maximized revenue for the government. 7. To clarify our point that DNSC is showing concern for the prices at which it sells zinc, we added figure 3 comparing DNSC’s selling prices with those for spot market transactions in the commercial market. It shows that for the period cited, DNSC’s sales prices were within 2 to 3 cents of the commercial market. Both DNSC and the Market Impact Committee believe that the difference is reasonable considering the different terms of sale for DNSC and commercial transactions.
Comments from producers, consumers, and others on our draft report also support this position. DNSC’s sales require the buyer to pay for transportation from the government depot, pay for the zinc before delivery, and accept the zinc on an “as-is” basis. Commercial transactions are made on a delivered price basis, provide for 30- to 40-day financing, and have the zinc’s quality certified. 8. (See comment 5.) We have not concluded that all zinc is the same, but rather that different grades of zinc can be in the same market. Most producers can switch production from one grade of zinc to another. If a producer who is currently selling prime western or high grade zinc can get a better return on its investment by selling another grade, it may do so (after factoring in customer relationships that the producer may want to maintain). Thus, that producer’s ability to switch production to another grade means that the price decrease required to absorb additional supply, such as stockpile sales, is less than it would be if all sellers of high grade or prime western had no alternative but to continue to supply high grade or prime western zinc. 9. (See comment 8.) As stated, we did not conclude that zinc itself is fungible in all, or even most, uses, at least not given the range of price differences in the market. There are, however, some substitution possibilities for some zinc consumers, and most zinc suppliers. This limits the degree that the price of one grade of zinc will rise or fall without affecting the prices of other grades. 10. We agree that where a statutory term is undefined, the interpretation that best reflects the intent of the Congress should generally be adopted. However, contrary to the AZA statement, nothing in the act’s legislative history requires DNSC to adopt AZA’s view of usual markets. Our final report reflects this position. 11. (See comments 8 and 9.) We did not state that consumers switch from higher to lower grades of zinc. 
However, in commenting on our draft report, one consumer (U.S. Zinc) that uses slab zinc to produce zinc oxide indicated that it could substitute stockpile high grade for imported special high grade for most of its needs. We did say that some consumers can switch from one grade of zinc to another, and this is one reason for including different grades of zinc in the same market. The 38,000 tons of zinc that some hot-dip galvanizers could purchase as high grade or prime western, but that is currently being supplied as special high grade zinc, is an example of potential consumption substitution. The following are GAO’s comments on letters from individual members of AZA. 1. For clarification, we have revised the text of the final report. 2. We did not conclude that zinc itself is fungible in all, or even most, uses, at least not given the range of price differences in the market. There are, however, some substitution possibilities for some zinc consumers and most zinc suppliers. This limits the degree to which the price of one grade of zinc will rise or fall without affecting the prices of other grades.

Brad H. Hathaway, Associate Director
Reginald L. Furr, Assistant Director
J. Kenneth Brubaker, Evaluator-in-Charge
Barbara L. Wooten, Evaluator
Celia J. Thomas, Economist
Carolyn S. Blocker, Communications Analyst

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 6015
Gaithersburg, MD 20884-6015

Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. 
Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO reviewed issues surrounding a dispute between the American Zinc Association (AZA) and the federal government about the Department of Defense's (DOD) sale of excess zinc from the National Defense Stockpile, focusing on: (1) the government's basis for its interpretation of the statutory phrase "usual markets" as applied to the zinc sales program; and (2) DOD's efforts to not unduly disrupt the zinc market. GAO found that: (1) the statute that governs sales from the stockpile does not define the usual markets for stockpile materials; (2) accordingly, executive branch officials have discretion in identifying the relevant market for particular sales; (3) the Defense Logistics Agency's Defense National Stockpile Center (DNSC) and the Market Impact Committee, the intergovernmental group that is statutorily required to advise DNSC on the U.S. and foreign effects of sales from the stockpile, have concluded that for stockpile sales of zinc, the usual market is the total U.S. market for all grades of zinc, not just the grades being sold from the stockpile; (4) AZA considers the usual market to be the U.S. 
market for only the particular grades being sold from the stockpile; (5) GAO believes the government's determination has a sound basis; (6) the determination is based on practices that exist in the zinc industry, and it is consistent with the views of zinc market participants with whom GAO discussed this matter; (7) DNSC has policies and procedures for selling zinc without unduly disrupting the zinc market; (8) specifically, it has: (a) publicized its policy on timing of sales, amounts to be sold, and relation of sales prices to market prices; (b) provided plans to the appropriate congressional committees for approval; (c) sold less zinc than it was authorized to sell; and (d) given increased emphasis to selling at prices close to commercial market prices; (9) the government recognizes that stockpile sales can affect some sellers more than others, despite its attempts to minimize disruption; (10) the sales may, for example, have a greater impact on the sellers of the grades being sold from the stockpile, and a seller of one grade could be more affected than a seller of several grades; (11) the increase in zinc supplies can lower prices and cause particular producers or processors to lose business; (12) however, the Market Impact Committee contends that this is normal commercial activity, not an undue disruption; and (13) DNSC plans to continue to closely monitor prices when accepting bids to ensure that the market is not unduly disrupted.
Established in 1943, Hanford produced plutonium for the world’s first nuclear device. At the time, little attention was given to the resulting by-products—massive amounts of radioactive and chemically hazardous waste—or how these by-products were to be permanently disposed of. About 46 different radioactive elements represent the majority of the radioactivity currently residing in Hanford’s tanks. Once Hanford tank waste is separated by the WTP waste treatment process, the high-level waste stream will contain more than 95 percent of the radioactivity but constitute less than 10 percent of the volume to be treated. The low-activity waste stream will contain less than 5 percent of the radioactivity but constitute over 90 percent of the volume. The tanks also contain large volumes of hazardous chemical waste, including various metal hydroxides, oxides, and carbonates. These hazardous chemicals are dangerous to human health and can cause medical disorders including cancer, and they can remain dangerous for thousands of years. Over the years, the waste contained in these tanks has settled; today it exists in the following four main forms or layers:

Vapor: Gases produced from chemical reactions and radioactive decay occupy tank space above the waste.

Liquid: Fluids (supernatant liquid) may float above a layer of settled solids or under a floating layer of crust; fluids may also seep into pore spaces or cavities of settled solids, crust, or sludge.

Saltcake: Water-soluble compounds, such as sodium salts, can crystallize or solidify out of wastes to form a salt-like or crusty material.

Sludge: Denser water-insoluble or solid components generally settle to the bottom of a tank to form a thick layer with a consistency similar to peanut butter.

DOE’s cleanup, treatment, and disposal of radioactive and hazardous wastes are governed by a number of federal and state laws and implemented under the leadership of DOE’s Assistant Secretary for Environmental Management. 
Key laws include the Comprehensive Environmental Response, Compensation, and Liability Act of 1980, as amended, and the Resource Conservation and Recovery Act of 1976, as amended. In addition, most of the cleanup activities at Hanford are carried out under the Hanford Federal Facility Agreement and Consent Order among DOE, the Washington State Department of Ecology, and EPA. Commonly called the Tri-Party Agreement, this accord was signed in May 1989 and has been amended a number of times since then to, among other things, establish additional enforceable milestones for certain WTP construction and tank waste retrieval activities. The agreement lays out a series of legally enforceable milestones for completing major activities in Hanford’s waste treatment and cleanup process. A variety of local and regional stakeholders, including county and local government agencies, citizen and advisory groups, and Native American tribes, also have long-standing interests in Hanford cleanup issues. These stakeholders make their views known through various public involvement processes, including site-specific advisory boards. DOE’s ORP administers Hanford’s radioactive liquid tank waste stabilization and disposition project including the construction of the WTP. The office has an annual budget of about $1 billion and a staff of 151 federal employees, of which 54 support the WTP project. Other cleanup projects at Hanford are administered by DOE’s Richland Operations Office. DOE has attempted and abandoned several different strategies to treat and dispose of Hanford’s tank wastes. In 1989, DOE’s initial strategy called for treating only part of the waste. Part of this effort involved renovating a World War II-era facility in which it planned to start waste treatment. DOE spent about $23 million on this project but discontinued it because of technical and environmental issues and stakeholder concerns that not all the waste would be treated. 
In 1991, DOE decided to treat waste from all 177 tanks. Under this strategy, DOE would have completed the treatment facility before other aspects of the waste treatment program were fully developed; however, the planned treatment facility would not have had sufficient capacity to treat all the waste in a time frame acceptable to EPA and the Washington State Department of Ecology. DOE spent about $418 million on this strategy. Beginning in 1995, DOE attempted to privatize tank waste cleanup. Under its privatization strategy, DOE planned to set a fixed price and pay the contractor for canisters and containers of stabilized tank waste that complied with contract specifications. If costs grew as a result of contractor performance problems, the contractor, not DOE, was to bear these cost increases. Any cost growth occurring as a result of changes directed by DOE was to result in an adjustment to the contract price and was to be borne by DOE. Under the privatization strategy, DOE’s contractor would build a demonstration facility to treat 10 percent of the waste volume and 25 percent of the radioactivity by 2018 and complete cleanup in 2028. However, because of dramatically escalating costs and concerns about contractor performance, DOE terminated the contract after spending about $300 million, mostly on plant design. Following our criticisms of DOE’s earlier privatization approach, DOE decided that a cost-reimbursement contract with incentive fees would be more appropriate than a fixed-price privatization contract for the Hanford project and would better motivate the contractor to control costs. In total, since 1989 when cleanup of the Hanford site began, DOE has spent over $16 billion to manage the waste and explore possible ways to treat and dispose of it. DOE’s current strategy for dealing with tank waste consists of the construction of a large plant—the WTP—to treat and prepare the waste for permanent disposal. 
Begun in 2000, the WTP project is over half complete, covers 65 acres, and is described by DOE as the world’s largest radioactive waste treatment plant. As designed, the WTP project is to consist of three waste processing facilities, an analytical laboratory, and over 20 smaller supporting facilities to treat the waste and prepare it for permanent disposal. The three waste processing facilities are as follows (see fig. 2):

Pretreatment Facility – This facility is to receive the waste from the tanks and separate it into high-level and low-activity components. This is the largest of the WTP facilities––expected to be 12 stories tall with a foundation the size of four football fields.

High-Level Waste Facility – This facility is to receive the high-level waste from the pretreatment facility and immobilize it by mixing it with a glass-forming material, melting the mixture into glass, and pouring the vitrified waste into stainless-steel canisters to cool and harden. The canisters filled with high-level waste were initially intended to be permanently disposed of at a geological repository that was to be constructed at Yucca Mountain in Nevada. However, in 2010, DOE began taking steps to terminate the Yucca Mountain project and is now considering other final disposal options. In the meantime, high-level waste canisters will be stored at Hanford.

Low-Activity Waste Facility – This facility is to receive the low-activity waste from the pretreatment facility and vitrify it. The containers of vitrified waste will then be permanently disposed of at another facility at Hanford known as the Integrated Disposal Facility.

Constructing the WTP is a massive, highly complex, and technically challenging project. For example, according to Bechtel documents, the completed project will contain almost 270,000 cubic yards of concrete and nearly a million linear feet of piping. 
The project also involves developing first-of-a-kind nuclear waste mixing technologies that will need to operate for decades with perfect reliability because, as currently designed, once WTP begins operating, it will not be possible to access parts of the plant to conduct maintenance and repair of these technologies due to high radiation levels. Since the start of the project, DOE and Bechtel have identified hundreds of technical challenges that vary in their significance and potential negative impact, and significant technical challenges remain. Technical challenges are to be expected on a one-of-a-kind project of this size, and DOE and Bechtel have resolved many of them. However, because such challenges remain, DOE cannot be certain whether the WTP can be completed on schedule and, once completed, whether it will successfully operate as intended. Among others, the significant technical challenges DOE and Bechtel are trying to resolve include the following: Waste mixing—One function of the WTP will be to keep the waste uniformly mixed in tanks so it can be transported through the plant and to prevent the buildup of flammable hydrogen and fissile material that could inadvertently result in a nuclear criticality accident. The WTP project has been developing a technology known as “pulse jet mixers” that uses compressed air to mix the waste. Such devices have previously been used successfully in other materials mixing applications but have never been used for mixing wastes with high solid content like those to be treated at the WTP. In 2004 and again in 2006, we reported that Bechtel’s inability to successfully demonstrate waste mixing technologies was already leading to cost and schedule delays. Our 2004 report recommended that DOE and Bechtel resolve this issue before continuing with construction. 
DOE agreed with our recommendation, slowed construction on the pretreatment and high-level waste facilities, and established a path forward that included larger-scale testing to address the mixing issue. In 2010, following further testing by Bechtel, DOE announced that mixing issues had been resolved and moved forward with construction. However, concerns about the pulse jet mixers’ ability to successfully ensure uniform mixing continued to be raised by the Safety Board, PNNL, and DOE engineering officials on site. As a result, in late 2011, DOE directed Bechtel to demonstrate that the mixers will work properly and meet the safety standards for the facility. According to DOE officials, no timeline for the completion of this testing has been set. Preventing erosion and corrosion of WTP components—Excessive erosion or corrosion of components such as mixing tanks and piping systems in the WTP is possible. Such excessive erosion and corrosion could be caused by potentially corrosive chemicals and large dense particles present in the waste that is to be treated. This excessive erosion and corrosion could result in the components’ failure and lead to disruptions of waste processing. Bechtel officials first raised concerns about erosion and corrosion of WTP components in 2001, and these concerns were echoed in 2006 by an independent expert review of the project. Following further testing, DOE project officials declared the issue closed in 2008. However, DOE and Bechtel engineers recently voiced concerns that erosion and corrosion of components is still a significant risk that has not been sufficiently addressed. Furthermore, in January 2012, the Safety Board reported that concerns about erosion in the facility had still not been addressed, and that further testing is required to resolve remaining uncertainties. 
Bechtel has agreed to do further work to resolve technical challenges surrounding erosion and corrosion of the facility’s internal components; however, DOE and Bechtel have not yet agreed upon an overall plan and schedule to resolve this challenge. Preventing buildup of flammable hydrogen gas—Waste treatment activities in the WTP’s pretreatment and high-level waste facilities can result in the generation of hydrogen gas in the plant’s tanks and piping systems. The buildup of flammable gas in excess of safety limits could cause significant safety and operational problems. DOE and Bechtel have been aware of this challenge since 2002, and Bechtel formed an independent review team consisting of engineers and other experts in April 2010 to track and resolve the challenge. This team identified 35 technical issues that must be addressed before the hydrogen buildup challenge can be resolved. Bechtel has been working to address these issues. However, a 2011 DOE construction project review noted that, while Bechtel continues to make progress resolving these issues, the estimated schedule to resolve this challenge has slipped. According to DOE and Bechtel officials, Bechtel is still conducting analysis and is planning to complete the work to resolve this challenge by 2013. Incomplete understanding of waste—DOE does not have comprehensive data on the specific physical, radiological, and chemical properties of the waste in each underground waste tank at Hanford. In the absence of such data, DOE has established some parameters for the waste that are meant to estimate the range of waste that may go through the WTP in an effort to help the contractor design a facility that will be able to treat whatever waste––or combination of wastes—is ultimately brought into the WTP. In 2006, an independent review team stated that properly understanding the waste would be a key factor in designing an effective facility. 
In 2010, the Consortium for Risk Evaluation with Stakeholder Participation, PNNL, and the Safety Board reviewed the status of DOE’s plans to obtain comprehensive data on the characteristics of the waste, and each concluded that DOE and Bechtel did not have enough information about the waste and would therefore need to increase the range of possible wastes that the WTP is designed to treat in order to account for the uncertainty. Officials at PNNL reported that not having a large enough range is “a vulnerability that could lead to inadequate mixing and line plugging.” The Safety Board reported that obtaining representative samples of the waste is necessary to demonstrate that the WTP can be operated safely, but that DOE and its contractors have not been able to explain how those samples will be obtained. In its 2011 review of the WTP project, a DOE headquarters construction project review report notes that progress has been made on including additional information and uncertainties in the efforts to estimate and model the waste that will be fed to the WTP. However, DOE officials stated that more sampling of the waste is needed. An expert study is under way that will analyze the gap between what is known and what is needed to be known to design an effective facility. This study is expected to be completed in August 2014. The risks posed by these technical challenges are exacerbated because once the facility begins operating, certain areas within the WTP (particularly in the pretreatment and high-level waste facilities) will be permanently closed off to any human intervention in order to protect workers and the public from radioactive contamination. 
To shield plant workers from intense radiation that will occur during WTP operations, some processing tanks will be located in sealed compartments called “black cells.” These black cells are enclosed rooms where inspection, maintenance, repair, or replacement of equipment or components is extremely difficult because high radiation levels prevent access into them. As a result, plant equipment in black cells must last for WTP’s 40-year expected design life without maintenance. According to a recent review conducted by the DOE Inspector General, premature failure of these components could result in radiation exposure to workers, contaminate large portions of the WTP and/or interrupt waste processing for an unknown period. Significant failures of components installed in the WTP once operations begin could render the WTP unusable and unrepairable, wasting the billions of dollars invested in the WTP. In August 2012, DOE announced that it was asking a team of experts to examine the WTP’s capability to detect problems in the black cells and the plant’s ability to repair equipment in the black cells, if necessary. According to DOE officials, the team will, if needed, recommend design changes to improve the operational reliability of the black cells and the WTP. In addition, the Secretary of Energy has been actively engaged in the development of a new approach to managing WTP technical challenges and has assembled subject matter experts to assist in addressing the technical challenges confronting the WTP. The estimated cost to construct the WTP has almost tripled since the project’s inception in 2000, its scheduled completion date has slipped by nearly a decade, and additional significant cost increases and schedule delays are likely to occur because DOE has not fully resolved the technical challenges faced by the project. 
In addition, DOE recently reported that Bechtel’s actions to take advantage of potential cost savings opportunities are frequently delayed and, as a result, rising costs are outpacing opportunities for savings. DOE’s original contract price for constructing the WTP, approved in 2000, stated that the project would cost $4.3 billion and be completed in 2011. In 2006, however, DOE revised the cost baseline to $12.3 billion, nearly triple the initial estimate, with a completion date of 2019. As we reported in 2006, contractor performance problems, weak DOE management, and technical challenges resulted in these cost increases and schedule delays. A 2011 DOE headquarters review report on the WTP projected additional cost increases of $800 million to $900 million over the revised 2006 cost estimate of $12.3 billion and additional delays to the project schedule. Furthermore, in November 2011, the Department of Justice notified the state of Washington that there is a serious risk that DOE may be unable to meet the legally enforceable milestones, required by legal agreement, for completing certain WTP construction and startup activities at Hanford, as well as tank waste retrieval activities. The Department of Justice did not identify the cause of the delay or specify the milestones that could be affected. As of May 2012, according to our analysis, the project’s total estimated cost had increased to $13.4 billion, and additional cost increases and schedule delays are likely, although a new performance baseline has not yet been developed and approved. DOE ORP officials warn that cost increases and schedule delays will occur as a result of funding shortfalls and will prevent the department from successfully resolving technical challenges the WTP project faces. However, from fiscal years 2007 to 2010, the project was appropriated the $690 million that DOE requested in its annual congressional budget request. 
In fiscal years 2011 and 2012, DOE received approximately $740 million each year––a $50 million increase over fiscal year 2010 funding. DOE project management officials and Bechtel representatives told us that $740 million for fiscal year 2012 was not enough to support planned work and, as a result, project work would slow down and project staffing levels would be reduced. However, according to senior DOE officials, including the acting Chief Financial Officer, the primary cause of the increasing costs and delayed completion has been the difficulty in resolving complex technical challenges rather than funding issues. DOE and Bechtel have not yet fully estimated the effect of resolving these technical challenges on the project’s baseline. In February 2012, DOE directed Bechtel to develop a new, proposed cost and schedule baseline for the project and, at the same time, to begin a study of alternatives that includes potential changes to the WTP’s design and operational plans to resolve technical challenges faced by the project. The study is to also identify the cost and schedule impact of these alternatives on the project. For example, according to a DOE official, one alternative Bechtel is studying is to construct an additional facility that would process the tank waste by removing the largest solid particles from the waste before it enters WTP’s pretreatment facility. This advance processing would reduce the risks posed by insufficient mixing of the waste in the pretreatment facility by the pulse jet mixers. A DOE official told us that this alternative could add $2 to $3 billion to the overall cost of the project and further delay its completion by several years. According to DOE officials, other alternatives being studied involve reducing the total amount of waste the WTP treats or operating the WTP at a slower pace for a longer period of time to accomplish its waste processing mission. 
However, these alternatives could extend the total time needed to process Hanford’s waste and add billions of dollars to the total cost to treat all of Hanford’s tank waste. Further delays constructing the WTP could also result in significant cost increases to treat all of Hanford’s waste. For example, DOE has estimated that a 4-year delay in the WTP start-up date could add an additional $6 to $8 billion to the total cost of the Hanford Site tank waste treatment mission. In June 2012, DOE announced that the new cost and schedule baseline Bechtel is developing would not include the pretreatment and high-level waste facilities. According to DOE officials, additional testing and analysis is needed to resolve the facilities’ technical challenges before a comprehensive new cost and schedule baseline can be completed. DOE officials responsible for overseeing the WTP project are uncertain when the new baseline for these facilities will be completed. As a result, our May 2012 cost estimate of $13.4 billion is highly uncertain and could grow substantially if the technical challenges that the project faces are not easily and quickly resolved. DOE and Bechtel have identified some opportunities for cost savings, but these opportunities are not always pursued in a timely fashion. For example, Bechtel has identified an estimated $48 million in savings that could be achieved over the life of the project by accelerating specific areas of the project scope. Specifically, some of these savings could be achieved by acquiring material and equipment in bulk to maintain the pace of construction activities and avoid delays. In addition, another $24 million in savings could be achieved by reducing the amount of steel, pipe, wire, and other materials needed in remaining design work. DOE reported in March 2012, however, that Bechtel’s actions to take advantage of potential cost savings opportunities are frequently delayed and, as a result, rising costs have outpaced opportunities for savings. 
For example, DOE reported that Bechtel continues to perform poorly in meeting planned dates for material delivery due to delayed identification and resolution of internal issues impacting procurement of plant equipment. Specifically, DOE noted that, of 95 needed project equipment deliveries scheduled for July 2011 through October 2011, 42 were delivered on time and that this poor performance trend is expected to continue. DOE is taking steps to improve its management and oversight of Bechtel’s activities, including levying penalties on the contractor for quality and safety problems, but it continues to face challenges to completing the WTP project within budget and on schedule. For example, DOE’s continued use of a fast-track, design-build management approach, where construction on the project has moved forward before design activities are complete, has resulted in costly reworking and schedule delays. In November 2011, DOE’s Office of Enforcement and Oversight started an investigation into Bechtel’s potential noncompliance with DOE’s nuclear safety requirements. Specifically, this DOE office is investigating Bechtel’s processes for designing, procuring, and installing structures, systems, and components and their potential noncompliance with DOE nuclear safety requirements. If the contractor is found to not be complying with DOE requirements, DOE’s Office of Enforcement and Oversight is authorized to take appropriate action, including the issuance of notices of violations and proposed civil penalties against Bechtel. Since 2006, DOE’s Office of Enforcement and Oversight has conducted six investigations into Bechtel’s activities at WTP that resulted in civil penalties against Bechtel. Five of the six investigations involved issues related to the design and safe operation of the WTP. 
For example, in 2008, DOE’s Office of Enforcement and Oversight investigated Bechtel for circumstances associated with procurement and design deficiencies of equipment for the WTP and identified multiple violations of DOE nuclear safety requirements. This investigation resulted in Bechtel receiving a $385,000 fine. In addition, in January 2012, DOE’s Office of Health, Safety, and Security reported that some aspects of the WTP design may not comply with DOE safety requirements. Specifically, under DOE safety regulations, Bechtel must complete a preliminary documented safety analysis—an analysis that demonstrates the extent to which a nuclear facility can be operated safely with respect to workers, the public, and the environment. However, Bechtel’s preliminary documented safety analyses have not always kept pace with the frequently changing designs and specifications for the various WTP facilities and DOE oversight reviews have highlighted significant deficiencies in the project’s safety analyses. In November 2011, according to DOE officials, DOE ordered Bechtel to suspend work on design, procurement, and installation activities for several major WTP systems including parts of the pretreatment facility and high-level waste facility until the contractor demonstrates that these activities are aligned with DOE nuclear safety requirements. This suspension remains in effect. DOE has also taken steps to address concerns about the project’s safety culture. According to DOE’s Integrated Safety Management System Guide, safety culture is an organization’s values and behaviors modeled by its leaders and internalized by its members, which serves to make safe performance of work the overriding priority to protect workers, the public, and the environment. In 2011, the Safety Board issued the results of an investigation into health and safety concerns at WTP. 
The investigation’s principal conclusion was that the prevailing safety culture of the WTP project effectively defeats DOE’s policy to establish and maintain a strong safety culture at its nuclear facilities. The Safety Board found that both DOE and Bechtel project management behaviors reinforce a subculture at WTP that deters the timely reporting, acknowledgement, and ultimate resolution of technical safety concerns. In addition, the Safety Board found that a flawed safety culture embedded in the project at the time had a substantial probability of jeopardizing the WTP mission. As a result of these findings, the Safety Board made a series of recommendations to DOE to address WTP project safety problems. DOE has developed implementation plans to address the Safety Board’s recommendations. In addition, DOE itself has raised significant concerns about WTP safety culture. In 2011, DOE’s Office of Health, Safety, and Security conducted an independent assessment of the nuclear safety culture and management of nuclear safety concerns at the WTP. As a result of this assessment, DOE determined that most DOE and Bechtel staff at the WTP believed that safety is a high priority. However, DOE also determined that a significant number of DOE and Bechtel staff expressed reluctance to raise concerns about the safety or quality of WTP facility designs, either because WTP project management does not create an atmosphere conducive to hearing concerns or for fear of retaliation. Employees’ willingness to raise safety concerns without fear of retaliation is an essential element of a healthy safety culture and of an atmosphere in which problems can be identified. DOE’s assessment also determined that DOE has mechanisms in place to address safety culture concerns. 
For example, according to a DOE Office of Health, Safety, and Security report on the project’s safety culture and safety management issued in January 2012, the project has an employee concerns program and a differing professional opinion program that assist staff in raising safety concerns. In addition, the January 2012 report stated that several DOE reviews of the WTP project have been effective in identifying deficiencies in WTP designs and vulnerabilities that could impact the future operation of waste treatment facilities. DOE has taken some steps to improve its management and oversight of Bechtel’s activities, but some problems remain. For example, DOE’s ongoing use of a fast-track, design-build approach continues to result in cost and schedule problems. As we reported in 2006, DOE’s management of the project has been flawed, as evidenced by DOE’s decision to adopt a fast-track, design-build approach to design and construction activities, and its failure to exercise adequate and effective oversight of contractor activities, both of which contributed to cost increases and schedule delays. According to DOE officials, DOE’s current project management orders will not allow the use of the fast-track, design-build approach for first-of-its-kind complex facilities such as the WTP. However, DOE was able to start the project using the fast-track, design-build approach before this order was in place. In a February 2012 written statement, DOE defended the fast-track, design-build management approach for the WTP project by stating that (1) it allows for a single contract that gives the contractor responsibility for designing, building, and commissioning the facility, thus helping ensure that the design works as expected; (2) it allows the contractor to begin construction on parts of the facility for which design was complete; and (3) doing so would encourage construction to be completed faster. 
According to DOE officials, construction of the WTP is currently more than 55 percent complete, though the design is only about 80 percent complete. Nuclear industry guidelines suggest that design should be at least 90 percent complete before construction of nuclear facilities begins. Furthermore, according to current DOE orders, construction should not begin until engineering and design work on critical technologies is essentially complete, and these technologies have been tested and proven to work. According to DOE’s analysis in 2007, several years after the beginning of WTP construction, several critical technologies designed for the WTP had not yet reached this level of readiness. In addition, current DOE guidance states that the design-build approach can be used most successfully with projects that have well-defined requirements, are not complex, and have limited risks. DOE measures technology readiness using Technology Readiness Levels, which range from 1 to 9, where 9 represents a fully tested and proven technology. DOE guidance indicates that critical technologies should be at Technology Readiness Level 6 or higher before construction begins. However, in 2007, the last time DOE assessed Technology Readiness Levels for the entire project, DOE found that 14 of the 21 critical technologies assessed were at a Technology Readiness Level lower than 6. To keep pace with the construction schedule, Bechtel fabricated 38 vessels containing pulse jet mixers and installed 27 of them into the WTP pretreatment and high-level waste facilities. However, according to DOE officials, Bechtel has been forced to halt construction on the pretreatment facility and parts of the high-level waste facility because it was unable to verify that several vessels would work as designed and meet safety requirements. 
Bechtel is currently analyzing potential alternatives that include, among other things, scrapping 5 to 10 already completed vessels and replacing them with vessels with more easily verifiable designs, according to DOE officials. The cost and schedule impact of these alternatives has not yet been fully estimated. DOE has also experienced continuing problems overseeing its contractor’s activities. For example, DOE’s incentives and management controls are inadequate for ensuring effective project management and oversight of the WTP project and for ensuring that the project is completed within budget and on schedule. As we reported in 2006, DOE did not ensure adherence to normal project reporting requirements, and as a result, status reports provided an overly optimistic assessment of progress on the project. We also questioned the adequacy of project incentives for ensuring effective project management. Specifically, because of cost increases and schedule delays, we noted that the incentive fees in the original contract—including more than $300 million in potential fees for meeting cost and schedule goals or construction milestones—were no longer meaningful. Since that time, some problems have continued. For example, Bechtel’s current contract, which was modified in 2009, allows the contractor to receive substantial incentives, such as an award fee for achieving specified project objectives, and DOE has paid this fee even though events subsequently revealed that the project was likely to exceed future cost and schedule estimates. Since 2009, DOE has paid Bechtel approximately $24.2 million, or 63 percent, of its $38.6 million incentive fee based, in part, on Bechtel’s adherence to cost and schedule targets and its resolution of technical challenges associated with waste mixing. 
However, the WTP project is now at serious risk of missing major future cost and schedule targets, and DOE subsequently determined that the waste mixing technical challenges had not been resolved after all. According to DOE officials, substantial further effort is needed: at least an additional 3 years of testing and analysis will be required before project scientists and engineers can fully resolve this challenge. Moreover, according to DOE officials, the current contract contains no mechanism for recovering an incentive fee paid to a contractor for work that is subsequently determined to be insufficient. Furthermore, under its project management order, DOE is to incorporate and manage an appropriate level of risk—including critical technical, performance, schedule, and cost risks—to ensure the best value for the government. However, DOE has no assurance that the incentives included in the WTP construction contract are assisting in the effective management of these risks. The contract provides that “incentives are structured to ensure a strong financial motivation for the Contractor to achieve the Contract requirements.” However, the contract requirements have been, and continue to be, revised to provide for a longer schedule and higher cost. For example, DOE has already announced that the project will not be completed within the 2006 performance baseline and has directed the contractor to prepare a revised performance baseline. Further, since 2009, DOE has awarded $15.6 million in incentive fees to Bechtel for meeting periodic schedule and cost goals, even though the WTP’s schedule has slipped and construction costs have continued to increase. Bechtel has estimated, as of May 2012, that costs to complete the project are currently more than $280 million over the amount specified in the construction contract. DOE’s Inspector General has also found that DOE may have awarded Bechtel fees without the contractor adequately completing the associated work. 
A 2012 DOE Office of Inspector General report notes that DOE may have overpaid $15 million of a potential $30 million in incentive fees for the delivery and installation of vessels into the WTP facility. When DOE learned that one of the vessels did not have quality assurance records and therefore did not conform to contract requirements, it instructed Bechtel to return $15 million of the performance fee. However, according to the DOE Office of Inspector General report, neither DOE nor Bechtel could provide evidence that the fee was returned to DOE. DOE’s oversight of Bechtel’s activities may also be hampered because project reviews, such as external independent reviews or independent project reviews—which are a key oversight mechanism—are required by DOE’s project management order to occur only at major decision points in a project. These reviews examine a project’s estimated cost, scope, and schedule and are intended to provide reasonable assurance that the project can be successfully executed on time and within budget. For example, these independent reviews are to occur when a cost and schedule baseline is completed for the project or when construction is authorized to begin. A 2006 review conducted by the U.S. Army Corps of Engineers, for example, identified serious problems with Bechtel’s progress on the WTP project and indicated that the project would significantly exceed both cost and schedule targets. In 2009, the Office of Project Management also conducted an external independent review. Such reviews are an important mechanism for overseeing DOE contractor activities. In a large, complex, multiyear project such as WTP, however, many years can pass between these critical decision points and the associated independent reviews. DOE officials noted that other reviews, such as Construction Project Reviews, were also completed between 2009 and 2011 for the WTP project. 
While officials stated that these reviews did examine the project’s cost and schedule, they noted that the reviews were not as extensive as the 2006 and 2009 reviews. DOE is responsible for one of the world’s largest environmental cleanup projects, in which it must stabilize large quantities of hazardous and radioactive waste and prepare it for disposal at a permanent national geologic repository that has yet to be identified. By just about any definition, DOE’s WTP project at Hanford has not been a well-planned, well-managed, or well-executed major capital construction project. Daunting technical challenges that will take significant effort and years to resolve, combined with a near tripling of project costs and a decade of schedule delays, raise troubling questions as to whether this project can be constructed and operated successfully. Additional cost increases amounting to billions of dollars and schedule delays of years are almost certain to occur. DOE and Bechtel officials have stated that the most recent cost increases and schedule delays are the result of, among other things, Congress not providing the required funding to resolve technical issues. In our view, however, the more credible explanation continues to be DOE’s decision to build what the department itself describes as the world’s largest and most complex nuclear waste treatment plant using a fast-track, design-build strategy that is more appropriate for much simpler, smaller-scale construction projects. Where nuclear industry guidelines suggest completing 90 percent of design prior to beginning construction, DOE instead began construction when design of the facility was in the early stages and insisted on developing new technologies and completing design efforts while construction was ongoing. The result has been significant design rework, and some already procured and installed equipment may have to be removed, refabricated, and reinstalled. 
The technical challenges are especially acute in the WTP’s pretreatment and high-level waste facilities. Technologies for these facilities require perfect reliability over the plant’s 40-year lifetime because no maintenance or repair will be possible once waste treatment begins. According to DOE’s analysis, several critical technologies designed for the WTP have not been tested and verified as effective. Additional expensive rework in the pretreatment and high-level waste facilities, particularly in the area of waste mixing, is likely to occur. Further, an additional facility to treat tank waste before the waste arrives at the WTP’s pretreatment facility may be required. This additional facility could add billions to the cost of treating Hanford’s waste. All the while, DOE and outside experts continue to raise safety concerns, and Bechtel continues to earn incentive fees for meeting specific project objectives even as the project’s costs and timelines balloon far beyond the initially planned goals. DOE’s recent actions to identify cost savings opportunities, to hold Bechtel accountable for the significant deficiencies in its preliminary documented safety analyses, and to require the contractor to comply with DOE’s nuclear safety regulations are steps in the right direction. However, we continue to have serious concerns not only about the ultimate cost and final completion date for this complex project, but also about whether this project can successfully accomplish its waste treatment mission given that several critical technologies have not been tested and verified. 
To improve DOE’s management and oversight of the WTP project, we recommend that the Secretary of Energy take the following three actions: ● Do not resume construction on the WTP’s pretreatment and high-level waste facilities until critical technologies are tested and verified as effective, the facilities’ design has been completed to the level established by nuclear industry guidelines, and Bechtel’s preliminary documented safety analyses comply with DOE nuclear safety regulations. ● Ensure the department’s contractor performance evaluation process does not prematurely reward contractors for resolving technical issues later found to be unresolved. For example, DOE could seek to modify its contracts to withhold payment of incentive fees until the technical challenges are independently verified as resolved. ● Take appropriate steps to determine whether any incentive payments made to the contractor for meeting project milestones were made erroneously and, if so, take appropriate actions to recover those payments. We provided DOE with a draft of this report for its review and comment. DOE generally agreed with the report and its recommendations. In its written comments, DOE described actions under way to address the first recommendation, as well as additional steps it plans to take to address each of the report’s recommendations. DOE stated that it has recently taken action that is, in part, aligned with the first recommendation. Specifically, it issued guidance to the contractor, which directed the contractor to address remaining WTP technical and management issues sufficient to produce a high confidence design and baseline for the pretreatment and high-level waste facilities of the WTP. 
DOE also established a limited construction activity list for the high-level waste facility, as well as a much more limited set of construction activities in the pretreatment facility, which DOE stated will allow it to complete construction of some portions of the facilities while taking into account the unresolved technical issues. DOE stated that it believes this approach balances the intent of the recommendation and the need to continue moving forward with the project and preparations to remove waste from Hanford waste storage tanks. While this approach appears reasonable, we would caution that DOE should sufficiently monitor the construction activities to ensure that additional construction beyond the activities specifically named on the approved list is not undertaken until the technical and management issues are satisfactorily resolved. DOE also noted that the Secretary of Energy has been actively engaged in the development of a new approach to managing the WTP and, together with a group of independent subject matter experts, is working to resolve long-standing technical issues. As requested by DOE, we incorporated information into the report to indicate the Secretary’s personal involvement in addressing the WTP issues and the technical teams assembled to help resolve these persistent technical issues. In addition, DOE stated that the department and the contractor have implemented a plan to assure that the WTP documented safety analysis will meet the department’s nuclear safety requirements, and DOE established a Safety Basis Review Team that will provide a mechanism for reviewing the documented safety analyses for each facility to ensure each meets nuclear safety requirements. DOE’s planned actions to address the recommendations in this report are discussed more fully in DOE’s letter, which is reproduced in appendix I. DOE also provided technical clarifications, which we incorporated into the report as appropriate. 
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Energy; the appropriate congressional committees; the Director, Office of Management and Budget; and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or trimbled@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. In addition to the individual named above, Ryan T. Coles and Janet Frisch, Assistant Directors; Gene Aloise; Scott Fletcher; Mark Gaffigan; Richard Johnson; Jeff Larson; Mehrzad Nadji; Alison O’Neill; Kathy Pedalino; Tim Persons; Peter Ruedel; and Ron Schwenn made key contributions to this report.
In December 2000, DOE awarded Bechtel a contract to design and construct the WTP project at DOE's Hanford Site in Washington State. This project--one of the largest nuclear waste cleanup facilities in the world-- was originally scheduled for completion in 2011 at an estimated cost of $4.3 billion. Technical challenges and other issues, however, have contributed to cost increases and schedule delays. GAO was asked to examine (1) remaining technical challenges, if any, the WTP faces; (2) the cost and schedule estimates for the WTP; and (3) steps DOE is taking, if any, to improve the management and oversight of the WTP project. GAO reviewed DOE and contractor data and documents, external review reports, and spoke with officials from DOE and the Defense Nuclear Facilities Safety Board and with contractors at the WTP site and test facilities. The Department of Energy (DOE) faces significant technical challenges in successfully constructing and operating the Waste Treatment and Immobilization Plant (WTP) project that is to treat millions of gallons of highly radioactive liquid waste resulting from the production of nuclear weapons. DOE and Bechtel National, Inc. identified hundreds of technical challenges that vary in significance and potential negative impact and have resolved many of them. Remaining challenges include (1) developing a viable technology to keep the waste mixed uniformly in WTP mix tanks, both to avoid explosions and to ensure that the waste can be properly prepared for further processing; (2) ensuring that the erosion and corrosion of components, such as tanks and piping systems, are effectively mitigated; (3) preventing the buildup of flammable hydrogen gas in tanks, vessels, and piping systems; and (4) understanding better the waste that will be processed at the WTP. Until these and other technical challenges are resolved, DOE will continue to be uncertain whether the WTP can be completed on schedule and whether it will operate safely and effectively. 
Since its inception in 2000, DOE's estimated cost to construct the WTP has tripled and the scheduled completion date has slipped by nearly a decade to 2019. GAO's analysis shows that, as of May 2012, the project's total estimated cost had increased to $13.4 billion, and significant additional cost increases and schedule delays are likely to occur because DOE has not fully resolved the technical challenges faced by the project. DOE has directed Bechtel to develop a new cost and schedule baseline for the project and to begin a study of alternatives that include potential changes to the WTP's design and operational plans. These alternatives could add billions of dollars to the cost of treating the waste and prolong the overall waste treatment mission. DOE is taking steps to improve its management and oversight of Bechtel's activities but continues to face challenges to completing the WTP project within budget and on schedule. DOE's Office of Health, Safety, and Security has conducted investigations of Bechtel's activities that have resulted in penalties for design deficiencies and for multiple violations of DOE safety requirements. In January 2012, the office reported that some aspects of the WTP design may not comply with DOE safety standards. As a result, DOE ordered Bechtel to suspend work on several major WTP systems, including the pretreatment facility and parts of the high-level waste facility, until Bechtel can demonstrate that activities align with DOE nuclear safety requirements. While DOE has taken actions to improve performance, the ongoing use of an accelerated approach to design and construction--an approach best suited for well-defined and less-complex projects--continues to result in cost and schedule problems, allowing construction and fabrication of components that may not work and may not meet nuclear safety standards. 
While guidelines used in the civilian nuclear industry call for designs to be at least 90 percent complete before construction of nuclear facilities begins, DOE estimates that construction of the WTP is more than 55 percent complete though the design is only 80 percent complete. In addition, DOE has experienced continuing problems overseeing its contractor's activities. For example, DOE's incentives and management controls are inadequate for ensuring effective project management, and GAO found instances where DOE prematurely rewarded the contractor for resolving technical issues and completing work. GAO recommends that DOE (1) not resume construction on WTP’s pretreatment and high-level waste facilities until, among other things, the facilities’ design has been completed to the level established by nuclear industry guidelines; (2) ensure the department’s contractor performance evaluation process does not prematurely reward contractors for resolving technical issues later found to be unresolved; and (3) take appropriate steps to determine whether any incentive payments were made erroneously and, if so, take actions to recover them. DOE generally agreed with the report and its recommendations.
Before advanced computerized techniques, obtaining people’s personal information usually required visiting courthouses or other government facilities to inspect paper-based public records, and information contained in product registrations and other business records was not generally available at all. Automation of the collection and aggregation of multiple-source data, combined with the ease and speed of its retrieval, has dramatically reduced the time and effort needed to obtain such information. Information resellers provide services based on these technological advances. We use the term “information resellers” to refer to businesses that vary in many ways but have in common the fact that they collect and aggregate personal information from multiple sources and make it available to their customers. These businesses do not all focus exclusively on aggregating and reselling personal information. For example, Dun & Bradstreet primarily provides information on commercial enterprises for the purpose of contributing to decision making regarding those enterprises. In doing so, it may supply personal information about individuals associated with those commercial enterprises. To a certain extent, the activities of information resellers may also overlap with the functions of consumer reporting agencies, also known as credit bureaus—entities that collect and sell information about individuals’ creditworthiness, among other things. To the extent that information resellers perform the functions of consumer reporting agencies, they are subject to legislation specifically addressing that industry, particularly the Fair Credit Reporting Act. Information resellers have now amassed extensive amounts of personal information about large numbers of Americans. They supply it to customers in both government and the private sector, typically via a centralized online resource. 
Generally, three types of information are collected: ● Public records such as birth and death records, property records, motor vehicle and voter registrations, criminal records, and civil case files. ● Publicly available information not found in public records but nevertheless publicly available through other sources, such as telephone directories, business directories, classified ads or magazines, Internet sites, and other sources accessible by the general public. ● Nonpublic information derived from proprietary or nonpublic sources, such as credit header data, product warranty registrations, and other application information provided to private businesses directly by consumers. Figure 1 illustrates how these types of information are collected and aggregated into reports that are ultimately accessed by customers, including government agencies, through contractual agreements. No single federal law governs all use or disclosure of personal information. The major requirements for the protection of personal privacy by federal agencies come from the Privacy Act of 1974 and the privacy provisions of the E-Government Act of 2002. Federal use of personal information is governed primarily by the Privacy Act of 1974, which places limitations on agencies’ collection, disclosure, and use of personal information maintained in systems of records. The act describes a “record” as any item, collection, or grouping of information about an individual that is maintained by an agency and contains his or her name or another personal identifier. It also defines “system of records” as a group of records under the control of any agency from which information is retrieved by the name of the individual or by an individual identifier. 
The Privacy Act requires that when agencies establish or make changes to a system of records, they must notify the public by placing a notice in the Federal Register identifying, among other things, the type of data collected, the types of individuals about whom information is collected, the intended uses of data, and procedures that individuals can use to review and correct personal information. Additional provisions of the Privacy Act are discussed in the report we are issuing today. The E-Government Act of 2002 requires that agencies conduct privacy impact assessments (PIA). A PIA is an analysis of how personal information is collected, stored, shared, and managed in a federal system. Under the E-Government Act and related OMB guidance, agencies must conduct PIAs (1) before developing or procuring information technology that collects, maintains, or disseminates information that is in a personally identifiable form; (2) before initiating any new data collections involving personal information that will be collected, maintained, or disseminated using information technology if the same questions are asked of 10 or more people; or (3) when a system change creates new privacy risks, for example, by changing the way in which personal information is being used. OMB is tasked with providing guidance to agencies on how to implement the provisions of the Privacy Act and the E-Government Act and has done so, beginning with guidance on the Privacy Act, issued in 1975. OMB’s guidance on implementing the privacy provisions of the E-Government Act of 2002 identifies circumstances under which agencies must conduct PIAs and explains how to conduct them. The Privacy Act of 1974 is largely based on a set of internationally recognized principles for protecting the privacy and security of personal information known as the Fair Information Practices. A U.S. 
government advisory committee first proposed the practices in 1973 to address what it termed a poor level of protection afforded to privacy under contemporary law. The Organization for Economic Cooperation and Development (OECD) developed a revised version of the Fair Information Practices in 1980 that has, with some variation, formed the basis of privacy laws and related policies in many countries and jurisdictions, including the United States, Germany, Sweden, Australia, New Zealand, and the European Union. The eight principles of the OECD Fair Information Practices are shown in table 1. The Fair Information Practices are not precise legal requirements. Rather, they provide a framework of principles for balancing the need for privacy with other public policy interests, such as national security, law enforcement, and administrative efficiency. Ways to strike that balance vary among countries and according to the type of information under consideration. The Departments of Justice, Homeland Security, and State, together with the Social Security Administration, reported approximately $30 million in contractual arrangements with information resellers in fiscal year 2005. The agencies reported using personal information obtained from resellers for a variety of purposes including law enforcement, counterterrorism, fraud detection/prevention, and debt collection. In all, approximately 91 percent of agency uses of reseller data were in the categories of law enforcement (69 percent) or counterterrorism (22 percent). Figure 2 details contract values categorized by their reported use. The Department of Justice, which accounted for about 63 percent of the funding, mostly used the data for law enforcement and counterterrorism. DHS also used reseller information primarily for law enforcement and counterterrorism. State and SSA reported acquiring personal information from information resellers for fraud prevention and detection, identity verification, and benefit eligibility determination. 
In fiscal year 2005, the Department of Justice and its components reported approximately $19 million in acquisitions from a wide variety of information resellers, primarily for purposes related to law enforcement (75 percent) and counterterrorism (18 percent). The Federal Bureau of Investigation (FBI), which is Justice’s largest user of information resellers, uses reseller information to, among other things, analyze intelligence and detect terrorist activities in support of ongoing investigations by law enforcement agencies and the intelligence community. In this capacity, resellers provide the FBI’s Foreign Terrorist Tracking Task Force with names, addresses, telephone numbers, and other biographical and demographical information as well as legal briefs, vehicle and boat registrations, and business ownership records. The Drug Enforcement Administration (DEA), the second largest Justice user of information resellers in fiscal year 2005, obtains reseller data primarily to detect fraud in prescription drug transactions. Agents use reseller data to detect irregular prescription patterns for specific drugs and trace this information to the pharmacy and prescribing doctor. DHS and its components reported that they used information reseller data in fiscal year 2005 primarily for law enforcement purposes, such as developing leads on subjects in criminal investigations and detecting fraud in immigration benefit applications (part of enforcing the immigration laws). DHS’s largest investigative component, the U.S. Immigration and Customs Enforcement, is also its largest user of personal information from resellers. It collects data such as address and vehicle information for criminal investigations and background security checks. U.S. Customs and Border Protection conducts queries on people, businesses, property, and corresponding links via a secure Internet connection. 
The Federal Emergency Management Agency uses an information reseller to detect fraud in disaster assistance applications. DHS also reported using information resellers in its counterterrorism efforts. For example, the Transportation Security Administration (TSA) used data obtained from information resellers as part of a test associated with the development of its domestic passenger prescreening program, called “Secure Flight.” TSA plans for Secure Flight to compare domestic flight reservation information submitted to TSA by aircraft operators with federal watch lists of individuals known or suspected of activities related to terrorism. In an effort to ensure the accuracy of Social Security benefit payments, the Social Security Administration and its components reported approximately $1.3 million in contracts with information resellers in fiscal year 2005 for purposes relating to fraud prevention (such as skiptracing), confirming suspected fraud related to workers compensation payments, obtaining information on criminal suspects for follow-up investigations, and collecting debts. For example, the Office of the Inspector General (OIG), the largest user of information reseller data at SSA, uses several information resellers to assist investigative agents in detecting benefit abuse by Social Security claimants and to assist agents in locating claimants. Regional office agents may also use reseller data in investigating persons suspected of claiming disability fraudulently. The Department of State and its components reported approximately $569,000 in contracts with information resellers for fiscal year 2005, mainly to support investigations of passport-related activities. For example, several components accessed personal information to validate familial relationships, birth and identity data, and other information submitted on immigrant and nonimmigrant visa petitions. State also uses reseller data to investigate passport and visa fraud cases. 
Although the information resellers that do business with the federal agencies we reviewed have taken steps to protect privacy, these measures were not fully consistent with the Fair Information Practices. Most significantly, the first four principles, relating to collection limitation, data quality, purpose specification, and use limitation, are largely at odds with the nature of the information reseller business. These principles center on limiting the collection and use of personal information and require data accuracy based on that limited purpose and limited use of the information. However, the information reseller industry presupposes that the collection and use of personal information is not limited to specific purposes, but instead can be made available to multiple customers for multiple purposes. Resellers make it their business to collect large amounts of personal information and to combine that information in new ways so that it serves purposes other than those for which it was originally collected. Further, they are limited in their ability to ensure the accuracy, currency, or relevance of their holdings, because these qualities may vary based on customers’ varying uses. Information reseller policies and procedures were consistent with aspects of the remaining four Fair Information Practices. Large resellers reported implementing a variety of security safeguards, such as stringent customer credentialing, to improve protection of personal information. Resellers also generally provided public notice of key aspects of their privacy policies and practices (relevant to the openness principle), and reported taking actions to ensure internal compliance with their own privacy policies (relevant to the accountability principle). 
However, while information resellers generally allow individuals limited access to their personal information, they generally limit the opportunity to correct or delete inaccurate information contained in reseller databases (relevant to the individual participation principle). In brief, reseller practices compare with the Fair Information Practices as follows: Collection limitation. Resellers do not limit collections to specific purposes but collect large amounts of personal information. In practice, resellers are limited in the personal information that they can obtain by laws that apply to specific kinds of information (for example, the Fair Credit Reporting Act and the Gramm-Leach-Bliley Act, which restrict the collection, use, and disclosure of certain consumer and financial data). However, beyond specific legal restrictions, information resellers generally attempt to aggregate large amounts of personal information so as to provide useful information to a broad range of customers. Resellers do not make provisions to notify the individuals involved when they obtain personal data from their many sources, including public records. Concomitantly, individuals are not afforded an opportunity to express or withhold their consent when the information is collected. Resellers said they believe it is not appropriate or practical for them to provide notice or obtain consent from individuals because they do not collect information directly from them. Under certain conditions, some information resellers offer consumers an “opt-out” option—that is, individuals may request that information about themselves be suppressed from selected databases. However, resellers generally offer this option only with respect to certain types of information, such as marketing products, and only under limited circumstances, such as if the individual is a law enforcement officer or a victim of identity theft. 
Two resellers stated their belief that under certain circumstances it may not be appropriate to provide consumers with opportunities for opting out, such as when information products are designed to detect fraud or locate criminals. These resellers stated that if individuals were permitted to opt out of fraud prevention databases, some of those opting out could be criminals, which would undermine the effectiveness and utility of these databases. Data quality. Information resellers reported taking steps to ensure that they generally receive accurate data from their sources and that they do not introduce errors in the process of transcribing and aggregating information. However, they generally provide their customers with exactly the same data they obtain and do not claim or guarantee that the information is accurate for a specific purpose. Some resellers’ privacy policies state that they expect their data to contain some errors. Further, resellers varied in their policies regarding correction of data determined to be inaccurate as obtained by them. One reseller stated that it would delete information in its databases that was found to be inaccurate. Another stated that even if an individual presents persuasive evidence that certain information is in error, the reseller generally does not make changes if the information comes directly from an official public source (unless instructed to do so by that source). Because they are not the original source of the personal information, information resellers generally direct individuals to the original sources to correct any errors. Several resellers stated that they would correct any identified errors introduced through their own processing and aggregation of data. Purpose specification. While information resellers specify purpose in a general way by describing the types of businesses that use their data, they generally do not designate specific intended uses for each of their data collections. 
Resellers generally obtain information that has already been collected for a specific purpose and make that information available to their customers, who in turn have a broader variety of purposes for using it. For example, personal information originally submitted by a customer to register a product warranty could be obtained by a reseller and subsequently made available to another business or government agency, which might use it for an unrelated purpose, such as identity verification, background checking, or marketing. It is difficult for resellers to provide greater specificity because they make their data available to many customers for a wide range of legitimate purposes. As a result, the public is made aware only of the broad range of potential uses to which their personal information may be put, rather than a specific use, as envisioned in the Fair Information Practices. Use limitation. Because information reseller purposes are specified very broadly, it is difficult for resellers to ensure that use of the information in their databases is limited. As previously discussed, information reseller data may have many different uses, depending on the types of customers involved. However, resellers do take steps to ensure that their customers’ use of personal information is limited to legally sanctioned purposes. Information resellers pass this responsibility to their customers through licensing agreements and contract terms and agreements. Customers are usually required to certify that they will only use information obtained from the reseller in ways permissible under laws such as the Gramm-Leach- Bliley Act and the Driver’s Privacy Protection Act. The information resellers used by the federal agencies we reviewed generally also reported taking steps to ensure that access to certain sensitive types of personally identifiable information—particularly Social Security numbers—is limited to certain customers and uses. Security safeguards. 
While we did not evaluate the effectiveness of resellers’ information security programs, resellers we spoke with said they employ various safeguards to protect consumers’ personal information. They implemented these safeguards in part for business reasons but also because federal laws require such protections. Resellers describe these safeguards in various policy statements, such as online and data privacy policies or privacy statements posted on Internet sites. Given recent incidents, large information resellers also reported having recently taken steps to improve their safeguards against unauthorized access. Two resellers reported that they had taken steps to improve their procedures for authorizing customers to have access to sensitive information, such as Social Security numbers. For example, one reseller established a credentialing task force with the goal of centralizing its customer credentialing process. In addition to enhancing safeguards on customer access authorizations, resellers have instituted a variety of other security controls. For example, three large information resellers have implemented physical safeguards at their data centers, such as continuous monitoring of employees entering and exiting facilities, monitoring of activity on customer accounts, and strong authentication of users entering and exiting secure areas within the data centers. Openness. To address openness, information resellers took steps to inform the public about key aspects of their privacy policies. They used means such as company Web sites and brochures to inform the public of specific policies and practices regarding the collection and use of personal information. Reseller Web sites also generally provided information about the types of information products the resellers offered—including product samples—as well as general descriptions about the types of customers served. Individual participation. 
Although information resellers allow individuals access to their personal information, this access is generally limited. Resellers may provide an individual a report containing certain types of information—such as compilations of public records information—however, the report may not include all information maintained by the resellers about that individual. Further, because they obtain their information from other sources, most resellers have limited provisions for correcting or deleting inaccurate information contained in their databases. If individuals find inaccuracies in such reports, they generally cannot have these corrected by the resellers. Resellers, as a matter of policy, do not make corrections to data obtained from other sources, even if the individual provides evidence that the data are wrong. Instead, they direct individuals wishing to make corrections to contact the original sources of the data. Several resellers stated that they would correct any identified errors resulting from their own processing and aggregation of data (for example, transposing numbers or letters or incorrectly aggregating information). Accountability. Although information resellers’ overall application of the Fair Information Practices varied, each reseller we spoke with reported actions to ensure compliance with its own privacy policies. For example, resellers reported designating chief privacy officers to monitor compliance with internal privacy policies and applicable laws. Information resellers reported that these officials had a range of responsibilities aimed at ensuring accountability for privacy policies, such as establishing consumer access and customer credentialing procedures, monitoring compliance with federal and state laws, and evaluating new sources of data (for example, cell phone records). 
Although there are no industrywide standards requiring resellers to conduct periodic audits of their compliance with privacy policies, one information reseller reported using a third party to conduct privacy audits on an annual basis. Using a third party to audit compliance with privacy policies further helps to ensure that an information reseller is accountable for the implementation of its privacy practices. In commenting on excerpts of our draft report, several resellers raised concerns regarding the version of the Fair Information Practices we used to assess their practices, stating their view that it applied more appropriately to organizations that collect information directly from consumers and that they were not legally bound to adhere to the Fair Information Practices. As discussed in our report, the version of the Fair Information Practices we used has been widely adopted and cited within the federal government as well as internationally. Further, we use it as an analytical framework for identifying potential privacy issues for further consideration by Congress—not as criteria for strict compliance. Resellers also stated that the draft did not take into account their view that public record information is open to all for any use not prohibited by state or federal law. However, we believe it is not clear that individuals give up all privacy rights to personal information contained in public records, and we believe it is important to assess the status of privacy protections for all personal information being offered commercially to the government so that informed policy decisions can be made about the appropriate balance between resellers’ services and the public’s right to privacy. In our report we suggest that Congress consider the extent to which information resellers should adhere to the Fair Information Practices. 
Agencies generally lacked policies that specifically address their use of personal information from commercial sources (although DHS Privacy Office officials have reported that they are drafting such a policy), and agency practices for handling personal information acquired from information resellers did not always fully reflect the Fair Information Practices. Specifically, agency practices generally reflected four of the eight Fair Information Practices. As table 2 shows, the collection limitation, data quality, use limitation, and security safeguards principles were generally reflected in agency practices. For example, several agency components (specifically, law enforcement agencies such as the FBI and the U.S. Secret Service) reported that in practice, they generally corroborate information obtained from resellers when it is used as part of an investigation. This practice is consistent with the principle of data quality. Agency policies and practices with regard to the other four principles were uneven. Specifically, agencies did not always have policies or practices in place to address the purpose specification, openness, and individual participation principles with respect to reseller data. The inconsistencies in applying these principles as well as the lack of specific agency policies can be attributed in part to ambiguities in OMB guidance regarding the applicability of the Privacy Act to information obtained from resellers. Further, privacy impact assessments, a valuable tool that could address important aspects of the Fair Information Practices, are not conducted often. Finally, components within each of the four agencies did not consistently hold staff accountable by monitoring usage of personal information from information resellers and ensuring that it was appropriate; thus, their application of the accountability principle was uneven. 
Agency procedures generally reflected the collection limitation, data quality, use limitation, and security safeguards principles. Regarding collection limitation, for most law-enforcement and counterterrorism purposes (which accounted for 90 percent of usage in fiscal year 2005), agencies generally limited their personal data collection in that they reported obtaining information only on specific individuals under investigation or associates of those individuals. Regarding data quality, agencies reported taking steps to mitigate the risk of inaccurate information reseller data by corroborating information obtained from resellers. Agency officials described the practice of corroborating information as a standard element of conducting investigations. Likewise, for non-law- enforcement use, such as debt collection and fraud detection and prevention, agency components reported that they mitigated potential problems with the accuracy of data provided by resellers by obtaining additional information from other sources when necessary. As for use limitation, agency officials said their use of reseller information was limited to distinct purposes, which were generally related to law enforcement or counterterrorism. Finally, while we did not assess the effectiveness of information security at any of these agencies, we found that all four had measures in place intended to safeguard the security of personal information obtained from resellers. The purpose specification, openness, and individual participation principles stipulate that individuals should be made aware of the purpose and intended uses of the personal information being collected about them, and, if necessary, have the ability to access and correct their information. 
These principles are reflected in the Privacy Act requirement for agencies to publish in the Federal Register, “upon establishment or revision, a notice of the existence and character of a system of records.” This notice is to include, among other things, the categories of records in the system as well as the categories of sources of records. In a number of cases, agencies using reseller information did not adhere to the purpose specification or openness principles in that they did not notify the public that they were using such information and did not specify the purpose for their data collections. Agency officials said that they generally did not prepare system-of-records notices that would address these principles because they were not required to do so by the Privacy Act. The act’s vehicle for public notification—the system-of-records notice—becomes binding on an agency only when the agency collects, maintains, and retrieves personal data in the way defined by the act or when a contractor does the same thing explicitly on behalf of the government. Agencies generally did not issue system-of-records notices specifically for their use of information resellers largely because information reseller databases were not considered “systems of records operated by or on behalf of a government agency” and thus were not considered subject to the provisions of the Privacy Act. OMB guidance on implementing the Privacy Act does not specifically refer to the use of reseller data or how it should be treated. According to OMB and other agency officials, information resellers operate their databases for multiple customers, and federal agency use of these databases does not amount to the operation of a system of records on behalf of the government. Further, agency officials stated that merely querying information reseller databases did not amount to agency “maintenance” of the personal information being queried and thus also did not trigger the provisions of the Privacy Act. 
In many cases, agency officials considered their use of resellers to be of this type—essentially “ad hoc” querying or “pinging” of reseller databases for personal information about specific individuals, which they believed they were not doing in connection with a formal system of records. In other cases, however, agencies maintained information reseller data in systems for which system-of-records notices had been previously published. For example, law enforcement agency officials stated that, to the extent they retain the results of reseller data queries, this collection and use is covered by the system of records notices for their case file systems. However, in preparing such notices, agencies generally did not specify that they were obtaining information from resellers. Among system of records notices that were identified by agency officials as applying to the use of reseller data, only one—TSA’s system of records notice for the test phase of its Secure Flight program—specifically identified the use of information reseller data. In several of these cases, agency sources for personal information were described only in vague terms, such as “private organizations,” “other public sources,” or “public source material,” when information was being obtained from information resellers. 
The inconsistency with which agencies specify resellers as a source of information in system-of-records notices is due in part to ambiguity in OMB guidance, which states that “for systems of records which contain information obtained from sources other than the individual to whom the records pertain, the notice should list the types of sources used.” Although the guidance is unclear what would constitute adequate disclosure of “types of sources,” OMB and DHS Privacy Office officials agreed that to the extent that reseller data is subject to the Privacy Act, agencies should specifically identify information resellers as a source and that merely citing public records information does not sufficiently describe the source. Aside from certain law enforcement exemptions to the Privacy Act, adherence to the purpose specification and openness principles is critical to preserving a measure of individual control over the use of personal information. Without clear guidance from OMB or specific policies in place, agencies have not consistently reflected these principles in their collection and use of reseller information. As a result, without being notified of the existence of an agency’s information collection activities, individuals have no ability to know that their personal information could be obtained from commercial sources and potentially used as a basis, or partial basis, for taking action that could have consequences for their welfare. PIAs can be an important tool to help agencies to address openness and purpose specification principles early in the process of developing new information systems. To the extent that PIAs are made publicly available, they provide explanations to the public about such things as the information that will be collected, why it is being collected, how it is to be used, and how the system and data will be maintained and protected. 
However, few agency components reported developing PIAs for their systems or programs that make use of information reseller data. As with system-of-records notices, agencies often did not conduct PIAs because officials did not believe they were required. Current OMB guidance on conducting PIAs is not always clear about when they should be conducted. According to guidance from OMB, a PIA is required by the E-Government Act when agencies “systematically incorporate into existing information systems databases of information in identifiable form purchased or obtained from commercial or public sources.” However, the same guidance also instructs agencies that “merely querying a database on an ad hoc basis does not trigger the PIA requirement.” Reported uses of reseller data were generally not described as a “systematic” incorporation of data into existing information systems; rather, most involved querying a database and in some cases retaining the results of these queries. OMB officials stated that agencies would need to make their own judgments on whether retaining the results of searches of information reseller databases constituted a “systematic incorporation” of information. The DHS Privacy Office has been working to clarify guidance on the use of reseller information in general as well as the specific requirements for conducting PIAs. DHS recently issued guidance requiring PIAs to be conducted whenever reseller data are involved. 
However, although the DHS guidance clearly states that PIAs are required when personally identifiable information is obtained from a commercial source, it also states that “merely querying such a source on an ad hoc basis using existing technology does not trigger the PIA requirement.” Like OMB’s guidance, the DHS guidance is not clear, because agency personnel are left to make individual determinations as to whether queries are “on an ad hoc basis.” Until PIAs are conducted more thoroughly and consistently, the public is likely to remain incompletely informed about agency purposes and uses for obtaining reseller information. In our report we recommended that the Director, OMB, revise privacy guidance to clarify the applicability of requirements for public notices and privacy impact assessments to agency use of personal information from resellers and direct agencies to review their uses of such information to ensure it is explicitly referenced in privacy notices and assessments. Further, we recommended that agencies develop specific policies for the use of personal information from resellers. According to the accountability principle, individuals controlling the collection or use of personal information should be accountable for ensuring the implementation of the Fair Information Practices. This means that agencies should take steps to ensure that they use personal information from information resellers appropriately. Agencies described using activities to oversee their use of reseller information that were largely based on trust in the individual user to use the information appropriately, rather than management oversight of usage details. 
For example, in describing controls placed on the use of commercial data, officials from component agencies identified measures such as instructing users that reseller data are for official use only, and requiring users to sign statements attesting 1) to their need to access information reseller databases and 2) that their use will be limited to official business. Additionally, agency officials reported that their users are required to select from a list of vendor-defined “permissible purposes” (for example, law enforcement, transactions authorized by the consumer) before conducting a search on reseller databases. While these practices appear consistent with the accountability principle, they are focused on individual user responsibility instead of monitoring and oversight. Agencies did not have practices in place to obtain reports from resellers that would allow them to monitor usage of reseller databases at a detailed level. Although agencies generally receive usage reports from the information resellers, these reports are designed primarily for monitoring costs. Further, these reports generally contained only high-level statistics on the number of searches and databases accessed, not the contents of what was actually searched, thus limiting their utility in monitoring usage. To the extent that federal agencies do not implement methods such as user monitoring or auditing of usage records, they provide limited accountability for their usage of information reseller data and have limited assurance that the information is being used appropriately. In summary, services provided by information resellers are important to federal agency functions such as law enforcement and fraud protection and identification. Resellers have practices in place to protect privacy, but these practices are not fully consistent with the Fair Information Practices, which resellers are not legally required to follow. 
Among other things, resellers collect large amounts of information about individuals without their knowledge or consent, do not ensure that the data they make available are accurate for a given purpose, and generally do not make corrections to the data when errors are identified by individuals. Information resellers believe that application of the relevant principles of the Fair Information Practices is inappropriate or impractical in these situations. However, given that reseller data may be used for a variety of purposes, determining the appropriate degree of control or influence individuals should have over the way in which their personal information is obtained and used—as envisioned in the Fair Information Practices—is critical. As Congress weighs various legislative options, adherence to the Fair Information Practices will be an important consideration in determining the appropriate balance between the services provided by information resellers to customers such as government agencies and the public’s right to privacy. While agencies take steps to adhere to Fair Information Practices such as the collection limitation, data quality, use limitation, and security safeguards principles, they have not taken all the steps they could to reflect others—or to comply with specific Privacy Act and e-Government Act requirements—in their handling of reseller data. Because OMB privacy guidance does not clearly address information reseller data, agencies are left largely on their own to determine how to satisfy legal requirements and protect privacy when acquiring and using reseller data. Without current and specific guidance, the government risks continued uneven adherence to important, well-established privacy principles and lacks assurance that the privacy rights of individuals are adequately protected. Mr. Chairmen, this concludes my testimony today. I would be happy to answer any questions you or other members of the subcommittees may have. 
If you have any questions concerning this testimony, please contact Linda Koontz, Director, Information Management, at (202) 512-6240, or koontzl@gao.gov. Other individuals who made key contributions to this testimony were Mathew Bader, Barbara Collier, John de Ferrari, Pamlutricia Greenleaf, David Plocher, Jamie Pressman, and Amos Tevelow. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Federal agencies collect and use personal information for various purposes from information resellers--companies that amass and sell data from many sources. GAO was asked to testify on its report being issued today on agency use of reseller data. For that report, GAO was asked to determine how the Departments of Justice, Homeland Security, and State and the Social Security Administration use personal data from resellers and to review the extent to which information resellers' policies and practices reflect the Fair Information Practices, a set of widely accepted principles for protecting the privacy and security of personal data. GAO also examined agencies' policies and practices for handling personal data from resellers to determine whether these reflect the Fair Information Practices. In fiscal year 2005, the Departments of Justice, Homeland Security, and State and the Social Security Administration reported that they used personal information obtained from resellers for a variety of purposes, including performing criminal investigations, locating witnesses and fugitives, researching assets held by individuals of interest, and detecting prescription drug fraud. The agencies spent approximately $30 million on contractual arrangements with resellers that enabled the acquisition and use of such information. About 91 percent of the planned fiscal year 2005 spending was for law enforcement (69 percent) or counterterrorism (22 percent). The major information resellers that do business with the federal agencies GAO reviewed have practices in place to protect privacy, but these measures are not fully consistent with the Fair Information Practices. 
For example, the principles that the collection and use of personal information should be limited and its intended use specified are largely at odds with the nature of the information reseller business, which is based on obtaining personal information from many sources and making it available to multiple customers for multiple purposes. Resellers believe it is not appropriate for them to fully adhere to these principles because they do not obtain their information directly from individuals. Nonetheless, in many cases, resellers take steps that address aspects of the Fair Information Practices. For example, resellers reported that they have taken steps recently to improve their security safeguards, and they generally inform the public about key privacy principles and policies. However, resellers generally limit the extent to which individuals can gain access to personal information held about themselves, as well as the extent to which inaccurate information contained in their databases can be corrected or deleted. Agency practices for handling personal information acquired from information resellers did not always fully reflect the Fair Information Practices. That is, for some of these principles, agency practices were uneven. For example, although agencies issued public notices when they systematically collected personal information, these notices did not always notify the public that information resellers were among the sources to be used. This practice is not consistent with the principle that individuals should be informed about privacy policies and the collection of information. Contributing to the uneven application of the Fair Information Practices are ambiguities in guidance from the Office of Management and Budget regarding the applicability of privacy requirements to federal agency uses of reseller information. In addition, agencies generally lack policies that specifically address these uses.
The Army spends about $1.3 billion annually on depot maintenance work that includes the repair, overhaul, modification, and upgrading of aircraft, tracked and wheeled combat vehicles, and electronic items. It also includes limited manufacture of parts, technical support, testing, and software maintenance. This work generally requires extensive shop facilities, specialized equipment, and skilled technical and engineering personnel. Depot maintenance work is generally performed by government employees in government-owned and operated depots and by private sector employees in government-owned or contractor-owned facilities. During World War II, at a time when the Army was purchasing massive quantities of new, modernized, and more sophisticated weapon systems, an emerging requirement for depot-level support was met largely by the creation of government-owned and operated depots. This capability was expanded to meet the demands of Cold War contingency requirements and to provide peacetime depot-level support for an expanded array of Army systems and equipment. By 1976, the Army operated 10 maintenance depots in the continental United States and 2 in Europe. Since the mid-1970s, our agency and others have reported on the redundancies and excess capacity that existed in DOD’s depot maintenance operations and facilities, including those owned by the Army. (A list of related GAO reports and testimonies is attached.) In recent years, major force structure reductions following the end of the Cold War have substantially reduced depot maintenance requirements and increased the amount of costly excess capacity. The problem of excess capacity, for the most part, has been addressed through the BRAC process. Prior to the process, some downsizing of the Army depot system was achieved through the closure of the Sharpe, California, and Pueblo, Colorado, maintenance depots. 
During the first three BRAC rounds in 1988, 1991, and 1993, the process determined that three of the Army’s eight remaining maintenance depots should be closed. Consequently, maintenance work ceased at depots located in Lexington, Kentucky, Sacramento, California, and Tooele, Utah, with most workloads from the closing depots transferred to other DOD depots. The February 28, 1995, report from the Secretary of Defense to the Chairman of the BRAC Commission recommended realignment of the Red River and Letterkenny depot-level maintenance missions. The report recommended that the Red River and Letterkenny ground combat vehicle maintenance missions be transferred to the Anniston depot. It also recommended changing the 1993 BRAC Commission recommendation to consolidate tactical missile maintenance at Letterkenny by transferring the missile guidance system maintenance workload to the Tobyhanna depot. The BRAC Commission recommended that the Red River depot be downsized rather than closed. Citing concern that complete closure of the Red River depot would adversely affect ground combat vehicle readiness and sustainability, the Commission concluded that capability for the depot-level maintenance of ground combat vehicles should be maintained at more than one Army depot. The Commission recommended that all maintenance work pertaining to the Bradley family of vehicles be retained at the Red River depot and that other workloads be transferred to other depot maintenance activities, including the private sector. The Commission agreed with the Secretary of Defense’s recommendation to realign depot-level maintenance at the Letterkenny depot to other depots or the private sector. 
It recommended the (1) transfer of towed and self-propelled combat vehicle maintenance workloads to the Anniston depot and missile guidance system maintenance workload to the Tobyhanna depot or the private sector and (2) retention of an enclave for conventional ammunition storage and tactical missile disassembly and storage at Letterkenny. Table 1 identifies the five remaining Army depot-level maintenance activities, provides a general description of each depot’s workload, and highlights the potential effect of the implementation of BRAC decisions. In developing its March 1996 report to Congress entitled Depot-Level Maintenance and Repair Workload, the Army reported that it would privatize workloads assigned to depots being realigned. This included privatizing, either in-place or at existing contractor locations, the maintenance of various trucks, semitrailers, and troop support equipment performed by government employees assigned to the Red River depot. Most of this work was received from the Tooele depot, which the Commission recommended for closure in 1993. The March 1996 workload report also included consolidating tactical missile maintenance workload and maintenance requirements for the Paladin self-propelled artillery vehicle to government-owned, contractor-operated (GOCO) facilities to be located on the existing Letterkenny installation. Army officials stated that these plans have not yet been finalized and are dependent on the repeal of the 60/40 provision in 10 U.S.C. 2466, which limits the amount of depot maintenance funds that can be used for private-sector performance. The Army Materiel Command is responsible for planning, managing, and implementing the BRAC Commission’s closure and realignment recommendations. The Army Industrial Operations Command, a subordinate activity under the Materiel Command, provides management support and oversight of Army depot operations. 
In July 1995, the Army developed preliminary implementation plans regarding the distribution of workload from depots affected by the 1995 BRAC. However, as of August 5, 1996, these plans had not been finalized. Our review is based on the Army’s plans as described to us as of that date. The Army continues to have substantial excess capacity within its depot maintenance system. Although still evolving, Army plans for allocating some workloads from realigned depots to remaining depots will likely achieve some excess capacity reduction and savings at two activities. However, in the context of the Army’s overall depot maintenance operations, there are opportunities for achieving greater efficiencies and cost-effectiveness. In particular, tentative plans to privatize-in-place certain workloads would result in an estimated 4-percent increase in excess capacity over the next 3 years. Consequently, these plans do not appear to be cost-effective. By consolidating these workloads with similar work at remaining Army depots, the fixed overhead costs would be spread over a larger number of items, decreasing the per unit costs of depot-maintenance workloads. Additionally, since private-sector contractors also have significant excess capacity in existing manufacturing and repair facilities, privatization-in-place at either the Letterkenny or Red River depot would also aggravate excess capacity conditions in the private sector. Further, it is questionable whether major excess capacity reductions will be achieved from public-private sector joint ventures at this time. Tentatively planned workload transfers from implementing BRAC Commission recommendations should result in some increase in capacity utilization and reduction in costs at two of the remaining Army depots—if the planned work materializes and the gains are not offset by future workload reductions in other areas. 
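The overhead-spreading effect described above can be illustrated with a simple calculation. The figures below are hypothetical, chosen only to show the mechanics, and are not drawn from the Army's cost data.

```python
def hourly_rate(fixed_overhead, variable_cost_per_hour, direct_labor_hours):
    """Per-hour cost when a depot's fixed overhead is spread over its workload."""
    return variable_cost_per_hour + fixed_overhead / direct_labor_hours

# Hypothetical depot: $50 million in fixed overhead, $60/hour variable cost.
FIXED = 50_000_000
VARIABLE = 60.0

before = hourly_rate(FIXED, VARIABLE, 1_500_000)  # workload before consolidation
after = hourly_rate(FIXED, VARIABLE, 2_500_000)   # after receiving transferred work

print(f"rate before: ${before:.2f}/hr, after: ${after:.2f}/hr")
```

The fixed-overhead term shrinks on a per-hour basis as the direct labor base grows, which is why consolidating workloads lowers hourly rates even when total overhead is unchanged.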
The Anniston depot is scheduled to receive combat vehicle workloads from the Letterkenny and Red River depots between 1996 and 1999. Additionally, the Tobyhanna depot is expected to receive the common-use ground communication and electronics workload from a closing Air Force depot at McClellan Air Force Base in Sacramento, California. However, based on presidential direction, this transition has been delayed until the year 2001—an action that will increase transition costs and decrease anticipated savings from the planned workload realignment. The Army tentatively plans to transfer about 1.2 million direct labor hours of workload to Anniston from two realigned maintenance depots. A workload transfer of this magnitude—if funded at this level, with no further reductions in the Anniston depot’s remaining workload—would increase Anniston’s overall capacity utilization in fiscal year 1999 from 40 percent to 66 percent. By improving the facility utilization and spreading the fixed overhead over a larger volume of workload, Anniston’s hourly operating costs could be reduced by about $14 (from about $98 to $84). Anniston officials estimated that the one-time cost to transfer these workloads is $23.4 million. The transition costs include expenditures for relocating equipment from the realigned depots, purchasing new equipment, improving facilities, and related personnel actions. The size of the workload being transferred could represent up to about 680 staff years. However, because Anniston’s current workload is declining and the skills required to perform the transferring work are similar to those required for the current work, Army officials told us the receiving depot can absorb the new workload without an increase in personnel. The 1995 BRAC Commission recommended that the Red River depot be downsized by transferring all non-Bradley vehicle workloads to other depot maintenance activities, including the private sector. 
The Army tentatively plans to transfer all non-Bradley related core workloads to the Anniston depot. This workload includes about 719,000 direct labor hours of fiscal year 1999 programmed work for M113 armored personnel carriers and M9 armored combat earthmovers. Anniston depot officials plan to begin receiving the new workloads during fiscal year 1997 and plan to be in full production by fiscal year 1999. The 1995 BRAC Commission also recommended the transfer of all self-propelled and towed artillery maintenance work from Letterkenny to Anniston. To comply, the Army tentatively plans to transfer about 460,000 direct labor hours of fiscal year 1999 programmed workload. The Anniston depot officials plan to initiate training in August 1996 to facilitate the orderly transition of the Letterkenny workload. According to Anniston depot officials, industrial equipment will be moved and some new equipment will be procured during the first and second quarters of fiscal 1997. In formulating its 1995 recommendation to close McClellan Air Force Base, the BRAC Commission recommended the transfer of the common-use ground communication-electronics workload to the Tobyhanna Army Depot. This workload, which includes items such as radar, radio communications, electronic warfare, navigational aids, electro-optic and night vision devices, satellite sensors, and cryptographic security equipment, is currently estimated to be 1.2 million direct labor hours annually. A workload transfer of this magnitude, if funded at this level, would increase Tobyhanna’s capacity utilization from 49 percent to 65 percent, reduce the labor rate by $6 (from $64 to $58), and produce an annualized savings of about $24 million. However, the Air Force is delaying transfer of this work until the year 2001 in response to the President’s direction that 8,700 jobs be retained at McClellan until the year 2001 to minimize the economic impact on the local community. 
According to Army officials, delaying all of the workload transfers until the year 2001 could require the Tobyhanna depot to undergo a reduction-in-force, followed by a costly rehiring and retraining situation when the Air Force workloads are eventually transferred. As a result of a declining workload, Tobyhanna is downsizing its personnel during 1996 with a voluntary separation of about 250 personnel. Army officials said that an involuntary separation of about 800 personnel may also be required in fiscal year 1997 or 1998 if no additional workloads are transferred to Tobyhanna. This reduction would include the loss of personnel having critical skills and competencies needed to perform the ground communications workload. The Army maintains that its tentative privatization plans will be more cost-effective than transferring workloads to one of the remaining DOD depots. However, cost-benefit analyses are incomplete and are based on unsupported savings assumptions. Furthermore, plans to privatize workloads at facilities that the BRAC Commission recommended for realignment will not achieve the BRAC objective of reducing costly excess capacity. The Army is not likely to achieve (1) the $953-million savings the BRAC Commission projected from realigning the Letterkenny depot if it privatizes-in-place the tactical missile and Paladin workloads or leaves a substantial government tactical missile maintenance workload ongoing at Letterkenny or (2) the $274-million savings the Commission projected from downsizing operations at the Red River depot and transferring work to other DOD depots. Also, for readiness reasons, the Red River depot is being retained, despite significant excess capacity and rising operation costs. 
Despite movement of some workloads to remaining Army depots, implementation of the BRAC Commission’s recommendations, as reflected in DOD’s report to Congress, will likely result in excess capacity at the four remaining government-owned and operated depots, increasing from 42 percent to 46 percent. This increase is caused by a number of factors, including (1) a forecasted decrease in future year depot-level maintenance workload; (2) the Army’s tentative decision to establish a GOCO facility at Letterkenny for tactical missile and Paladin combat vehicle work rather than transfer the work to another DOD depot; (3) the BRAC recommendation, for readiness reasons, to downsize, rather than close, the Red River depot; and (4) the Defense Depot Maintenance Council’s decision supporting the Air Force’s plan to delay transfer of the ground communications-electronics workload from the Sacramento Air Logistics Center to the Tobyhanna Army Depot until the year 2001. Table 2 shows maximum potential capacity and current excess capacity for the Army’s five depots based on programmed fiscal year 1996 workload. This table does not reflect the Army’s tentative workload transfer and privatization plans. The BRAC Commission’s recommendation to realign the Letterkenny depot and downsize the Red River depot was based on anticipated savings from eliminating costly excess capacity, reducing base operation costs, and reducing personnel by consolidating similar workloads at other underutilized depots. While potential privatization initiatives could reduce the total number of personnel currently required to perform various workloads, they are not likely to achieve the $1.227 billion ($953 million at Letterkenny and $274 million at Red River) savings that the BRAC Commission projected could be achieved by implementing BRAC recommendations at Letterkenny and Red River. 
Recent changes in force structure and military strategies have created significant excess capacities in private manufacturing and repair facilities, as well as in military depots. Industry representatives state that the private sector has been reducing its excess capacity through mergers, closures, and consolidations, but DOD has not made comparable reductions in the military depot infrastructure. A recent Defense Science Board study concluded that privatization-in-place should be avoided because this approach to downsizing results in the preservation of surplus capacity. The Army’s privatization plans include an unsupported assumption that private-sector firms will perform the work for 20 percent less than an Army depot. Army officials told us the 20-percent savings assumption is based on statements in the May 1995 Commission on Roles and Missions’ report entitled Directions for Defense. We have reported that privatization savings reported by the Commission do not apply to depot maintenance because of limited or no private-sector competition and the existence of excess public depot capacity that increases the cost of performing depot maintenance work in remaining DOD depot facilities. For example, the Commission’s privatization savings estimate was based on studies of public-private competitions under Office of Management and Budget Circular A-76. These competitions were generally for simple, routine, and repetitive tasks that required little capital investment, such as grounds maintenance, motor pool operations, and stocking shelves. In these competitions, which attracted a large number of private-sector offerors, public activities were also allowed to participate and won about half. Further, savings projections were based on estimates, and our work and defense audit reports have shown that projected savings for contracted services were often not achieved due to cost growth and other factors. 
Consolidating the tactical missile workload at the Tobyhanna depot could significantly improve the utilization at that depot and decrease costs by as much as $27 million annually. However, the Army plans to privatize-in-place tactical missile workloads at the Letterkenny depot without determining the cost-effectiveness of transferring all the work to the Tobyhanna depot and the potential for reducing excess capacity. Additionally, privatizing the missile workload, which has traditionally been defined as core, will require a risk assessment, but the Army has not conducted one. To support its privatization plans, in January 1996, the Army Materiel Command requested its Industrial Operations Command to develop a cost-benefit analysis to support the proposed GOCO operation for tactical missiles at the Letterkenny Army Depot. The Operations Command was asked to analyze cost benefits for (1) transferring 14 percent of Letterkenny’s missile workload to Tobyhanna with the remaining workload to be performed in a government-owned, government-operated depot at the current Letterkenny location; (2) transferring all of Letterkenny’s tactical missile work to a government-owned, contractor-operated facility; and (3) establishing a government-owned, contractor-operated depot for 14 percent of Letterkenny’s workload with the remaining work continuing to be performed in a government-owned and government-operated depot at the current Letterkenny location. The request did not ask for an assessment of the costs and benefits of transferring the complete missile maintenance workload package to Tobyhanna. Army Materiel Command officials told us that they interpret the 1995 BRAC Commission recommendation to transfer missile guidance system workloads from Letterkenny to Tobyhanna to only include the work required on circuit cards installed in six Air Force and Navy missile systems. 
This work represents less than 14 percent of the consolidated missile maintenance workload package at Letterkenny. Based on this interpretation, the Army could choose to retain 86 percent of DOD’s consolidated missile maintenance workload at Letterkenny as a government-owned, government-operated facility. Army Materiel Command officials told us that work on the requested cost-benefit analyses is in a “strategic pause” pending action by Congress to repeal or modify 10 U.S.C. 2466, which currently prohibits the use of more than 40 percent of the funds made available in a fiscal year for depot-level maintenance or repair for private-sector performance—the 60/40 provision. However, Materiel Command officials also told us they have no current plans to analyze options to transfer the total tactical missile workload package to the Tobyhanna depot. They stated that if the 60/40 provision is not repealed, the preferred option may be to establish a joint military-contractor partnership, with up to 86 percent of the tactical missile work retained under military ownership and operation while about 14 percent would be performed in the military-owned facility by contractor personnel. Determining the most cost-effective alternative for performing the tactical missile workload would require an assessment of the costs and benefits that could be achieved from transferring the full tactical missile workload package to Tobyhanna. Our analysis shows that transfer of the complete missile maintenance workload package, estimated at about 1.5 million direct labor hours in fiscal year 1999, would reduce Tobyhanna’s excess capacity from about 51 percent to about 31 percent. Further, by consolidating the tactical missile workload in this facility and spreading fixed overhead costs over a larger amount of work, the Tobyhanna depot’s hourly operating costs could be reduced by about $6, resulting in annualized savings of about $27 million. 
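As a rough consistency check on figures like these, annualized savings can be approximated as the hourly-rate reduction multiplied by the depot's total direct labor hours. The 4.5-million-hour total workload below is our assumption, chosen only to illustrate how the cited numbers relate; it is not an Army figure.

```python
def annualized_savings(rate_reduction_per_hour, total_direct_labor_hours):
    """Approximate yearly savings when a depot's hourly operating rate drops."""
    return rate_reduction_per_hour * total_direct_labor_hours

# A $6/hour reduction spread over an assumed 4.5 million direct labor hours
# of total depot workload yields about the $27 million cited above.
print(annualized_savings(6, 4_500_000))  # 27000000
```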
The transfer of both the electronics workload from McClellan Air Force Base and the missile workload from Letterkenny would increase Tobyhanna’s overall facility utilization to about 85 percent of maximum potential capacity—based on the standard 5-day week, single 8-hour per day shift—and result in projected annualized savings of about $51 million. Additionally, the BRAC Commission identified 20-year savings of $953 million from realigning the Letterkenny depot. These savings are not likely to be achieved if the Army privatizes-in-place at Letterkenny or continues to perform much of the missile workload in a government-owned and operated depot. The cost-effectiveness of the Army’s plan to operate a GOCO facility at the Letterkenny depot to support the Paladin self-propelled artillery vehicle until the year 2001 is questionable—particularly given that the capacity and capability to perform the work at the Anniston depot currently exist. Also, continuing work at the Letterkenny depot would require continued funding of fixed overhead costs at that facility. The Letterkenny depot has an ongoing partnership arrangement with private industry to upgrade and modernize the Paladin. Government employees overhaul and refurbish M109 chassis and private-sector employees fabricate and install new gun mounts and turrets. System integration is accomplished by employees from both sectors. The current upgrade program is scheduled to be completed in fiscal year 1999. However, the Army tentatively plans to terminate government employee participation in the program in fiscal year 1997 by changing to 100 percent contractor employee support for the final 2 years of the program. 
An Army Materiel Command official informed us that the Army leadership met informally and determined that transferring capability for future Paladin maintenance requirements to the private sector using employees from the realigning Letterkenny depot would be less risky and costly than establishing capability at another DOD depot. However, DOD policy requires a documented formal risk assessment before core workloads are privatized, and no such assessment of this workload transfer to the private sector has been completed. Anniston Army depot officials stated that this depot already has repair capability for the M109 family of vehicles, which are similar to the Paladin. Prior workload data show that the depot currently overhauls or repairs eight to nine of these vehicles per year. Further, the Anniston depot would continue to operate with at least 25 percent excess capacity, even after the transfer of core workload expected to come from the Red River depot. The consolidation of Paladin workload would further improve the utilization of the Anniston facility. On the other hand, if the Paladin workload continues to be performed at Letterkenny, whether in a government-operated or a contractor-operated facility, it will require continued funding of fixed overhead costs at that facility that would otherwise be eliminated. A determination of the most cost-effective source of repair for future Paladin work would require an analysis of overhead costs at both potential repair locations. The Army’s tentative plans to privatize tactical wheeled vehicle and troop support equipment workloads currently assigned to the Red River depot are based on the assumption that 20 percent savings can be achieved through privatization, as concluded by the Commission on Roles and Missions. However, a comprehensive economic analysis to document the benefits the Army expects to achieve has not been completed. Various problems and unresolved issues have delayed privatization efforts. 
For example, the Army Materiel Command has not determined if these workloads will be privatized-in-place or awarded to contractors having the existing capability and capacity to perform work at other locations. Further, initial efforts by the Army to award repair contracts have been delayed because technical data and workload specifications lack the specificity needed to solicit offers from private-sector contractors. The 1993 BRAC Commission recommended closure of maintenance facilities at the Tooele depot and transfer of workloads to other maintenance activities, including the private sector. The Army initially planned to transfer all of the Tooele depot’s tactical wheeled and troop support equipment maintenance workloads to the Red River depot. However, in May 1994, the Army Materiel Command determined that because these workloads did not support core capabilities, they would be offered for privatization. Subsequently, the Army transferred the maintenance mission for these systems to the Red River depot for interim support, pending award of repair contracts to private-sector firms. However, privatization of these systems has been delayed because the Army lacked detailed technical data, including component tolerances and workload descriptions, that are required to conduct competitions. The 1995 BRAC Commission recommended that all Red River work other than the Bradley family of fighting vehicles should be moved to other depot maintenance activities, including the private sector. This recommendation did not discuss privatizing-in-place the workloads at Red River. While the Army has reported to Congress that this workload will be privatized, a comprehensive cost analysis has not yet been completed. Army Materiel Command officials told us that a comprehensive cost-benefit analysis supporting privatization plans for the workload previously assigned to Tooele was initiated in September 1995. 
Results of this analysis were to be validated by the Army Audit Agency. It is not clear whether these plans include privatizing-in-place the workloads at Red River or accomplishing the work at existing contractor facilities. The officials also said that their preliminary analysis assumes that private-sector contractors can accomplish the workloads for 20 to 30 percent less than the current costs of performing this work in Army depots. Recently, Command officials informed us that work on this analysis was suspended, pending action by Congress to repeal the 60/40 provision in 10 U.S.C. 2466 and 10 U.S.C. 2469, which requires competitive procedures that include the participation of public and private entities prior to privatizing depot maintenance workloads valued at not less than $3 million. Our analysis shows that, if the Army were to transfer, rather than privatize, wheeled vehicle and troop support equipment workloads, the work would absorb about 20 percent of Anniston’s existing excess capacity. Anniston depot officials also told us they could support the additional workloads with their current workforce. DOD recommended closure of the Red River depot. However, the BRAC Commission recommended that the depot be downsized rather than closed. The BRAC Commission was concerned that complete closure of the depot would adversely affect ground combat vehicle readiness and sustainability and concluded that capability for the depot-level maintenance of ground combat vehicles should be maintained at more than one depot. The Commission recommended that all maintenance work pertaining to the Bradley family of vehicles be retained at the Red River depot and that other workloads be transferred to other depot maintenance activities, including the private sector. This decision will leave the Red River depot with about 86 percent excess capacity and substantially increased operating costs. 
An Army Materiel Command analysis projected that costs for residual Bradley-related workloads will increase by about $15 per hour because fixed overhead costs will be allocated to a much smaller workload base. To illustrate the impact, the Red River depot currently is authorized 2,400 civilian employees to produce about 2 million direct labor hours of maintenance output. Of this number, overhead personnel account for about 21 percent of the depot workforce. After downsizing operations, the depot will produce 529,000 direct labor hours with an authorization of 1,476 civilians. The number of overhead personnel remains essentially unchanged under the downsized mode of operations, but the percentage of overhead personnel to total employees increases to about 35 percent. Army officials stated they plan to consider options for reducing the number of overhead positions that will remain at the depot once it is downsized. In April 1996, we testified that privatizing DOD depot maintenance activities, if not effectively managed, including the downsizing of remaining depot infrastructure, will exacerbate existing excess capacity problems and the inefficiencies inherent in underuse of depot maintenance capacity. DOD officials have stated they plan joint public-private ventures to more efficiently use remaining DOD depot capabilities and reduce excess capacity. While these initiatives have some potential, it is doubtful whether they will significantly reduce excess capacity in the Army. Traditionally, working relationships between public depots and the private sector are characterized either by a DOD depot providing equipment, facilities, and materials to a prime contractor for independent repair and modernization programs or by an original equipment manufacturer providing new parts to the depot for use in the repair of government-owned assets. 
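The Red River overhead figures cited above can be checked with simple arithmetic. The headcounts below are the report's own numbers; the only assumption, taken from the text, is that overhead staffing remains essentially unchanged after downsizing.

```python
# Red River depot overhead arithmetic, using the figures cited in the report.
# Assumption (from the text): overhead headcount is essentially unchanged
# after downsizing.
before_total = 2400                    # authorized civilians before downsizing
overhead = round(before_total * 0.21)  # about 21% overhead -> roughly 504 people
after_total = 1476                     # authorized civilians after downsizing

after_ratio = overhead / after_total
print(f"overhead share after downsizing: {after_ratio:.0%}")  # roughly 34%
```

The result, roughly 34 to 35 percent, is consistent with the increase the Army Materiel Command analysis projected.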
The Army has initiatives underway and additional plans to use some of its excess depot infrastructure through joint ventures with private industry. For example, as of June 1996, the Anniston depot had 10 programs underway or completed and 5 more planned. These projects involve (1) sharing depot-level workload on major weapon systems, (2) providing depot resources to private business, and (3) allowing private-sector use of depot facilities. Depot representatives told us the partnering, subcontracting, and leasing of depot facilities serve as a vehicle to develop new working relationships with the private sector and to make better use of the resources and capabilities that each has to offer. For example: Anniston’s largest shared work program is the M1/M1A2 tank upgrade program. Anniston depot employees disassemble the tank, prepare the hull for reassembly, and refurbish selected major assemblies such as the turbine engine and hull electronic components. General Dynamics Land Systems Division employees, located in Lima, Ohio, receive the components from Anniston, build the new turret structure, and assemble the upgraded tank for delivery to combat units. While an example of a joint venture, this program has no effect on Anniston’s excess capacity. A completed Anniston project provided depot resources to private industry for the fabrication of specialized mining equipment. Under a direct sales agreement for this nonmilitary project, the depot was a subcontractor to United Defense Limited Partnership Steel Products Division and was responsible for the manufacture of certain parts needed for specialized mining equipment used by a midwestern power company. The Anniston work included welding, machining, assembling, and painting of conventional face conveyor pan sections for the specialized mining equipment. 
In a planned project, Anniston employees will provide cleaning, welding, machining, asbestos removal, and painting support to General Dynamics Land Systems Division in a joint venture to upgrade FOX Nuclear, Biological and Chemical Reconnaissance vehicles. In accomplishing this project, contractor and depot personnel will use 28,000 square feet of underutilized depot infrastructure. It is too early to fully assess the potential impact of these and similar initiatives. Army officials believe emerging results of the earliest programs indicate that the concept has potential for preserving needed industrial base capabilities and improving the use of DOD depot-level maintenance facilities. However, Tobyhanna depot officials told us their attempts to get approval for various joint public-private depot projects have largely been unsuccessful because of various statutory constraints. We found there were numerous impediments to implementation of various joint venture initiatives. For example, 10 U.S.C. 4543 provides nine conditions that must be present in order for certain Army industrial facilities to sell manufactured articles or services outside DOD, including the requirement that the services cannot be obtained from a private-sector source within the continental United States. Depot officials stated that it would be unusual for there not to be at least one private-sector provider for most depot activities. Also, 10 U.S.C. 2471 requires that when depot equipment and facilities are leased to a private-sector firm, reimbursements must be made to the U.S. Treasury as miscellaneous receipts rather than to the depot providing the facilities. This provision reduces the incentive for the services to enter into such arrangements. Unless these and other statutes are revised, dual-use initiatives may have limited promise for significantly improving utilization and reducing excess capacity at Army depots. 
As we have previously reported, various statutory restrictions may affect the extent to which DOD depot-level workloads can be converted to private-sector performance, including 10 U.S.C. 2464, 10 U.S.C. 2466, and 10 U.S.C. 2469. Title 10 U.S.C. 2464 provides for a “core” logistics capability to be identified by the Secretary of Defense and maintained by DOD unless the Secretary waives DOD performance as not required for national defense. Titles 10 U.S.C. 2466 and 10 U.S.C. 2469 affect the extent to which depot-level workloads can be converted to private-sector performance. Title 10 U.S.C. 2466 prohibits the use of more than 40 percent of the funds made available in a fiscal year for depot-level maintenance or repair for private sector performance: the so-called “60/40” rule. Title 10 U.S.C. 2469 provides that DOD-performed maintenance and repair workloads valued at not less than $3 million cannot be changed to performance by another DOD activity without the use of “merit-based selection procedures for competitions” among all DOD depots and that such workloads cannot be changed to contractor performance without the use of “competitive procedures for competitions among private and public sector entities.” While each statute has some impact on the allocation of DOD’s depot-level workload, 10 U.S.C. 2469 is the primary impediment to privatization without a public-private competition. The competition requirements of 10 U.S.C. 2469 have broad application to all changes to the depot-level workload valued at not less than $3 million currently performed at DOD installations, including the Army depots at Red River and Letterkenny. The statute does not provide any exemptions from its competition requirements and, unlike most of the other laws governing depot maintenance, does not contain a waiver provision. 
Further, there is nothing in the Defense Base Closure and Realignment Act of 1990—the authority for the BRAC recommendations—that, in our view, would permit the implementation of a recommendation involving privatization outside the competition requirements of 10 U.S.C. 2469. The determination of whether any single conversion to private-sector performance conforms to the requirements of 10 U.S.C. 2469 depends upon the facts applicable to the particular conversion. DOD has not yet finalized its privatization plans for either the Letterkenny or Red River depot nor, as of the date of this report, has DOD informed us how it plans to comply with the statutory restrictions in these proposed conversions. It is unclear whether the planned conversions will comply with the requirements of existing law. We recommend that the Secretary of Defense direct the Secretary of the Army to take the following actions. Develop required capability in military depots to sustain core depot repair and maintenance capability for Army systems and conduct and adequately document a risk assessment for mission essential workloads being considered for privatization. Use competitive procedures, where applicable, to assure the cost-effectiveness of privatizing Army depot maintenance workloads. Evaluate the cost-effectiveness of consolidating all of the Letterkenny tactical missile workload at the Tobyhanna depot, including an assessment of the fixed cost savings impact on the workload currently maintained at Tobyhanna. Assess alternatives for reducing the costs of operating the Red River depot, given the extensive excess capacity that will remain at that facility after implementation of the 1995 BRAC recommendations. Complete cost analyses of the Army’s proposed privatization initiatives, including the Paladin self-propelled artillery vehicle and wheeled vehicle and troop support equipment maintenance workloads from Red River. 
In comparing the cost and benefits of consolidating these workloads at other DOD depots with privatizing, the analyses should include the impact on recurring cost of existing workloads at the depots that would receive the workloads. We recommend that the Secretary of Defense review the results of the Army’s cost analysis for tactical missile maintenance to determine the most cost-effective course of action. If the consolidation option is determined to be the most cost-effective, the Secretary should reassess the Army’s interpretation of the BRAC recommendation and (1) if the reassessment determines that the consolidation is consistent with the BRAC recommendation to consolidate the entire tactical missile workload, except for disassembly and storage at Tobyhanna, DOD should do so or (2) if the reassessment determines that such a transfer is not consistent with the BRAC recommendation, DOD should seek redirection from Congress to accomplish this action. In commenting orally on our draft report, DOD officials generally agreed with our findings and recommendations regarding the Army’s plans to privatize depot maintenance. They stated that the Army’s plans were tentative and contingent on congressional relief from requirements of title 10, most notably, the 60/40 rule and the requirement for public-private competitions before privatizing depot workloads that exceed $3 million. They pointed out that the Army’s plans are being revised because Congress did not repeal or modify these statutes. They said that in revising these plans, DOD and the Army intend to meet the requirements of existing statutes governing DOD’s depot-level maintenance operations. They did not specify how the plans would meet the requirements. 
DOD officials also noted that the BRAC 1995 recommendation regarding Letterkenny did not provide for the transfer of all the missile maintenance mission work to the Tobyhanna depot or the private sector—only the missile guidance work—which they estimate to represent about 14 percent of the missile maintenance workload. Officials stated that the combination of the BRAC 1993 recommendation to consolidate tactical missile maintenance at the Letterkenny depot with the BRAC 1995 recommendation to transfer or privatize only the missile guidance workload precludes the Army from consolidating all missile depot maintenance workload at Tobyhanna. Accordingly, they believe there is no need to evaluate the cost-effectiveness of an option that the Army cannot implement. In our view, the BRAC recommendation is sufficiently imprecise to support a variety of interpretations, including the Army’s proposal as well as the consolidation of all tactical missile depot maintenance at Tobyhanna. Notwithstanding this point, there is nothing that precludes the Army from assessing the cost-effectiveness of the various alternatives, whether they are implemented or not. Consequently, we have modified the recommendation in our draft report and are now recommending that the Secretary of Defense direct the Secretary of the Army to make such a cost analysis. In addition, we are recommending that the Secretary of Defense review the Army’s cost analysis to determine the most cost-effective course of action and, if necessary, seek redirection from Congress to implement the most cost-effective action. Our review indicated that consolidation of the missile workload at Tobyhanna and elimination of depot maintenance activities at Letterkenny, with the exception of conventional ammunition storage and tactical missile disassembly and storage, would offer the Army a more cost-effective alternative than retaining the excess capacity at both underutilized depots. 
The continuation of depot maintenance work at Letterkenny, whether as a government-owned and operated maintenance depot or as a privatized operation, is not likely to achieve the savings that could be achieved through the closure of facilities and the elimination of overhead at one activity. We also believe that given the potential opportunities to reduce infrastructure and maintenance costs by consolidating the missile workload at Tobyhanna, DOD should further evaluate this option before the Army proceeds with a less cost-effective option. In other comments, DOD officials suggested that maximum potential capacity not be used to measure unused capacity at Army depots because it (1) is not equivalent to the current industrial capacity of a depot, (2) includes building space that lacks plant equipment, and (3) is useful only for determining if additional work can be accomplished in existing space with the transfer or purchase of equipment. They said that the use of maximum potential capacity inflates the excess problem at the depots. We believe that maximum potential capacity is an acceptable benchmark for measuring capacity utilization and cost impact of underutilized facilities. The services developed, certified, and submitted such data for use in the BRAC 1995 process. We believe this capacity measure is a conservative projection of excess capacity, since it is based on a 5-day, one 8-hour shift operation, while private sector industrial use is frequently 2 or 2-1/2 shifts. Further, other DOD measures of capacity are constrained by numbers of available personnel and provide little indication of potential capacity available through more cost-effective use of industrial facilities and equipment. Maximum potential capacity provides a reasonable basis for analyzing the potential capacity available for workload consolidation. 
DOD officials also noted that the Paladin self-propelled artillery vehicle workload cannot continue at the Letterkenny depot in a government facility after the year 2001 because the 1995 BRAC Commission directed this mission be realigned to the Anniston depot. We recognize that the BRAC 1995 recommendation for Paladin depot maintenance workload was to move it to Anniston, but delaying transfer until the year 2001 could increase the cost of overall depot maintenance operations and decrease the savings expected to be derived from workload consolidation. Based on other DOD oral comments, we made technical changes to the draft report for clarification of several points. In conducting our work, we obtained documents from and interviewed officials from the Office of the Secretary of Defense, Washington, D.C.; Army headquarters, Washington, D.C.; Army Materiel Command and Army Audit Agency, Alexandria, Virginia; Industrial Operations Command, Rock Island, Illinois; and Anniston, Letterkenny, Red River, and Tobyhanna Army Depots. While at these depots, we discussed programs that involved partnering and other joint ventures with depot officials and reviewed pertinent documentation on the planned use and the results of these programs. We did not evaluate the merits of these programs because they generally were small in number and relatively new. In addition, whenever possible, we relied on information previously gathered as part of our prior reviews of DOD’s depot maintenance operations. To evaluate the impact on excess capacity, we compared maximum potential capacity and programmed workload forecast data, as certified to the Joint Cross Service Group for Depot Maintenance prior to the 1995 BRAC round. We determined current excess capacity percentages based on a comparison of maximum potential capacity and workload forecasts for fiscal year 1996. 
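The excess-capacity comparison described above reduces to simple arithmetic. The sketch below uses illustrative hour totals, not the certified BRAC data; the figures are chosen so the result matches the 42 percent pre-reallocation excess capacity cited elsewhere in this report.

```python
def excess_capacity_pct(max_potential_hours, programmed_hours):
    """Share of maximum potential capacity not covered by the
    programmed workload forecast, expressed as a percentage."""
    return (max_potential_hours - programmed_hours) / max_potential_hours * 100

# Illustrative only: 10.0M potential direct labor hours against a
# 5.8M-hour programmed workload yields 42% excess capacity.
print(round(excess_capacity_pct(10_000_000, 5_800_000)))  # -> 42
```

The same comparison, adjusted for planned capacity transfers and workload reallocations, yields the fiscal year 1999 projections discussed in the methodology.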
To assess the impact of planned workload reallocations, we compared maximum potential capacity to workload forecasts for fiscal year 1999, adjusting for (1) capacity that the Army plans to transfer to the Red River and Letterkenny depot communities, (2) planned reallocation of programmed workloads from the Letterkenny and Red River depots to the Anniston depot, and (3) planned privatization of Letterkenny and Red River workloads. To determine the impact of workload reallocation plans on operating costs for combat vehicle maintenance, we reviewed an economic analysis that the Army Materiel Command had prepared and a draft audit report of the analysis that the Army Audit Agency had prepared. To determine the impact of workload reallocation plans on future operating rates for electronic type items, we asked officials at the Tobyhanna depot to compute operating costs based on their pre-BRAC workload forecasts and supplemented by possible transfers of 1.5 million direct labor hours of tactical missile workload from the Letterkenny depot and 1.2 million hours of ground communications and electronics workload from the Sacramento Air Logistics Center. To determine the cost-effectiveness of the Army’s privatization plans, we held discussions with responsible Army officials and reviewed available documentation from the Army Materiel Command and its Industrial Operations Command. We could not fully evaluate these plans because the Army had not completed its analyses of all the privatization initiatives. Given that our analysis was constrained by the preliminary nature of some Army plans and the absence of some cost data, our analysis is based on assumptions that may change as better data become available. For DOD compliance with statutory requirements, we identified the applicable requirements and determined their impact on DOD’s plans to privatize depot-level maintenance workloads. 
We conducted our review between February 1996 and July 1996 in accordance with generally accepted government auditing standards. We are sending copies of this letter to the Secretaries of Defense, the Army, and the Air Force; the Director of the Office of Management and Budget; and interested congressional committees. Copies will be made available to others upon request. If you would like to discuss this matter further, please contact me at (202) 512-8412. Major contributors to this letter are listed in appendix I. Navy Depot Maintenance: Cost and Savings Issues Related to Privatizing-in-Place the Louisville, Kentucky Depot (GAO/NSIAD-96-202, Sept. 18, 1996). Defense Depot Maintenance: Commission on Roles and Mission’s Privatization Assumptions Are Questionable (GAO/NSIAD-96-161, July 15, 1996). Defense Depot Maintenance: DOD’s Policy Report Leaves Future Role of Depot System Uncertain (GAO/NSIAD-96-165, May 21, 1996). Defense Depot Maintenance: More Comprehensive and Consistent Workload Data Needed for Decisionmakers (GAO/NSIAD-96-166, May 21, 1996). Defense Depot Maintenance: Privatization and the Debate Over the Public-Private Mix (GAO/T-NSIAD-96-146/148, Apr. 16/17, 1996). Military Bases: Closure and Realignment Savings Are Significant, but Not Easily Quantified (GAO/NSIAD-96-67, Apr. 8, 1996). Depot Maintenance: Opportunities to Privatize Repair of Military Engines (GAO/NSIAD-96-33, Mar. 5, 1996). Closing Maintenance Depots: Savings, Personnel, and Workload Redistribution Issues (GAO/NSIAD-96-29, Mar. 4, 1996). Navy Maintenance: Assessment of the Public-Private Competition Program for Aviation Maintenance (GAO/NSIAD-96-30, Jan. 22, 1996). Depot Maintenance: The Navy’s Decision to Stop F/A-18 Repairs at Ogden Air Logistics Center (GAO/NSIAD-96-31, Dec. 15, 1995). Military Bases: Case Studies on Selected Bases Closed in 1988 and 1991 (GAO/NSIAD-95-139, Aug. 15, 1995). 
Military Base Closure: Analysis of DOD’s Process and Recommendations for 1995 (GAO/T-NSIAD-95-132, Apr. 17, 1995). Military Bases: Analysis of DOD’s 1995 Process and Recommendations for Closure and Realignment (GAO/NSIAD-95-133, Apr. 14, 1995). Aerospace Guidance and Metrology Center: Cost Growth and Other Factors Affect Closure and Privatization (GAO/NSIAD-95-60, Dec. 9, 1994). Navy Maintenance: Assessment of the Public and Private Shipyard Competition Program (GAO/NSIAD-94-184, May 25, 1994). Depot Maintenance: Issues in Allocating Workload Between the Public and Private Sectors (GAO/T-NSIAD-94-161, Apr. 12, 1994). Depot Maintenance (GAO/NSIAD-93-292R, Sept. 30, 1993). Depot Maintenance: Issues in Management and Restructuring to Support a Downsized Military (GAO/T-NSIAD-93-13, May 6, 1993). Air Logistics Center Indicators (GAO/NSIAD-93-146R, Feb. 25, 1993). Defense Force Management: Challenges Facing DOD as it Continues to Downsize its Civilian Workforce (GAO/NSIAD-93-123, Feb. 12, 1993).
Pursuant to a congressional request, GAO reviewed the Army's plans to reallocate depot maintenance workloads from depots recommended for closure or realignment by the Defense Base Realignment and Closure (BRAC) Commission, focusing on the: (1) impact on excess depot capacity and operating costs at the remaining defense depots; (2) cost-effectiveness of planned privatization options; and (3) Army's compliance with statutory requirements. GAO found that: (1) deciding the future of the Department of Defense (DOD) depot system is difficult; (2) depot maintenance privatization should be approached carefully, allowing for evaluation of economic, readiness, and statutory requirements that surround individual workloads; (3) privatizing depot maintenance activities, if not effectively managed, including the downsizing of remaining DOD depot infrastructure, could exacerbate existing capacity problems and the inefficiencies inherent in underuse of depot maintenance capacity; (4) privatization-in-place does not appear to be cost-effective given the excess capacity in DOD's depot maintenance system and the private sector; (5) tentative plans to transfer some workloads from realigned depots to remaining depots should improve capacity use and lower operating costs to some extent, but they will not resolve the Army's extensive excess depot capacity problems; (6) since the Army is not effectively downsizing its remaining depot maintenance infrastructure, privatization initiatives outlined in DOD's March 1996 workload analysis report to Congress will increase excess capacity in Army depots from 42 percent to 46 percent and increase Army depot maintenance costs; (7) privatizing-in-place will also aggravate excess capacity conditions in the private sector; (8) it is not clear how the Army intends to comply with statutory requirements such as 10 U.S.C. 
2469, which requires the use of competitive procedures before privatizing depot maintenance workloads valued at not less than $3 million; (9) the Army's plans for reallocating depot workloads are still evolving; (10) the Army has not demonstrated that depot privatization initiatives relating to the 1995 depot closure and realignment decisions are cost-effective; (11) the Army's use of a privatization savings assumption of 20 percent is not supported; (12) in the absence of further downsizing, opportunities exist to significantly reduce Army depot maintenance costs by transferring, rather than privatizing-in-place, workloads from closing and downsizing depots; and (13) workload transfers will improve utilization and decrease costs of operations at remaining facilities.
Private sector participation and investment in transit is not new. In the 1800s, the private sector played a central role in financing early transportation infrastructure development in the United States. For example, original sections of the New York City Subway were constructed from 1899 to 1904 by a public-private partnership. New York City sought private sector bids for the first four contracts to construct and finance segments of the initial subway system. Ultimately, a 50-year private sector lease to operate and maintain the system was used. Another example is the City of Chicago’s “L” transit system, which was built from the 1880s through the 1920s and operated by the Chicago Rapid Transit Company, a privately owned firm. The construction of the system was financed by the private sector. In following years, transportation infrastructure development became almost wholly publicly funded. Conditions placed on federal transportation grants-in-aid limited private involvement in federally funded projects. More recently, there has been a move back towards policies that encourage more private and public blending of funding, responsibility, and control in transportation projects. The federal government has progressively relaxed restrictions on private participation in highway and transit projects serving public objectives. This change in federal policy toward considering transit projects that use alternative approaches has also created an opportunity for states to reexamine their own public-private partnership policies. Conventional transit projects generally follow a “design-bid-build” approach whereby the project sponsor contracts with separate entities for the discrete functions of a project, generally keeping much of the project responsibility and risk with the public sector. 
FTA defines alternative approaches, including public-private partnerships, as those that increase the extent of private sector involvement beyond the conventional design-bid-build project delivery approach. These alternative approaches contemplate a single private sector entity being responsible and financially liable for performing all or a significant number of functions in connection with a project. In transferring responsibility and risk for multiple project elements to the private sector partner, the project sponsor often has less control over the procurement and the private sector partner may have the opportunity to earn a financial return commensurate with the risks it has assumed (see fig. 1). With these alternative approaches, many of the project risks that would normally be borne by the project sponsor in a design-bid-build approach are transferred to or shared with the private sector. Risk transfer involves assigning responsibility for a project risk in a contract so that the private sector is accountable for nonperformance or errors. Project sponsors can transfer a range of key project risks to the private sector, including those related to design, financing, construction performance and schedule, vehicle supply, maintenance, operations, and ridership. For example, design risk refers to whether an error causes delays or additional costs, or causes the project to fail to satisfy legal or other requirements. Ridership risk refers to whether the actual number of passengers on the transit system reaches forecasted levels. However, some risks may not be transferable. Much of the federal government’s share of new capital investment in mass transportation has come through FTA’s New Starts program. Through the New Starts program, FTA identifies and recommends new fixed-guideway transit projects—including heavy, light, and commuter rail, ferry, and certain bus projects—for federal funding. 
Over the last decade, the New Starts program has provided state and local agencies with over $10 billion to help design and construct transit projects throughout the country and is FTA’s largest capital grant program for transit projects. Moreover, since the early 1970s, a significant portion of the federal government’s share of new capital investment in mass transportation has been initiated through the New Starts process, resulting in full funding grant agreements. FTA must prioritize transit projects for funding by evaluating, rating, and recommending potential projects on the basis of specific financial commitment and project justification criteria. Using criteria set by law, FTA evaluates potential transit projects and assigns ratings to them annually. These evaluation criteria reflect a range of benefits and effects of the proposed project, such as cost-effectiveness, as well as the ability of the project sponsor to fund the project and finance the continued operation of its transit system. FTA uses the evaluation and rating process to decide which projects to recommend to Congress for funding. As part of the New Starts process, FTA approves projects into three phases: preliminary engineering (in which the designs of project proposals are refined), final design (the end of project development in which final construction plans and cost estimates, among other activities, are completed), and construction (in which FTA awards the project a full funding grant agreement, providing a federal commitment of funds subject to the availability of appropriations) (see fig. 2). We have previously identified FTA’s New Starts program as a model for other federal transportation programs because of its use of a rigorous and systematic evaluation process to distinguish among proposed New Starts investments. However, we and other stakeholders and policymakers have also identified challenges facing the program. 
Among these challenges is the need to streamline the New Starts project approval process. Our past reviews, for example, found that many project stakeholders thought that FTA’s process for evaluating New Starts projects was too time consuming, costly, and complex. The New Starts grant process is closely aligned with the conventional design-bid-build approach, whereby the project sponsor contracts with separate entities for the design and construction of the project. In 2005, Congress authorized FTA to establish the Public-Private Partnership Pilot Program to demonstrate (1) the advantages and disadvantages of transit projects that use alternative approaches for new fixed-guideway capital projects and (2) how FTA’s New Starts program can be modified or streamlined for these alternative approaches. The pilot program allows FTA to study projects that incorporate greater private sector involvement through alternative project delivery and financing approaches; integrate a sharing of project risk; and streamline design, construction, and operations and maintenance. FTA can designate up to three project sponsors for the pilot program. Projects selected under the pilot program will be eligible for a simplified and accelerated review process that is intended to substantially reduce the time and cost to the sponsors of New Starts projects. This can include major modifications of the requirements and oversight tools. For example, FTA may offer concurrent project approvals into preliminary engineering and final design. Further, FTA may modify its risk-assessment process—which aims to identify issues that could affect a project’s schedule or cost—as well as other project reviews. The modification of any of FTA’s New Starts requirements and oversight tools will be on a case-by-case basis if FTA determines enough risk is transferred to and equity capital is invested by the private sector. 
In addition to major modifications, FTA may also make use of other tools (not unique to the pilot program) to expedite the review process. These include Letters of No Prejudice, which allow a project sponsor to incur costs with the understanding that these costs may be reimbursable as eligible expenses (or eligible for credit toward the local match) should FTA approve the project for funding at a later date. FTA can also use Letters of Intent to signal an intention to obligate federal funds at a later date when funds become available. Finally, Early Systems Work Agreements obligate a portion of a project's federal funding so that project sponsors can begin preliminary project activities before a full funding grant agreement is awarded. FTA has employed a contractor to determine whether risk is effectively transferred from the public to the private sector for its pilot program projects, and will consider private sector due diligence as a substitute for its own. From a public perspective, an important component of analyzing the potential benefits and limitations of greater private sector involvement is consideration of the public interest. Although no federal definition of the public interest in transportation exists, and no federal guidance identifies public interest considerations in transportation, the public interest in transit may be understood in terms of the many stakeholders in public-private partnerships, each of which may have its own interests. Stakeholders include public transit authorities, transit agency employees, mass transit users, and members of the public who may be affected by ancillary effects of a transit public-private partnership or alternative project delivery approach, including bus and highway users, special interest groups, and taxpayers in general. Moreover, defining the public interest is a function of scale and can differ based on the range of stakeholders as well as the geographic and political domain considered.
For the purposes of its pilot program, FTA has stated that the public interest refers to the due diligence that FTA typically conducts as a public entity with a financial interest in a transit project. In the United States, the private sector has played a more limited role in the delivery and financing of transit projects than in some other countries. Since 2000, seven New Starts projects were completed using alternative approaches (see table 1). These projects have focused on delivery, rather than financing, and have used either the design-build or the design-build-operate-maintain delivery approach, in which the private sector role is to design and construct the project or to design, construct, operate, and maintain the project, respectively. In addition, to date, no completed New Starts projects have been privately financed, and therefore none of these projects have used private equity financing. There have also been very few examples of completed new fixed-guideway projects outside the New Starts program that have been privately financed. One such project, the Las Vegas Monorail, a 4-mile fixed-guideway system serving the resort corridor along Las Vegas Boulevard in Nevada, was financed with tax-exempt revenue bonds issued through the state of Nevada and with contributions from the area resorts and hotels. As previously mentioned, Congress authorized FTA to establish its Public-Private Partnership Pilot Program to demonstrate the advantages and disadvantages of these approaches in transit. As established, the pilot program studies projects that use alternative approaches that integrate a sharing of project risk and incorporate private equity capital, in order to illustrate where FTA can grant greater flexibility on some of its New Starts requirements to projects within the pilot program. However, to date, only one of the pilot projects is expected to incorporate private equity capital.
FTA designated three project sponsors for its Public-Private Partnership Pilot Program in 2007: Bay Area Rapid Transit—The Oakland Airport Connector project is to be a 3.2-mile system that will connect the Oakland International Airport to the Bay Area Rapid Transit's Coliseum Station and the rest of the transit system. In its original iteration, the Oakland Airport Connector planned to use a design-build-finance-operate-maintain project delivery approach that included private sector financing. However, lower-than-expected ridership predictions due to the economic climate, among other factors, led Bay Area Rapid Transit to move forward with a different alternative approach for its project—now design-build-operate-maintain—and undergo a new request for qualified bidders and request for proposals process. According to Bay Area Rapid Transit, a contract will be awarded in December 2009. Metropolitan Transit Authority of Harris County (Houston Metro)—North and Southeast Corridor projects are to provide improved access to Houston's Central Business District. This project was also originally to use a design-build-finance-operate-maintain approach that included private sector financing, but no bidders on the project proposed an equity investment, so it is instead using a design-build-operate-maintain approach. Issues related to price and risk transference led Houston Metro to switch private partners, and the new partner chose not to provide financing for the project. Groundbreaking for the construction of the two projects occurred in July 2009. Denver Regional Transportation District—East Corridor and Gold Line pilot projects are to connect the city's main railway station with its airport and other parts of the city. The project is using a design-build-finance-operate-maintain approach, which includes financing by the private sector partner.
The private sector partner will be selected through a competitive proposal process to deliver and operate the project under a long-term agreement. In September 2009, Denver Regional Transportation District released a request for proposals to prequalified teams. One ongoing New Starts project did not apply to be part of the pilot program but is using an alternative approach. The Dulles Silver Line, which will connect the Washington, D.C., metropolitan area's transit system with one of the area's three major airports, is using the design-build approach, with part of the local funding share coming from area businesses through a tax-increment financing district. In contrast, international project sponsors have delivered transit projects using a wider range of alternative approaches, including public-private partnerships, beyond the design-build approach more commonly used in the United States (see table 2). According to World Bank officials and a World Bank-sponsored report, transit public-private partnerships have been implemented in Australia, Brazil, Canada, France, Hong Kong, Malaysia, the Philippines, South Africa, Thailand, and the United Kingdom. Furthermore, international project sponsors have incorporated private equity investment financing for some of their projects. According to World Bank officials, the United Kingdom and Canada are leading countries for private equity investment in transit, and the United Kingdom has the most experience using different public-private partnership models. International projects also generally require a government subsidy to supplement farebox revenues for construction as well as operations and maintenance. Examples of several projects in the United Kingdom and Canada that we reviewed include the following: The Docklands Light Railway serves a redevelopment area east and southeast of London.
Transport for London, the public sector project sponsor, used three separate design-build-finance-maintain concession agreements to construct system extensions as well as a single franchise to operate trains over the entire system. All three extensions were financed in part or full using private equity investment, and the Lewisham Extension was the United Kingdom’s first transportation public-private partnership for both project delivery and financing. The Croydon Tramlink light rail project was a 99-year design- build-finance-operate-maintain agreement to develop the new system. In this project, payments to the private sector partner during operations were based entirely on ridership revenue, but the project sponsor retained the authority to set fares. The private sector partner faced financial difficulties, and the concession was ultimately bought by Transport for London. The Manchester Metrolink Phase II light rail project was a 17- year concession agreement wherein the private partner had responsibility to design, construct, finance, operate, and maintain this project. The project was designed to expand the Metrolink System in order to connect two of the city’s existing stations. The private partner provided over one-half of the project’s funding for construction. The public sector terminated the concession to further expand the system. The London Underground maintenance projects included agreements entered into between London Underground and two private sector partners to maintain and upgrade the system’s infrastructure, including track, tunnels, trains, and stations. In return, the private sector would receive periodic payments based on its performance. One of the two private sector partners subsequently went bankrupt, and the concession agreement was then taken over by Transport for London. The Nottingham Express Transit light rail project used a 27-year contract to design, build, finance, operate, and maintain a new transit line. 
Payments to the private sector were based on performance and ridership revenue, meaning that the private sector assumed some risk that actual ridership would not reach forecasted levels. Along with this transfer of risk, the private sector was also given the ability to set fares. The project is in the ninth year of its contract. The Canada Line light rail project in the Vancouver area is a 35- year design-build-finance-operate-maintain concession agreement developed to link Vancouver with its international airport and neighboring employment and population centers in anticipation of the 2010 Winter Olympics. A separate entity was created to oversee the project’s development and the private partner provided one-third of the project’s funding, including private equity capital, in exchange for periodic payments based on performance and ridership. FTA’s pilot program is expected to demonstrate potential benefits to using alternative approaches in transit. Project sponsors we interviewed cited a range of potential benefits, such as achieving cost and time savings, as well as potential advantages to the public sector, such as increased financing flexibility (see table 3). DOT outlined some of these same benefits and advantages in its 2007 Report to Congress on transit public- private partnerships and we similarly reported on them in 2008 for highway public-private partnerships. However, as we said then, benefits are not assured and should be evaluated by weighing them against potential costs and trade-offs. Among the benefits from using alternative approaches, project sponsors told us that they may better meet cost and schedule targets as well as achieve cost and time savings by transferring risks to the private sector. 
With transit projects that use alternative approaches, project sponsors can transfer a range of key project risks to the private sector, such as those related to design and its effect on construction that would normally be borne by the project sponsor, so that the private sector is accountable for errors or nonperformance. By transferring these project risks, the project sponsor creates incentives for the private sector to keep the project on schedule and on budget as, for example, the private sector would be responsible for any excess costs incurred from design errors. In addition, when a project sponsor transfers multiple project risks to the private sector, it can potentially reduce the total cost and duration since a single contractor can concurrently perform project activities that would typically be carried out consecutively by multiple contractors under the conventional design-bid-build approach. Project sponsors, stakeholders, and transit experts we interviewed told us that potential cost and time savings can be key incentives for using alternative approaches. For example, FTA reported that Minnesota Metro Transit's Hiawatha Corridor (one of the seven completed New Starts projects that used an alternative approach) was completed 12 months ahead of what the schedule would have been under the conventional design-bid-build approach because the design-build approach allowed design and construction schedules to overlap. Early completion avoided administration costs, saving an estimated $25 million to $38 million. Denver Regional Transportation District and the private sector completed the Transportation Expansion project 22 months ahead of schedule and within budget. In the United Kingdom, the three Docklands Light Railway extensions were built using design-build-finance-maintain approaches and were completed 2 weeks to 2 months ahead of schedule. However, the use of alternative approaches does not guarantee cost and schedule benefits.
For example, the design-build approach used by the South Florida Commuter Rail Upgrades saved 4 to 6 years by completing all upgrades as a single project, but incurred slightly higher costs than estimated for the conventional design-bid-build approach. Project sponsors may be able to benefit from certain efficiencies and service improvements by transferring long-term responsibility of transit operations and maintenance in addition to design and construction to the private sector. DOT’s 2007 Report to Congress on transit public-private partnerships stated that the private sector may be able to add value to transit projects through improved management and innovation in a project’s construction, maintenance, and operation. Project sponsors and stakeholders we interviewed stated that alternative approaches promote the use of performance measures (such as train capacity and frequency) rather than specific design details (such as the type of train). This allows the private sector to potentially generate and apply innovative solutions in the design of the transit system, adding value to the project. For example, because Denver Regional Transportation District’s Transportation Expansion Light Rail project (another of the seven New Starts projects) used a design-build approach, a lessons-learned report following the project’s completion stated that the project sponsor was able to incorporate 198 design modifications identified by the private sector partner during development to improve overall quality of the transit system while remaining on budget. A conventional design-bid-build contract is generally not flexible enough to allow for such design modifications without additional costs because contracts often specify the use of technical or other specifications. When the long-term responsibilities of transit operations and maintenance are transferred, the private sector potentially has a greater incentive to make efficient design decisions. 
This is because the private sector can be held responsible for the condition of a transit project for longer durations than under the conventional design-bid-build approach. Houston Metro officials told us that for an earlier project that used the conventional design-bid-build approach, the project's warranty terms did not hold the construction firm responsible long enough to cover defects such as faulty track and concrete. As a result, Houston Metro had to file claims to remedy these defects. Houston Metro officials stated that they chose to build their North and Southeast Corridor pilot project using a design-build-operate-maintain contract in part to hold the private sector entity responsible for the quality of the project's construction for a longer period of time. A greater private sector role in transit projects can also potentially offer certain advantages to the public sector, including increased financial flexibility and more predictable operations and maintenance funding. For example, Denver Regional Transportation District officials said that they will make payments tied to operations to the private sector over a number of years to, in part, pay for the private sector's partial financing for the East Corridor and Gold Line pilot projects. By using the design-build-finance-operate-maintain approach, Denver may have more financing flexibility by potentially extending the payments 20 years longer than if a bond were used and the private sector were not involved in financing the project. With a longer payment period, project stakeholders told us that the transit agency could conserve funds in the short term to help it construct other new transit projects on time. Additionally, alternative approaches may help ensure more predictable funding for maintenance and operations since these activities can be subject to unpredictable public sector budget cycles under the conventional design-bid-build approach.
Because alternative approaches for transit projects may include operations and maintenance standards in the contract, the private sector may be responsible for funding these activities within the overall contract price. FTA's pilot program is also expected to demonstrate the potential limitations to using alternative approaches in transit, including some of those addressed in DOT's 2007 Report to Congress on transit public-private partnerships (see table 4). One limitation is that some project risks should not be transferred to the private sector. For example, it may be too costly for project sponsors to transfer certain risks, such as ridership and environmental remediation, because the private sector may want to charge an additional premium to take them on. Ridership risk refers to whether the actual number of passengers achieves forecasted levels. According to officials we interviewed, environmental remediation risk refers to whether the cleanup of hazardous materials and other conditions at a project site leads to increased project costs or schedule delays, and can encompass conditions that are identified as well as those that are not identified during surveys of a project site. Past experience in projects demonstrates the difficulty of transferring these risks to the private sector. According to officials we interviewed, ridership risk may be difficult to transfer to the private sector if transit project sponsors are reluctant to forfeit full fare-setting authority. For example, Denver Regional Transportation District chose not to transfer ridership risk for its East Corridor and Gold Line pilot projects given that it wanted to retain the right to set fares in order to keep fares uniform systemwide. Another example is the United Kingdom's Croydon Tramlink project, which transferred ridership risk but not the ability to set fares.
Officials we interviewed stated that the private partner progressively faced financial difficulties due to low ridership revenue, which led to the collapse and ultimate buyback of the partnership by Transport for London. Additionally, if a transit project is built as an extension of an existing system, the private sector partner may not want to operate a single segment of a publicly owned system. According to officials, private investors are reluctant to assume ridership risk of any portion of a system operated by an entity they do not control. These officials said that in many cases, the private sector partner would need the authority to increase or decrease transit fares based on ridership trends and the number of transit users to assume greater ridership risk. However, because raising fares involves political considerations, including equity for low-income transit users, officials told us that most project sponsors retain the right to set fares and are unwilling to forfeit fare-setting control. Some project sponsors that have tried to transfer ridership risk while retaining fare-setting authority have run into difficulties. According to project sponsors and transit experts, the Bay Area Rapid Transit’s Oakland Airport Connector project initially tried but ultimately was unable to transfer ridership risk in part because the private sector concessionaire (under the project’s original iteration) would not have fare-setting authority. This was also the case with the Canada Line, where the agreement was structured to incorporate a limited transfer of ridership risk to the private sector partner. Although the project sponsor wanted to transfer full ridership risk to the concessionaire, it learned that private investors would not finance a deal with full ridership risk transfer due to their inability to control factors that influence ridership such as transit fares. 
As such, the project sponsor decided to transfer limited ridership risk to the private sector by basing 10 percent of its payments to the private sector partner during operations and maintenance on ridership figures. According to project sponsors, this transfer of ridership risk was done to induce the concessionaire to increase ridership by providing quality customer service. Officials we interviewed also stated that environmental remediation risks may be difficult to transfer to the private sector because of the additional premium the private sector charges to address unknown factors. Denver Regional Transportation District originally planned to transfer all environmental remediation risk for its East Corridor and Gold Line pilot projects’ long-term design-build-finance-operate-maintain concession. This caused the private sector to estimate a $25 million charge for taking on this risk, according to Denver Regional Transportation District officials we interviewed. When the project sponsor decided to retain one aspect of the environmental risk related to several unknown remediation elements, the private sector dropped the cost estimate of transferring the remaining environmental risk from $25 million to $9 million. Moreover, as we have previously reported regarding highway public-private partnerships, it may be inefficient and inappropriate for certain risks to be transferred to the private sector due to the costs and risks associated with environmental issues. Permitting requirements and other environmental risks may become too time-consuming and costly for the private sector to address and may best be retained by the public sector given its stewardship role within the government. 
According to officials we interviewed, although the Canada Line's concession agreement transferred all key construction risks (i.e., cost overruns) to the private sector, the public authority retained risks associated with permitting and other environmental risks such as unknown contaminated soils. Further, for one early highway public-private partnership in California, the project sponsor attempted to transfer environmental permitting risk to the private sector. However, the private sector partner spent more than $30 million over a 10-year period and never obtained final approval to proceed with construction. Another potential limitation in transit projects that use alternative approaches is the project sponsor's loss of control and reduced flexibility in transit operations. Because the transit project sponsor enters into a contractual agreement that gives the private partner a greater decision-making role, the project sponsor may lose some control over its ability to modify existing assets or implement plans to accommodate changes over time, such as extensions, service changes, and technology upgrades. For example, in the United Kingdom, the project sponsor for Manchester Metrolink had to break two existing public-private partnership concession agreements to accommodate extensions to its system. Consultants to the Manchester project told us that breaking a concession agreement can be very expensive and can damage the relationship between the project sponsor and the private sector partner. Similarly, to accommodate increased ridership, the project sponsor for Docklands Light Railway decided to build platform expansions. However, the private sector partner was not willing to take on this additional work, requiring the project sponsor to take the extra steps of hiring another party to build the platform extensions and negotiating the handover of the platforms to the private sector partner for maintenance.
Transit projects that use alternative approaches may also introduce transaction costs to the project sponsor through legal, financial, and administrative fees, in addition to higher-priced financing in cases where the transit project is privately financed. According to officials we interviewed, transit public-private partnerships often require the advisory services of attorneys, financial experts, and private consultants to successfully execute the steps necessary to finalize the project's agreement. These additional services and transaction fees represent additional public sector costs that the conventional project delivery approach may not necessarily require. For example, the project sponsor for the London Underground spent the equivalent of $112 million, or approximately 1.1 percent of the concession agreement's total price, to cover legal expenses, financial services, and administrative fees. Officials we interviewed also stated that Denver Regional Transportation District anticipates spending $15 million in advisory fees for its East Corridor and Gold Line pilot projects' request for proposals submittals. In addition to transaction costs, public-private partnerships incur added costs when the private sector provides the financing for the project. The municipal bond market in the United States generally provides public transit agencies a cheaper source of funding because public agencies can borrow at lower interest rates than the private sector. Officials also stated that the effects of the recent economic recession and failed credit markets have stymied the private sector's ability to raise revenues and provide affordable long-term debt for large transit projects due to tight lending conditions.
While we have previously identified FTA’s New Starts grant program— which funds new, large-scale transit projects—as a model for other federal transportation programs because of its use of a rigorous and systematic evaluation process to distinguish among proposed investments, the New Starts project approval process is not entirely compatible with transit projects that use alternative approaches in that the process is sequential and phased with approvals granted separately and at certain decision points. Therefore, the New Starts process serves as a potential barrier because transit projects that use alternative approaches often rely on the concurrent completion of project phases to meet cost and schedule targets and to accrue savings and other potential benefits. Congress recognized New Starts as a potential barrier, as it authorized FTA to establish a Public-Private Partnership Pilot Program in part to identify ways to streamline the process. According to DOT’s 2007 Report to Congress as well as project sponsors, their advisors, and private sector partners, the New Starts project approval process, while appropriate for the type of transit projects that have been developed over several decades, poses particular challenges for project sponsors using alternative approaches for their transit projects. The challenges they raised include (1) delays, (2) additional costs, and (3) the loss of other potential benefits, such as enhanced efficiencies and improved quality. The sequential and phased New Starts project approval process can create schedule delays as project sponsors await federal approval. The amount of time it takes for FTA to determine whether a project can advance can be significant. 
A 2007 study on the New Starts program by Deloitte, commissioned by FTA to review the New Starts process and identify opportunities for streamlining or simplifying the process, found that the New Starts process is perceived by project sponsors as intensive, lengthy, and burdensome. The Deloitte study found that FTA’s prescribed review times of 30 and 120 days for entry into the preliminary engineering and final design phases, respectively, are apparently arbitrary and actual review times are generally longer. In particular, the study found that FTA’s risk-assessment process delayed project development. Consultants to the Dulles Silver Line project sponsor told us that through the New Starts process, FTA has complete control over a project’s schedule, and project sponsors have to put project work on hold while waiting for FTA’s approval to advance into the next project phase. They also told us that construction activities on the Dulles Silver Line could not begin until the approval of a full funding grant agreement—as design and construction activities cannot be completed at the same time—and so some of the time- savings benefits of the design-build approach were lost. For the East Corridor and Gold Line pilot projects, Denver Regional Transportation District officials also told us that since enough design work will be completed during the New Starts preliminary engineering phase to request bids from the private sector, no additional design work is needed during final design and construction of the project. However, Denver officials said that, as required by New Starts, they will again prepare the design documentation for the final design and full funding grant agreement approval phases, potentially contributing to schedule delays. FTA officials told us that the resubmission of the documentation is necessary because the private sector can bid to provide something different than what was agreed upon under preliminary engineering. 
Houston Metro’s private sector partner told us it would like to begin some construction activities on the North and Southeast Corridors, but will not be able to begin until a full funding grant agreement is awarded. As a result, the private sector partner has to delay its work until the funding process is completed. FTA officials responded that they allowed Houston Metro to carry out some construction activities in advance of their receiving a full funding grant agreement. Moreover, Houston Metro officials told us that FTA required them to submit and resubmit entire project documents to FTA multiple times, which led to delays. FTA officials told us the length of time for reviews depends on a number of factors, most importantly the completeness and accuracy of the project sponsor’s submissions, and that project sponsors could help to avoid such delays by improving their submissions. For example, FTA officials stated that Houston Metro’s projects have changed repeatedly, thus requiring multiple submittals. In addition to the costs of delays, the design of the New Starts project approval process—which is closely aligned with the conventional design- bid-build approach—may also contribute to additional project costs borne by the public sector when other alternative approaches are used. Project sponsors and other stakeholders for Denver Regional Transportation District’s East Corridor and Gold Line pilot projects told us that the private sector must maintain its financial commitment to a project for up to several months to allow for FTA, Office of Management and Budget, and congressional review of the full funding grant agreement. 
For example, Denver Regional Transportation District officials anticipate adhering to the sequential and phased New Starts approach for the project in order to accommodate delays from waiting for the reauthorization of the existing transportation bill, the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users, and the awarding of a full funding grant agreement for the project. However, Denver Regional Transportation District officials told us that following this approach will likely increase the cost of the project. FTA officials told us that these additional costs stem from a lack of funding available in a surface transportation authorization period rather than from FTA's New Starts requirements. Additionally, for the Dulles Silver Line, tax-increment financing funding—funding from incremental tax revenue increases generated by new construction or rehabilitation projects around the new transit line—was a major funding source for the project, contributing up to $400 million to the $2.6 billion project. The Dulles Silver Line project consultants told us that the project risked losing the tax-increment financing funding because it took 5 years to receive a full funding grant agreement when the project sponsor originally estimated that it would take 2 to 3 years. FTA officials stated that several factors, including the decision to reexamine a tunnel option, contributed to challenges surrounding the Dulles Silver Line. FTA's New Starts project approval process may also limit other potential benefits, such as enhanced efficiencies and design improvements, when transit projects use alternative approaches.
For example, Denver Regional Transportation District officials told us that the New Starts project approval process requires that specific design details be included and that this requirement can prohibit a project sponsor from instead leaving such design specifications to the private sector, thus possibly limiting the ability to find innovative and cost-effective solutions for the project. When a project sponsor specifies the exact number of vehicles for the project, the private sector partners must incorporate that design detail into their scope, whether or not that exact number of vehicles is really needed. Another project sponsor told us that the New Starts requirements had discouraged it from using an alternative project delivery approach again after what it believed to be a prior successful experience that included enhanced efficiencies and design improvements. A Minnesota Metro Transit official told us that the agency initially wanted to use the design-build approach for its ongoing Central Corridor project based on the success of previously using this approach for the Hiawatha Corridor—a completed New Starts project that received a full funding grant agreement in 2000. However, Minnesota Metro Transit determined that it would have to complete 60 percent of the Central Corridor project’s design to meet FTA’s New Starts requirements for final design. DOT’s 2007 Report to Congress also cited a similar challenge regarding project design requirements, noting that these requirements are not consistent with alternative approaches in which project sponsors look to involve the private sector after, for example, only one-third of the design work is completed. Therefore, Minnesota Metro Transit decided to use the conventional design-bid-build approach to construct the project. 
In commenting on a draft of our report, FTA officials recognized that while additional steps could be taken to facilitate alternative approaches to transit projects, they also believe that other barriers beyond the federal approval process affect the use of these approaches, including barriers beyond the immediate reach of the program such as the reduced availability of private equity capital resulting from the recent economic recession. To address these challenges of the New Starts project approval process for transit projects that use alternative approaches, Congress and FTA have taken steps to streamline New Starts by establishing the Public-Private Partnership Pilot Program. To date, FTA has agreed to provide all three of the pilot program project sponsors with some level of relief, including expediting its risk assessment and providing Letters of No Prejudice earlier than traditionally allowed in the New Starts process to Houston Metro, and granting a waiver from federal performance bonding requirements to the Bay Area Rapid Transit Oakland Airport Connector pilot project, as FTA has also done for non-pilot program projects. FTA has also stated its amenability to waiving its risk assessment—which aims to identify issues that could affect a project’s schedule or cost—and financial reviews, and to concurrently approving the project into the New Starts final design phase while awarding an Early Systems Work Agreement, for Denver Regional Transportation District’s East Corridor and Gold Line pilot projects. However, FTA has yet to grant the three pilot project sponsors any major streamlining modifications of the New Starts project approval process, such as the awarding of concurrent approvals into the New Starts phases, because, according to FTA officials, none of the pilot projects has demonstrated a sufficient transfer of risk or financial investment by the private sector to enable FTA to relax its normal New Starts evaluation requirements for such approvals. 
Thus far, FTA has assessed only the Houston Metro pilot project to determine the extent to which FTA could streamline the New Starts process. In its November 2008 report, FTA determined that it would not relax, modify, or waive its risk assessment and financial capacity reviews prior to advancement into final design because Houston Metro retains risks in a number of critical risk areas, including finance, since there is no equity capital investment by the private sector partner. Houston Metro officials said that they considered transferring more risk to the private sector to meet FTA’s threshold for waiving certain New Starts evaluation requirements but decided against doing so for two reasons: they were concerned that having the private sector assume those risks could increase private sector bids, and they believed they could still achieve some of the benefits of an alternative approach without equity capital investment by the private sector. While it may be too early for FTA to grant major streamlining modifications with the other two pilot projects, FTA still has the ability as part of its pilot program to further experiment with the use of existing tools that could encourage a greater private sector role while continuing to balance the need to protect the public interest. FTA has the ability to use conditional approvals in the New Starts process, such as (1) Letters of Intent, which announce FTA’s intention to issue a full funding grant agreement that would in turn obligate a New Starts project’s full federal share from future available budget authority, subject to the availability of appropriations, provided that a project meets all the terms of a full funding grant agreement, and (2) Early Systems Work Agreements, which obligate only a portion of a New Starts project’s federal share for preliminary project activities, such as land acquisition. 
Over the past 30 years, FTA has made very limited use of these tools, granting only three Letters of Intent and four Early Systems Work Agreements to transit projects. The Deloitte study noted that New Starts project sponsors miss the opportunity to use alternative methods, including design-build and design-build-finance-operate-maintain, because of the lack of early commitment of federal funding for the projects, suggesting that greater use of these tools could be beneficial. However, use of these tools is not without risk. We have previously noted limitations to FTA’s making greater use of these tools; for example, Letters of Intent could be misinterpreted as an obligation of federal funds when they only signal FTA’s intention to obligate future funds. Furthermore, Early Systems Work Agreements require a project to have a record of decision for the environmental review process that must be completed under the National Environmental Policy Act and require the Secretary to find that a full funding grant agreement for the project will be made and that the agreement will promote more-rapid and less-costly completion of the project. Finally, under current statute, both of these tools—Letters of Intent and Early Systems Work Agreements—count against FTA’s available funding for New Starts projects under the current surface transportation authorization. We found that the governments of the United Kingdom and Canada use conditional approvals to help encourage a greater private sector role in transit projects. The United Kingdom’s Department for Transport grants a conditional approval announcing the government’s intent to fund a project before it receives private sector bids, provided that cost, risk transference, and scope do not change. If those conditions are not met, the project loses its government funding. 
This conditional approval occurs after the department reviews projects, in part to address the risk of cost increases, and thus provides a signal of project quality to the private sector that helps maintain a competitive bidding process. Similarly, Transport Canada officials told us that the department makes a formal announcement stating its intent to provide federal funds to a transit project after conducting its initial review of the project and before formally committing funds, which allows project sponsors to move forward with development and engage the private sector. If the agreed-upon cost, schedule, and risk transference are not met, the government withdraws its funding. United Kingdom Department for Transport officials told us that they have experience withdrawing funding when such conditions have not been met. We also found that other U.S. Department of Transportation modal administrations use similar conditional approvals to help encourage greater private sector involvement in projects. The Federal Aviation Administration uses Letters of Intent in its Airport Improvement Program to establish multiyear funding schedules for projects, which officials said allow project sponsors to proceed with greater certainty regarding future federal funding compared to the broader program and also help prevent project stops and starts. The Federal Aviation Administration has granted 90 of these multiyear awards since 1988. The Federal Highway Administration grants early conditional approvals to highway project sponsors seeking Transportation Infrastructure Finance and Innovation Act funds to streamline the process and allow private sector bidders to incorporate these funds into their financial plans without having to individually apply as otherwise required. The Federal Highway Administration has also carried out three pilot programs that have allowed projects to move more efficiently through its grant process by modifying some of its requirements. 
These pilot programs waived certain aspects of the federal-aid highway procurement provisions, such as moving forward with final design prior to a National Environmental Policy Act decision, and allowed federally funded highway projects to use alternative approaches, including design-build. One of these pilot programs is cited by the Federal Highway Administration as having helped pave the way for design-build to become a standard project delivery approach in highway projects. Another pilot program allowed the Federal Highway Administration to waive regulations and policies so project sponsors in two states could contract with the private sector at a much earlier point in the project development cycle than was previously allowed. In addition to not yet granting project sponsors any major streamlining modifications to the New Starts process, FTA does not have an evaluation plan to accurately and reliably assess the pilot program’s results, including the effect of its efforts to streamline the New Starts process for pilot project sponsors. We have previously reported that evaluating the effectiveness of a pilot program requires a sound evaluation plan that incorporates key features, including well-defined, clear, and measurable objectives; measures that are directly linked to the program objectives; criteria for determining pilot program performance; a way to isolate the effects of the pilot program; a data analysis plan for the evaluation design; and a detailed plan to ensure that data collection, entry, and storage are reliable and error-free. Without such an evaluation plan, FTA is limited in its decision making regarding its pilot program, and Congress will be limited in its decision making about the pilot program’s potential broader application. 
FTA officials told us that they have not yet developed an evaluation plan for the pilot program given that the projects are all ongoing, far from completion, and still working through the New Starts project approval process. The alternative approaches we reviewed have protected the public interest in various ways, both to ensure the public receives the best price for a project and to create incentives for the private sector partner so that the project progresses and operates based on agreed-upon objectives. Project sponsors we interviewed have attempted in part to protect the public interest in transit projects that use alternative approaches by ensuring the use of competitive procurement practices. These practices are not unique to alternative approaches and are sometimes used in conventional procurements. Competitive procurement practices are generally required as a condition of federal funding. For example, federal law and regulations generally require federal contracts to be competed unless they fall under specific exceptions to full and open competition. Nevertheless, project sponsors told us that maximizing the use of these competitive procurement practices—such as encouraging multiple bidders to value and price projects—helps to ensure that the public sector receives the best bid when using these partnerships and approaches. European Union countries are required to have multiple bidders for procurements. Procurements with only one bidder are less competitive and can result in less attractive bids. For example, although Bay Area Rapid Transit prequalified three contractors for the first version of its Oakland Airport Connector, two contractors withdrew during the negotiation period due to concerns about the project’s affordability. Bay Area Rapid Transit negotiated with the sole remaining bidder on costs for nearly a year but then let the Request for Proposals expire with no proposals submitted. 
To encourage the participation of multiple bidders, Minnesota Metro Transit’s Hiawatha Corridor and Denver Regional Transportation District’s Transportation Expansion light rail projects offered proposal stipends to private sector entities that submitted formal bids to help defray the costs of developing proposals. However, while serving as an incentive for potential private sector partners, stipends add costs that must be weighed against the benefits they provide. Project sponsors that we interviewed have also encouraged early and sustained interaction with the private sector to test the project’s marketability and to determine whether and in what form private sector participation is advantageous. Such feedback can be obtained through bidder information sessions and from consultants. Project sponsors then issue a request for qualifications to gain more detailed input from the private sector on a project prior to the issuance of a request for proposals (which solicits the formal bids). The request for qualifications can establish a higher threshold of responsibility for private partners compared to traditional procurements, in which a private partner is selected based primarily on bid price. Thus, sustained and iterative interaction between the project sponsor and the private sector can refine the project’s scope and terms and determine how best to include the private sector. For example, all three of FTA’s pilot projects as well as Minnesota Metro Transit’s Hiawatha Corridor project used a request for qualifications to select bidders and solicit the private sector’s review of project details. In addition, Minnesota Metro Transit told us that input from the private sector produced several good ideas that were incorporated into the project, such as a shared risk fund to provide an incentive for the private sector to reduce construction delays. 
Furthermore, the Canada Line project sponsor used a list of essential elements agreed upon by the public agencies funding the project as a basis for negotiating with potential bidders. Project sponsors that we interviewed also seek to protect the public interest in alternative approaches through an emphasis on performance. Performance specifications focus on desired project performance (such as frequency of train arrivals at a station) rather than design details (such as the type of train). Project sponsors and consultants told us that the detailed specifications that have been used in conventional project delivery approaches can restrict what bidders can offer. When specifications are focused on performance, bidders can offer a range of design and technology options as well as follow best practices that meet overall project objectives. According to Denver’s Regional Transportation District, the East Corridor and Gold Line pilot projects initially had a 700-page design specification document for their commuter rail vehicles. After industry review and feedback that the specifications would lead to customized vehicles that would be expensive and difficult to operate and maintain, the project sponsor responded by creating a 15-page performance specifications document for the vehicles. An advisor to the project sponsor noted that the use of performance specifications is more challenging in transit projects than in highways and other sectors given the technology issues and environmental concerns. The advisor also said that projects with a range of technology options must undergo the environmental review process at the highest possible level of design given the effect of different technologies on the environment. In contrast, one project sponsor noted that performance specifications should not be used when, for example, conditions of the facility or surrounding environment are unknown, as unforeseen circumstances could occur that would require more specific design specifications. 
Project sponsors we interviewed have also sought to use performance standards to protect the public interest. These standards are what the private sector partner must meet to be compensated during the project’s construction, operations, and maintenance phases, helping to ensure adequate performance. If the private sector partner does not meet the standards, then it is penalized with no, reduced, or delayed payments, and penalties can escalate if poor performance continues. Standards for construction include delivering a completed project or project element within a set schedule. For example, the Canada Line private sector partner had 400 milestones that it needed to complete and have certified in order to continue to receive timely payments during the project’s construction period. Performance standards for operations and maintenance, also called key performance indicators, cover all aspects of service, including the availability, frequency, and reliability of service and the condition of facilities. For example, the London Underground chose to emphasize key performance indicators in four areas—availability, capability, ambience, and service points—by creating performance targets and tying monthly payments to them based on the private sector partner’s actual performance. Some projects have also incorporated standards linked to increased ridership to provide incentives for the private sector partner to provide good customer service. For example, Nottingham Express Transit bases 20 percent of its payments to the private sector on ridership. Additionally, the draft concession agreement for Denver’s Regional Transportation District East Corridor and Gold Line pilot projects incorporates levels of payment deductions that accelerate when low performance, such as delayed trains and littered or unclean railcars, persists. 
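The payment mechanics described above can be illustrated with a simplified sketch. The base payment, deduction amount, and escalation rate below are entirely hypothetical and are not drawn from any of the agreements we reviewed; the sketch shows only how tying payments to performance standards, with deductions that escalate while poor performance persists, creates a financial incentive for the private sector partner.

```python
# Illustrative sketch (hypothetical figures): a monthly payment mechanism that
# deducts from the concessionaire's payment for each key performance indicator
# (KPI) missed, with deductions that escalate while poor performance persists.

BASE_MONTHLY_PAYMENT = 1_000_000   # hypothetical monthly availability payment, in dollars
DEDUCTION_PER_MISSED_KPI = 25_000  # hypothetical base deduction per missed KPI


def monthly_payment(missed_kpis: int, consecutive_months_below_standard: int) -> float:
    """Return the payment owed after performance deductions.

    Deductions grow 50 percent for each additional consecutive month of
    low performance, mirroring agreements in which penalties accelerate
    when poor performance continues.
    """
    escalation = 1.5 ** max(consecutive_months_below_standard - 1, 0)
    deduction = missed_kpis * DEDUCTION_PER_MISSED_KPI * escalation
    # Payments are reduced, never negative: the floor is zero.
    return max(BASE_MONTHLY_PAYMENT - deduction, 0.0)


# A partner meeting all standards receives the full payment.
print(monthly_payment(0, 0))  # 1000000.0
# Two missed KPIs in the third consecutive substandard month:
# 2 * 25,000 * 1.5**2 = 112,500 deducted.
print(monthly_payment(2, 3))  # 887500.0
```

Under a mechanism like this, sustained low performance rapidly erodes the concessionaire’s revenue, which is consistent with the escalating deductions and ultimate termination rights described in the Denver draft concession agreement.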
If low performance continues over a period, the project sponsor can terminate the concession agreement and rebid the project to another private partner. Project sponsors we interviewed also protect the public interest in transit public-private partnerships and other alternative approaches through the incorporation of private equity capital. When a private sector partner finances a project using equity capital, the private sector uses payments received from the project sponsor to repay its costs plus provide a return on investment. Because the private sector partner borrows to finance its costs—that is, it has equity at risk—it will be unable to meet its financial obligations if it fails to meet performance standards and therefore does not receive the associated milestone payments. This situation can create incentives for the private sector partner to deliver according to the terms of the agreement. At the same time, financial advisors to project sponsors told us that bank lenders protect their investments by ensuring that the private sector properly develops a concession agreement and then delivers on it. The public interest is thus further protected by this integration of responsibilities because the bank lender and concessionaire provide additional project oversight through the monitoring of cost overruns and schedule delays, among other issues. According to the Canada Line private sector partner, it provided 17 percent equity in the project. For the Croydon Tramlink, the private sector partner contributed 30 percent of project costs. In the case of the Canada Line, the private sector partner did not miss any of its 400 payment milestones. To better protect the public interest, project sponsors have also incorporated clauses into project agreements that allow for flexibility under certain circumstances. 
Project sponsors that we interviewed noted the importance of having the ability to periodically revisit agreement terms in long-term concessions to protect the public interest, given that unforeseen circumstances may occur that make the concessionaire unable to meet performance standards. For example, Houston Metro’s North and Southeast Corridor projects’ concession agreement incorporated this flexibility by including an operations and maintenance agreement for the first 5 years after service begins, with an option for renewal. According to a consultant that works on the project, this approach was chosen in part because the project sponsor wanted an option to revisit the contract. Internationally, both of the London Underground’s 30-year maintenance concession agreements are reviewed for scope of work and costs by a public-private partnership arbiter every 7.5 years. Moreover, the concessionaire has the ability to request an extraordinary review by the arbiter if costs rise above a specified threshold due to circumstances outside the private sector partner’s control. Periodically revisiting terms, or using shorter concession periods, can also allow for changes such as system extension. One of the Docklands Light Railway extensions has breakpoints in 2013 and 2020 in its concession agreement that give the project sponsor an option to break and buy back the agreement for a set price. In contrast, in the previously mentioned example of Manchester Metrolink, concessions for phase 2 were terminated by the project sponsor to allow for system expansion in a third phase, which was not procured as a public-private partnership. According to consultants we interviewed, the terminations could have been avoided if the initial concessions had been shorter. Shorter concession periods are thus being used as a means to revisit terms and rebid if desired. 
In addition to clauses that allow project sponsors to revisit concession agreement terms, other clauses that allow for flexibility can also protect the public interest. For example, Denver Regional Transportation District’s draft concession agreement includes clauses specifying both triggers that could lead to default and terms of compensation in case of default, as well as termination provisions that detail the condition of the transit asset at the end of the concession when it is transferred back to the project sponsor. These provisions help to minimize disputes. Other advisors to project sponsors told us that a clause specifying the sharing of “refinancing gains” between the project sponsor and concessionaire could also help to protect the public interest. Refinancing gains refer to savings that occur when the private sector revises its repayment schedule for its equity investment by taking advantage of better financial terms. As we have noted in our report on highway public-private partnerships, the private sector can potentially benefit through gains achieved in refinancing its investments, and these gains can be substantial. The governments of the United Kingdom as well as Victoria and New South Wales, Australia, require that any refinancing gains achieved by private concessionaires generally be shared with the government. Some foreign governments have recognized the importance of protecting the public interest in public-private partnerships through the use of quantitative and qualitative public interest assessments. We have also previously reported that more rigorous, up-front analysis could better secure potential benefits and protect the public interest. The use of quantitative and qualitative public interest tests and tools before entering into transit public-private partnerships can help lay out the expected benefits, costs, and risks of a project. Conversely, not using such tools can potentially allow aspects of the public interest to be overlooked. 
For example, a Value for Money analysis is a tool used to evaluate whether entering into a project as a public-private partnership is the best project delivery option available. Internationally, the United Kingdom and British Columbia, Canada, among others, require a Value for Money analysis for all transportation projects over a certain cost threshold. For example, all transportation projects in the United Kingdom that exceed about $24 million must undergo a Value for Money analysis to receive project funding, while projects in British Columbia must conduct a Value for Money analysis if project costs total more than about $46 million. Domestically, Florida requires a Value for Money analysis for public-private partnerships, one of which was recently conducted on the I-595 Corridor Roadway Improvements Project in Broward County. A Value for Money assessment was also completed for the Bay Area Rapid Transit’s Oakland Airport Connector at the request of FTA. In general, Value for Money evaluations examine total project costs and benefits and are used to determine if a public-private partnership approach is in the public interest for a given project. Value for Money tests are often done by comparing the costs of doing a proposed project as a public-private partnership against an estimate of the costs of procuring that project using a public delivery model. Value for Money tests examine not only the economic value of a project but also other factors that are hard to quantify, such as design quality and functionality, quality in construction, and the value of unquantifiable risks transferred to the private sector. In the United Kingdom, Value for Money analysis includes qualitative factors such as the viability, desirability, and achievability of the project in addition to quantitative factors. Provinces such as Canada’s British Columbia and Australia’s Victoria also include qualitative factors in their financial assessments, including Value for Money analysis. 
Government officials stated that including both quantitative and qualitative factors in financial assessments such as Value for Money analysis provides a more comprehensive project assessment. In addition to determining whether a public-private partnership is advantageous over a publicly delivered project, project sponsors and government officials noted that a Value for Money analysis is also a useful management tool for considering up front all project costs and risks that can occur during a project’s lifetime, which is not always done in a conventional procurement. Project sponsors can also use financial assessments such as Value for Money analysis for other reasons. For example, Value for Money analysis can assist in determining which project delivery approach provides more value: once it is decided that private participation in a project is beneficial, project sponsors can assess whether one public-private partnership option is more advantageous than another. For example, Bay Area Rapid Transit used a Value for Money analysis in the original iteration of the Oakland Airport Connector to assess which alternative project delivery approach (design-build-operate-maintain or design-build-finance-operate-maintain) would be more advantageous. Project sponsors can also use Value for Money analysis, coupled with a sensitivity analysis, to give a range of possible project costs. For example, a sensitivity analysis developed for the Canada Line suggested that project costs could have varied from $47 million more to $270 million less than expected, depending on the level of risk. A further example of how project sponsors can use Value for Money analysis is to enhance communication about a project. Project sponsors noted that since Value for Money analyses are often publicly available, such as in the United Kingdom, they can lead to more-informed discussions and provide transparency in the selection of the project delivery approach. 
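The quantitative core of such a comparison can be sketched in simplified form. All figures below are hypothetical, and actual Value for Money assessments involve detailed risk pricing, discounting over the project’s lifetime, and qualitative factors; the sketch shows only how a risk-adjusted public sector comparator is weighed against a public-private partnership bid.

```python
# Illustrative sketch (all figures hypothetical): a simplified Value for Money
# comparison between a public sector comparator (PSC) and a public-private
# partnership (PPP) bid. Real assessments also weigh qualitative factors such
# as design quality and unquantifiable risk transfer.

def value_for_money(psc_base_cost: float,
                    retained_risk: float,
                    optimism_bias_rate: float,
                    ppp_bid: float) -> float:
    """Return the PSC's risk-adjusted cost minus the PPP bid.

    A positive result suggests the PPP delivers the project at a lower
    risk-adjusted cost; a negative result favors public delivery.
    """
    # The comparator is inflated by the value of risks the public sector
    # would retain and by an optimism-bias uplift reflecting the public
    # sector's history of construction cost overruns.
    risk_adjusted_psc = psc_base_cost * (1 + optimism_bias_rate) + retained_risk
    return risk_adjusted_psc - ppp_bid


# Hypothetical $500 million project: a 15 percent optimism-bias uplift and
# $40 million in retained risk raise the comparator to $615 million, so a
# $590 million PPP bid shows $25 million of value for money.
print(value_for_money(500e6, 40e6, 0.15, 590e6))  # 25000000.0
```

In this hypothetical case the public-private partnership bid shows value for money; with a lower comparator or a higher bid, the same calculation would favor public delivery.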
Thus, they can be good planning and communication tools for decision makers. Government officials and consultants that perform financial assessments, such as Value for Money analysis, cautioned that the assessments are not without limitations. For example, officials and consultants told us that these analyses are inherently subjective and rely on assumptions that can introduce bias. Assessments can include the assumption that the public sector will likely have higher construction costs due to a history of cost overruns. In the United Kingdom, an “optimism bias” of 15 percent is added to the public sector comparator in part to account for this. Consultants noted that there is subjectivity in valuing risks as detailed data on the probability of particular project risks occurring are unavailable. Thus consultants use data from past projects and their own professional views to conduct the analysis. In sum, government officials and consultants noted that Value for Money analysis should be considered as a tool rather than the sole factor in assessing whether to do a public-private partnership. Some countries have further protected the public interest in transit projects that use alternative approaches by establishing quasi-governmental entities to assist project sponsors in implementing these arrangements. Entities such as Partnerships UK, Partnerships Victoria, and Partnerships BC are often fee-for-service and associated with Treasury Departments on the provincial and national levels. These quasi-governmental entities all develop guidance such as standardized contracts and provide technical assistance to support transit projects that use alternative approaches. According to an advisor for project sponsors, contracts for these partnerships and approaches generally follow a standard model, such as a framework for assigned risk between the project sponsor and private sector, with the particularities of local legislation and project specifics written into them. 
The United Kingdom’s standard contract outlines requirements as well as factors to consider from a project’s service commencement through termination and is periodically updated to reflect lessons learned. For example, after the government of the United Kingdom required the private sector to share any refinancing gains with the project sponsor, the standard contract was updated accordingly. Furthermore, the quasi-governmental entities provide technical assistance to support transit projects that use alternative approaches. For example, Partnerships BC assists project sponsors in conducting a Value for Money assessment to determine whether private sector participation in a project is beneficial. In addition to this assistance, these entities provide other varied services to facilitate public-private partnerships across different sectors. For example, Partnerships UK reviews project proposals for the government; Partnerships Victoria offers training for the province; and Partnerships BC advises project sponsors to help develop and close public-private partnership contracts in British Columbia. Quasi-governmental entities can further protect the public interest through the benefits they provide. According to government officials in the United Kingdom and Canada, these entities create a consistent approach to considering public-private partnerships, such as understanding a project’s main risks, which can reduce the time and costs incurred when negotiating a contract. Further, by using standardized contracts developed by these entities, project sponsors can reduce the transaction costs—such as legal, financial, and administrative fees—of implementing transit projects that use alternative approaches. 
Moreover, project sponsors and consultants told us that entities like Partnerships UK and Partnerships BC can foster good public-private partnerships and help further protect the public interest by ensuring consistency in contracts and serving as a repository of institutional knowledge. Without the services provided by these quasi-governmental entities, project sponsors that plan to use or are using alternative approaches for a transit project must develop them on a case-by-case basis because they lack institutional knowledge and a centralized resource for assistance. While DOT has established an office to support project sponsors of highway-related public-private partnerships, DOT does not provide similar support for transit projects. In a previous GAO report, we noted that formal consideration and analysis of public interest issues had been conducted in U.S. highway public-private partnerships, and that DOT had done much to promote the benefits of these partnerships but comparatively little to assist states and localities in weighing their potential costs and trade-offs. Since that report, the Federal Highway Administration’s Office of Innovative Program Delivery has been established to support highway-related public-private partnerships by providing an easy, single point of access for project sponsors and other stakeholders. The office is intended to offer outreach, professional capacity building, technical assistance, and decision-making tools for highway-related public-private partnerships. In addition, FTA officials told us that they have plans to develop an online toolset for employees to help them provide technical assistance to project sponsors on these alternative approaches. 
This assistance is to include checklists to help determine whether a project should use an alternative approach, risk matrices that provide an overview and explanation of risks transferred using such an approach, and a financial feasibility model that can be used to quantitatively compare the use of an alternative approach with the conventional approach to transit projects. Furthermore, in June 2009, the House of Representatives’ Committee on Transportation and Infrastructure’s surface transportation reauthorization blueprint proposed that an Office of Expedited Project Delivery be created within FTA to provide assistance to transit project sponsors much as we have outlined earlier in this report. However, such support is not currently available for project sponsors of transit projects that use alternative approaches. Project sponsors and their advisors noted that because there is little public sector institutional knowledge about public-private partnerships in the United States, projects may be carried out without the benefit of previous experiences. Conducting transit projects that use alternative approaches is even more challenging in the United States given the variation in relevant state laws and local ordinances that project sponsors and other stakeholders must navigate. Furthermore, FTA’s New Starts evaluation requirements for transit projects seeking federal funding do not include an evaluation of whether the public is receiving the best value for its money as compared to other delivery approaches. Thus, project sponsors, advisors, and government officials noted that such an entity in the United States could be valuable in further protecting the public interest in public-private partnerships. FTA distributes billions of dollars of federal funding to transit agencies for the construction of new, large-scale projects; as such, it is critical that the public interest be protected and federal funding spent responsibly. 
Project sponsors are looking to alternative approaches, along with federal funds, to deliver and finance new transit projects. However, because of its sequential and phased structure, FTA’s New Starts program is incompatible with transit projects that use these approaches. Congress recognized this concern when it authorized FTA to establish the Public-Private Partnership Pilot Program to illustrate how New Starts evaluation requirements can be streamlined to better accommodate the use of alternative approaches in transit projects. However, the pilot program has not yet illustrated how this can be done. This is because, on the one hand, FTA has determined that no pilot project has demonstrated enough of a transfer of risk—in particular, a financial investment by the private sector—for FTA to consider granting major modifications to streamline its New Starts evaluation requirements. On the other hand, the potential challenges posed by the New Starts requirements, including delays and additional costs, may discourage the private sector from assuming enhanced financial responsibility in these alternative approaches. Despite this apparent impasse, FTA still has the unique opportunity to take advantage of the fundamental characteristic of a pilot program—flexibility—to gain valuable insight on how to streamline the New Starts process to facilitate a greater private sector role in transit projects through the use of alternative approaches. FTA can introduce additional flexibility into its three pilot projects through, among other things, the use of existing, long-standing tools, such as Letters of Intent and Early Systems Work Agreements. Other agencies within DOT have used such tools successfully in the past to provide flexibility to their funding and approval processes and to advance and promulgate alternative project finance and delivery approaches. 
Moreover, some other countries have used conditional approvals to incorporate more flexibility into their funding processes and help encourage a greater private sector role in transit projects. FTA may want to turn to the experiences of these other modal administrations and governments and use existing, long-standing tools to incorporate more flexibility in the New Starts process to help facilitate transit projects that use alternative approaches. Without an evaluation plan to assess the results of its pilot program, FTA may also lose some valuable information Congress intended the agency to obtain through the pilot program’s establishment, including how the New Starts project approval process can be further streamlined. As more transit projects use alternative approaches, FTA may not be able to readily accommodate these approaches, ultimately disadvantaging transit project sponsors that seek to deliver their projects more quickly and efficiently and at a lesser cost to the public. In the past, DOT has done much to promote the potential benefits of transportation public-private partnerships. While these benefits are not assured and should be evaluated by weighing them against potential costs and trade-offs, DOT has done comparatively little to equip project sponsors to weigh the potential costs and trade-offs. Recently, DOT has taken a more integrated approach to a greater private sector role in transportation, as evidenced by its newly established Office of Innovative Program Delivery for public-private partnerships. Congress has taken a greater interest in facilitating alternative approaches as well. 
Quasi-governmental entities established by foreign governments have better equipped project sponsors to implement alternative approaches, including public-private partnerships, by creating a uniform method of considering the implications of alternative approaches, reducing transaction costs, ensuring consistency in contracts, and serving as a repository of institutional knowledge. FTA could consider these international models and expand its current efforts in transportation public-private partnerships to provide support for a greater private sector role in transit directly to project sponsors. Expanded FTA efforts could facilitate the implementation of transit projects that use alternative approaches and protect the public interest through the use of tools such as standardized contracts, technical assistance, and financial assessments. To facilitate a better understanding of the potential benefits of alternative approaches in FTA’s Public-Private Partnership Pilot Program, if reauthorized, we recommend that the Secretary of Transportation direct the FTA Administrator to take the following actions: Incorporate greater flexibility, as warranted, in the Public-Private Partnership Pilot Program than has occurred to date by making greater use of existing tools such as Letters of Intent and Early Systems Work Agreements in order to streamline the New Starts process. Develop a sound evaluation plan for the Public-Private Partnership Pilot Program to accurately and reliably assess the pilot program’s results that includes key factors such as: well-defined, clear, and measurable objectives; measures that are directly linked to the program objectives; criteria for determining pilot program performance; a way to isolate the effects of the pilot program; a data analysis plan for the evaluation design; and a detailed plan to ensure that data collection, entry, and storage are reliable and error-free. 
Beyond its pilot program, build upon efforts underway in DOT to better equip transit project sponsors in implementing transit projects that use alternative approaches, including developing guidance, providing technical assistance, and sponsoring greater use of financial assessments to consider the potential costs and trade-offs. We provided a draft of this report to DOT and FTA for review and comment. DOT has agreed to consider our recommendations and provided comments through e-mail from FTA officials. In their comments, FTA officials stated that the agency has ongoing and planned efforts as part of its Public-Private Partnership Pilot Program that they believe address the intent of our recommendations. For example, FTA officials noted that the agency has, as we reported, made use of tools such as Letters of Intent and Early Systems Work Agreements in the past in order to streamline the New Starts process, and that it will evaluate the potential for greater use of these existing tools in the future to incorporate greater flexibility into the pilot program. Additionally, FTA officials acknowledged the need for an evaluation plan to assess the pilot program’s results and stated they will be working to develop one. Further, FTA officials stated that FTA is working to develop technical assistance for its staff on how to structure and evaluate alternative approaches to transit projects; we revised our draft report to reflect FTA’s efforts. Because these efforts are either planned or in their early stages, we are retaining our recommendations. Finally, FTA officials provided technical comments, which we incorporated as appropriate. We are sending copies of this report to appropriate congressional committees and DOT. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at flemings@gao.gov or (202) 512-2834. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II. Our work was focused on transit projects that involve greater private sector participation than is typical in conventional projects. In particular, we focused on (1) the role of the private sector in delivering and financing U.S. transit projects compared with other countries; (2) the benefits and limitations of and the barriers, if any, to greater private sector involvement in transit projects and how these barriers are addressed in the Department of Transportation’s (DOT) Public-Private Partnership Pilot Program; and (3) how project sponsors and DOT can protect the public interest in transit projects that use alternative approaches. Our scope was limited to identifying the primary issues associated with using public-private partnerships for transit infrastructure, not conducting a detailed financial analysis of the specific arrangements. In order to clearly delineate alternative delivery and financing approaches used in transit, we first identified three categories—traditional, innovative, and alternative—that describe the evolution of such practices. We defined traditional financing to include federal grants (such as New Starts program grants), state and local public grants, taxes, and municipal bonds, and defined conventional project delivery to refer to the design-bid-build approach. We defined innovative financing to include loan or credit assistance such as the Transportation Infrastructure Finance and Innovation Act, Private Activity Bonds, Tax Increment Financing, State Infrastructure Banks, Grant Anticipation Notes, and Revenue Bonds, and innovative project delivery to refer to the design-build approach. 
Finally, we defined alternative financing to refer to public-private partnerships that involve private equity capital such as concession agreements and defined alternative approaches as ones that transfer greater risk to the private sector, including: design-build, design-build-finance, design-build-operate-maintain, build-operate-maintain, design-build-finance-operate, design-build-finance-operate-maintain, build-operate-own, and build-own-operate, among others. We took several steps and considered various criteria in selecting which domestic transit projects to study as part of our review of alternative financing and project delivery practices. First, we reviewed transit project information from DOT, GAO, the Congressional Research Service, and other reports as well as conducted interviews with DOT officials, project sponsors, industry representatives, and academic experts to identify the potential universe of projects that fit at least one (alternative project delivery or alternative financing) or both of our established definitions. We also selected projects that were either completed or had already carried out substantial planning. The potential universe of projects contained 10 completed projects including: Denver Regional Transportation District Transportation Expansion Light Rail (design-build), South Florida Commuter Rail Upgrades (design-build), Minnesota Metro Transit Hiawatha Corridor Light Rail Transit (design-build), Bay Area Rapid Transit Extension to San Francisco International Airport (design-build), Washington Metropolitan Area Transit Authority Largo Metrorail Extension (design-build), Hudson-Bergen Light Rail Transit Minimum Operating Segment 1 (design-build-operate-maintain), Hudson-Bergen Light Rail Transit Minimum Operating Segment 2 (design-build-operate-maintain), John F. Kennedy Airtrain (design-build-operate-maintain), Portland MAX Airport Extension (design-build), and Las Vegas Monorail (design-build-finance-operate-maintain). 
We also included 3 ongoing transit projects as part of the universe: Bay Area Rapid Transit Oakland Airport Connector (design-build-operate-maintain), Denver Regional Transportation District East Corridor and Gold Line pilot projects (design-build-finance-operate-maintain), and Houston Metro North and Southeast Corridor pilot projects (design-build-operate-maintain). Second, we determined that we would focus solely on projects that have or are expected to go through the Federal Transit Administration’s (FTA) New Starts process given that this is the largest capital grant program for transit projects and that any such projects would be reviewed to protect the public interest (i.e., projects not entirely funded by the private sector). This eliminated the John F. Kennedy Airtrain, Portland MAX Airport Extension, and Las Vegas Monorail projects. Third, we applied three of four criteria from FTA’s Report to Congress to the remaining projects, including (1) project costs were reduced, (2) project duration was shortened, and (3) project quality was maintained or enhanced. This eliminated the South Florida Commuter Rail Upgrades, Hudson-Bergen Light Rail Transit Minimum Operating Segment 1 and Minimum Operating Segment 2, and the Bay Area Rapid Transit Extension to San Francisco International Airport. We decided to select all three of the ongoing pilot projects—Bay Area Rapid Transit Oakland Airport Connector, Denver Regional Transportation District East Corridor and Gold Line, and Houston Metro North and Southeast Corridors—given that FTA views these projects as currently having the most private sector potential and thus designated them as its three Public-Private Partnership Pilot Program projects. 
We also decided, given our limited resources, to select two of the remaining three completed projects—Minnesota Metro Transit Hiawatha Corridor and Denver Regional Transportation District Transportation Expansion—as DOT’s Report to Congress identified these two projects as having successful collaborations with their respective departments of transportation, including their highway counterparts, which have greater experience than transit in using alternative project delivery and alternative financing. This eliminated the Washington Metropolitan Area Transit Authority Largo Metrorail Extension. These projects were selected because they are recent examples of ongoing and completed transit projects in the United States that incorporated greater private sector involvement through the use of alternative project delivery or financing approaches or both. To select which international countries we would include as part of our review of alternative financing and project delivery practices, we conducted a literature review of international transit public-private partnerships as well as conducted interviews with DOT officials, project sponsors, industry representatives, and academic experts to identify the potential universe of countries with significant experience in transit public-private partnerships, including projects that fit at least one (alternative project delivery or alternative financing) or both of our established definitions. Second, we determined that we would collect the most valuable and relevant information from countries that share a similar political and cultural structure to the United States. Third, given our limited resources, we decided to select only two of the three remaining countries. Thus, we ultimately identified Canada and the United Kingdom for our international site visits. 
Issues discussed in the report related to the interpretation of foreign law, including the character of public-private partnership agreements, and their limitations, were evaluated as questions of fact based upon interviews and other supporting documentation. To determine how transit projects that use alternative approaches have been used in the United States, we collected and reviewed descriptions of the projects, copies of the concession or development agreements, planning documents, and documentation related to the financial structure of the projects in addition to academic, corporate, and government reports. We conducted, summarized, and analyzed in-depth interviews with project sponsors and private sector participants about their experiences with alternative financing and procurement in transit projects. We also reviewed pertinent federal legislation and regulations, including: Federal Register Notices and guidance for FTA’s Public-Private Partnership Pilot Program and the New Starts Program; DOT’s Report to Congress on the Costs, Benefits, and Efficiencies of Public-Private Partnerships for Fixed Guideway Capital Projects; and other DOT reports. To identify the potential benefits and potential limitations of transit projects that use alternative approaches, and what barriers project sponsors face in the United States, we conducted, summarized, and analyzed in-depth interviews with domestic project sponsors and private sector participants including private investors, financial and legal advisors, project managers, and contractors. In addition to these domestic experts, we conducted extensive interviews with various international stakeholders, experts, and private sector officials from Canada and the United Kingdom that were knowledgeable in greater private sector participation in the financing and procurement of transit projects. 
We also conducted a literature review; summarized and analyzed key benefits, limitations, and barriers to greater private sector participation; and interviewed FTA and other federal and local officials associated with the projects we selected as well as private sector officials involved with United States transit public-private partnership arrangements. To determine how project sponsors and DOT can protect the public interest in transit projects that use alternative approaches, we conducted site visits of selected transit public-private partnerships and visited the United Kingdom and Canada, both of which have more experience conducting transit public-private partnerships. We conducted, summarized, and analyzed in-depth interviews with project sponsors, private sector participants, international stakeholders, and experts regarding the competitive procurement process, robust concession agreements, and Value for Money analyses, among other topics. We also examined international mechanisms that were implemented for projects including Croydon Tramlink, Docklands Light Railway, London Underground, Manchester Metrolink, and Nottingham Express Transit in the United Kingdom and the Canada Line in Vancouver, Canada, to provide insight on how project sponsors and DOT can protect the public interest in transit projects that use alternative approaches. We also held in-depth interviews with FTA on its steps to protect the public interest in federally funded transit projects with greater private sector participation, including programs like FTA’s Public-Private Partnership Pilot Program and the New Starts Program. We conducted this performance audit from October 2008 through October 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, Steve Cohen, Assistant Director; Jay Cherlow; Patrick Dudley; Carol Henn; Bert Japikse; Joanie Lofgren; Maureen Luna-Long; Amanda K. Miller; Tina Paek; Amy Rosewarne; Tina Won Sherman; and Jim Wozny made key contributions to this report. Equal Employment Opportunity: Pilot Projects Could Help Test Solutions to Long-standing Concerns with the EEO Complaint Process. GAO-09-712. Washington, D.C.: August 12, 2009. Public Transportation: Better Data Needed to Assess Length of New Starts Process, and Options Exist to Expedite Project Development. GAO-09-784. Washington, D.C.: August 6, 2009. Public Transportation: New Starts Program Challenges and Preliminary Observations on Expediting Project Development. GAO-09-763T. Washington, D.C.: June 3, 2009. High Speed Passenger Rail: Future Development Will Depend on Addressing Financial and Other Challenges and Establishing a Clear Federal Role. GAO-09-317. Washington, D.C.: March 19, 2009. Highway Public-Private Partnerships: More Rigorous Up-Front Analysis Could Better Secure Potential Benefits and Protect the Public Interest. GAO-08-1149R. Washington, D.C.: September 8, 2008. Public Transportation: Improvements Are Needed to More Fully Assess Predicted Impacts of New Starts Projects. GAO-08-844. Washington, D.C.: July 25, 2008. Highway Public-Private Partnerships: Securing Potential Benefits and Protecting the Public Interest Could Result from More Rigorous Up-front Analysis. GAO-08-1052T. Washington, D.C.: July 24, 2008. Highway Public-Private Partnerships: More Rigorous Up-front Analysis Could Better Secure Potential Benefits and Protect the Public Interest. GAO-08-44. Washington, D.C.: February 8, 2008. Federal-Aid Highways: Increased Reliance on Contractors Can Pose Oversight Challenges for Federal and State Officials. GAO-08-198. 
Washington, D.C.: January 8, 2008. Railroad Bridges and Tunnels: Federal Role in Providing Safety Oversight and Freight Infrastructure Investment Could Be Better Targeted. GAO-07-770. Washington, D.C.: August 6, 2007. Public Transportation: Future Demand Is Likely for New Starts and Small Starts Programs, but Improvements Needed to the Small Starts Application Process. GAO-07-917. Washington, D.C.: July 27, 2007. Public Transportation: Preliminary Analysis of Changes to and Trends in FTA’s New Starts and Small Starts Programs. GAO-07-812T. Washington, D.C.: May 10, 2007. Public Transportation: New Starts Program Is in a Period of Transition. GAO-06-819. Washington, D.C.: August 30, 2006. Public Transportation: Preliminary Information on FTA’s Implementation of SAFETEA-LU Changes. GAO-06-910T. Washington, D.C.: June 27, 2006. Equal Employment Opportunity: DOD’s EEO Pilot Program Under Way, but Improvements Needed to DOD’s Evaluation Plan. GAO-06-538. Washington, D.C.: May 5, 2006. Highways and Transit: Private Sector Sponsorship of and Investment in Major Projects Has Been Limited. GAO-04-419. Washington, D.C.: March 25, 2004.
As demand for transit and competition for available federal funding increases, transit project sponsors are increasingly looking to alternative approaches, such as public-private partnerships, to deliver and finance new, large-scale public transit projects more quickly and at reduced costs. GAO reviewed (1) the role of the private sector in U.S. public transit projects as compared to international projects; (2) the benefits and limitations of and barriers, if any, to greater private sector involvement in transit projects and how these barriers are addressed in the Department of Transportation's (DOT) pilot program; and (3) how project sponsors and DOT can protect the public interest when these approaches are used. GAO reviewed regulations, studies, and contracts and interviewed U.S., Canadian, and United Kingdom officials (identified by experts in the use of these approaches). In the United States, the private sector role in delivering and financing transit projects through alternative approaches, such as public-private partnerships, has been more limited than in international projects. The private sector role in U.S. projects has focused more on how they are delivered rather than how they are financed, while the private sector role in international projects has focused on both project delivery and financing. Since 2000, seven new large-scale construction projects funded through FTA's Fixed Guideway Capital Investment Program--New Starts program--have been completed using one of two alternative project delivery approaches, and none of these projects included private sector financing. In 2005, Congress authorized FTA to establish a pilot program to demonstrate the advantages and disadvantages of these alternative approaches and how the New Starts Program could better allow for them. 
Alternative approaches can offer potential benefits such as a greater likelihood of completing projects on time and on budget, but also involve limitations such as less project sponsor control over operations. The sequential and phased New Starts process is a barrier because it is incompatible with alternative approaches and thus does not allow for work to be completed concurrently, which can lead to delays and increased costs. Under its pilot program, FTA can grant major streamlining modifications to the New Starts process for up to three project sponsors, but has not yet granted any such modifications because FTA has found that none of the projects has transferred enough risk, in particular financial responsibilities, to the private sector. FTA has the ability within its pilot program to further experiment with the use of long-standing existing tools that could encourage a greater private sector role while continuing to balance the need to protect the public interest. This includes forms of conditional funding approvals used by other DOT agencies and international governments. FTA also lacks an evaluation plan to accurately and reliably assess the pilot program's results, including the effect of its efforts to streamline the New Starts process for pilot project sponsors. Without such a plan, agencies and Congress will be limited in their decision making regarding the pilot program. Transit project sponsors protect the public interest in alternative approaches through, for example, the use of performance standards and financial assessments to evaluate the costs and benefits of proposed approaches. Other governments have established entities to assist project sponsors in protecting the public interest. These entities have better equipped project sponsors to implement alternative approaches by creating a uniform approach to developing project agreements and serving as a repository of institutional knowledge. 
DOT can serve as a valuable resource for transit project sponsors by broadening its current efforts, including providing technical assistance and encouraging the use of additional financial assessments, among other measures.
The Postal Reorganization Act of 1970 (P.L. 91-375) created the United States Postal Service, an independent, self-supporting organization, replacing the former United States Post Office Department. The act charges the Postal Service with binding the nation together through the personal, educational, literary, and business correspondence of the people and providing reliable and efficient mail services to all areas of the country. The Postal Service is intended to be self-supporting from postal operations and is mandated to break even over time. With nearly 800,000 career employees, the Postal Service would rank second in employment among U.S. private sector organizations. It has an extensive infrastructure, consisting of more than 38,000 post offices, branches, and stations; 240,000 delivery routes to over 137 million delivery addresses; a fleet of 215,000 vehicles; and 350 major processing and distribution facilities. The Postal Service is now facing a financial crisis brought about by declining revenues and growing operating expenses and capital needs, including the cost of existing and new investments in information technology (IT). In February 2002, we reported significant declines in the Postal Service’s net income from fiscal year 1995 to fiscal year 2001 and a net loss of $1.68 billion in fiscal year 2001 alone, resulting in part from declining mail volumes and from terrorist incidents. In April 2001, we placed the Postal Service’s transformational efforts and long-term outlook on our High-Risk list, noting that the Postal Service is at growing risk of not being able to continue its mission of providing the current level of universal service throughout the nation while maintaining reasonable rates and remaining largely self-supporting through postal revenues. 
The Postal Service has acknowledged the need for a new business model in light of these events and various trends now shaping the delivery services marketplace, such as consumer interest in new service types and increasing security concerns. Other increases in the cost of doing business, such as the rising costs of retirement and health benefits, heighten the need for action. To conserve cash and limit debt, the Postal Service has continued its freeze on capital spending for most facility projects, and its total budgeted capital outlays have declined in fiscal year 2002 for the third consecutive year, to $2.2 billion. The Postal Service has reported that it plans to respond to these trends by providing customers with added value, improving the efficiency of operations, containing costs, fostering a performance-based culture, and improving its management of enabling functions such as financial management, purchasing, and IT. The Postal Service has established the specific goal of connecting all of its components through IT, to enable it to enhance security, add valuable product features, and manage its operations in real time. The Postal Service accounts for its expenditures in separate expense and capital accounts, in accordance with Generally Accepted Accounting Principles, to which public financial reporting by U.S. corporations must conform. Expenditures categorized as “expense” generally comprise operating costs and are primarily funded through a general operating budget. Expenditures categorized as “capital” are for one-time costs, are project-specific, and are depreciated. The Postal Reorganization Act vested direction of the Postal Service in an eleven-member Board of Governors, nine of whom are appointed by the President. The nine governors appoint the Postmaster General, who is the Chief Executive Officer and who, with the nine governors, appoints the Deputy Postmaster General. 
The Postal Service’s executive vice presidents are the Chief Operating Officer and the Chief Financial Officer. The Postal Service has senior vice presidents for Government Relations and Public Policy, Human Resources, Operations, Office of the Chief Marketing Officer, and Office of the Chief Technology Officer. Figure 2 shows an overview of the Postal Service’s current organizational structure. The Postal Service has come to rely increasingly on IT. In the early 1980s, it used data centers and mainframe computers to support administrative functions such as personnel, accounting, and payroll processing. In the mid-1980s, the Postal Service began to incorporate IT into its core business activities by interconnecting various components of its mail processing system through telecommunications and automation. Today, the organization relies on IT throughout the full range of its operations and management processes to run the machines that process and sort mail, assign mail efficiently to alternative surface and air carriers, support point-of-service terminals, collect and analyze inventory and sales information, process payroll and other accounts payable, and perform other activities. Communication networks also play a vital role in linking together various elements of the Postal Service’s infrastructure and transmitting information to various locations for storage, processing, and analysis. The Postal Service expended approximately $700 million for IT in fiscal year 2002 and plans to spend about $1 billion for IT in fiscal year 2003. The Postal Service currently manages almost 650 IT systems and applications that operate in support of postal functions. It has 24 IT-related projects in development or recently completed, each estimated to cost at least $10 million. The total investment cost estimated for these projects since 1997 is more than $2 billion, ranging from about $10 million to about $404 million per project. (See app.
I for a list of the Postal Service’s IT-related projects currently in progress.) Projects with major IT components in development or implementation phases include the following:

Point of Service ONE—A retail point-of-sale information system that is intended to replace outdated retail terminals at postal retail windows and provide more timely and accurate information.

Associate Office Infrastructure—Expected to support a common information system for retail, delivery, and administrative operations in post offices.

Delivery Operations Information System—Scheduled to replace three current information systems and assist delivery unit supervisors in managing office activities, planning street activities, and managing route inspection and adjustment activities.

Time and Attendance Collection System—Expected to replace five existing time and attendance systems and enable labor resources to be more efficiently allocated by providing supervisors with accurate, real-time labor data by type of work being performed.

Advanced Computing Environment—A major infrastructure modernization initiative that is expected to replace existing workstations and transition applications to a Web-based environment.

Given the challenges the Postal Service currently faces, effective management of its existing and new IT investments is crucial if it is to provide the service expected while remaining self-supporting. However, recent reviews, performed by the Postal Service’s Office of Inspector General (OIG) and by us, have raised some concerns regarding the Service’s investment management. The OIG has identified weaknesses in the management of some investments in recent years.
For example, in September 2001, the OIG reported that projects have been proposed to the Board of Governors for approval without adequate documentation and analyses and that other projects may not achieve anticipated performance and financial results. In March 2001, the OIG’s review of the Delivery Operations Information System found weaknesses in the methods and assumptions that were used to derive figures on estimated savings and return on investment. In September 1999, the OIG found that Point of Service ONE was not achieving the results outlined in its business case. The Postal Service has made enhancements to its investment policies and procedures to address the issues the OIG raised. In September 2000, we identified a number of issues with the management of the Postal Service’s e-commerce program, including inconsistencies in reviewing and approving e-commerce initiatives and deficiencies in the financial data reported. We made several recommendations to the Postal Service that addressed these issues. This program was subsequently scaled back by the Postal Service, as both revenues and customer response fell below expectations. Several individuals and oversight boards are involved in managing IT investments, from reviewing and approving a proposed IT project, through the process of budgeting for it and monitoring it once it is implemented, and evaluating it at its conclusion. These individuals and oversight boards and their roles are described below.

Board of Governors—Eleven-member board that governs the Postal Service; comprises the Postmaster General, the Deputy Postmaster General, and nine Presidential appointees; expected to approve any project with capital and “expense investment” costs of $10 million or more.
Capital Projects Committee (CPC)—Three members of the Board of Governors who are to review proposals for any new project with capital and expense investment costs of $10 million or more and make recommendations to the full Board on whether to approve it.

Postmaster General—Chief Executive Officer of the Postal Service and a member of the Board of Governors; expected to approve or review any project with capital and expense investment costs of $7.5 million or more.

Establish Team—Comprises the Deputy Postmaster General, the Chief Financial Officer, the Chief Operating Officer, the Chief Marketing Officer, the Senior Vice Presidents of Operations and Human Resources, the Controller, and a field vice president; is to set financial and nonfinancial goals for the Postal Service at the start of its annual planning and budgeting process and determine funding for existing and proposed IT projects as part of the budget formulation process.

Capital Investment Committee (CIC)—Comprises the Chief Technology Officer (CTO) and other senior executives; is to review proposals for any project with capital and expense investment costs of $7.5 million or more.

Deploy Team—Comprises several vice presidents; with the Establish Team, is to determine funding for IT projects as part of the Postal Service’s annual planning and budgeting process.

Vice President of Finance (Controller)—Is to review and validate proposals for any project with capital and expense investment costs of $5 million or more.

Capital and Program Evaluation (CAPE)—Group within the Finance Department under the Controller. During the review process for new projects, is expected to validate the assumptions and cost, benefit, and schedule estimates; prepare the Postal Service’s 5-year Capital Investment Plan (CIP); monitor projects with capital and expense investment costs of $5 million or more; and perform cost studies of selected completed projects.
Chief Technology Officer (CTO) Organization—Comprises the Office of the CTO and the Information Technology Department headed by the Chief Information Officer (CIO). The CTO organization assists other functional units in developing business cases for projects that have an IT component. It is also involved in the project concurrence process, where feedback on a project is given to the sponsoring organization by functional areas and relevant field units. The CTO organization is also responsible for developing systems standards and requirements for organizationwide compliance. At the strategic level, the CTO and CIO recommend and present corporatewide IT projects before the Establish Team during the annual capital planning cycle.

CTO Investment Review Board—Three-member board comprising the CTO, CIO, and Manager of IT Value; is to manage the process of selecting projects within the CTO organization, review the performance of all IT projects in development, and conduct detailed reviews of selected IT projects on a monthly basis.

The Postal Service has established a number of capital planning, investment control, and budgeting processes to manage its IT investments. These include processes for (1) developing the investment portfolio, (2) approving major new projects, and (3) controlling and evaluating projects. The Postal Service’s annual capital planning and budgeting cycle begins in January with a process called the CustomerPerfect! management cycle. The Establish Team and the Deploy Team, composed of Postal Service executives, manage this annual organizationwide direction-setting process, led by Operations and aided by the Budget and Financial Analysis (B&FA) and the Capital and Program Evaluation (CAPE) groups within the Finance Department.
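The dollar-threshold review structure described above can be summarized in a short sketch. This is an illustration only, assuming a simplified, cumulative routing of reviews by total capital and expense investment cost; the function name and logic are hypothetical, not the Postal Service's actual workflow.

```python
# Illustrative sketch of the dollar-threshold review routing described
# in the text; names and structure are assumptions, not actual USPS code.

def required_reviews(total_investment_cost_millions: float) -> list[str]:
    """Return the oversight bodies whose review or approval the text
    associates with a project of the given capital-plus-expense cost."""
    reviews = []
    if total_investment_cost_millions >= 5.0:
        # Controller reviews and validates; CAPE validates assumptions
        # and monitors the project.
        reviews += ["Vice President of Finance (Controller)",
                    "Capital and Program Evaluation (CAPE)"]
    if total_investment_cost_millions >= 7.5:
        # CIC review; Postmaster General approval or review.
        reviews += ["Capital Investment Committee (CIC)",
                    "Postmaster General"]
    if total_investment_cost_millions >= 10.0:
        # CPC recommendation and full Board of Governors approval.
        reviews += ["Capital Projects Committee (CPC)",
                    "Board of Governors"]
    return reviews
```

Under this simplification, a $12 million project would pass through all six bodies, while an $8 million project would stop at the Postmaster General.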
The Establish Team is expected to align the organization’s targets and goals with its commitment to listen to the three “voices” that represent aspects of its mission: the Voice of the Business (financial benefits), the Voice of the Customer (customer satisfaction), and the Voice of the Employee (employee satisfaction). The Establish Team is to review project and program funding requests and make preliminary selection and funding decisions on the basis of how the requests fit the organization’s mission and budget. This process sets the Postal Service’s financial and nonfinancial goals for the year. Figure 3 provides detail on the Postal Service’s capital planning and budgeting cycle. The process for approving major new IT projects is the same as for any other new projects with capital costs of $5 million or more. These major projects are to proceed through the formal approval process and are monitored by the Finance Department in conjunction with the program sponsors when they are in development and implementation phases. The process for approving proposed capital investments is defined in the Postal Service’s F-66 manual. The process begins with the sponsoring unit preparing a Decision Analysis Report (DAR), which presents the business case for the proposed project. Figure 4 provides detail on the process for approving major new projects. During a capital project’s life cycle, control and evaluation are accomplished through two processes. Project sponsors are to produce quarterly compliance reports that summarize the project’s status. These reports are to be used by CAPE, along with other financial information, to produce the quarterly Investment Highlights that are distributed to the Board of Governors and others to present the status of Board-approved projects. This project oversight process continues for 18 months beyond a project’s initial implementation. 
The Program Performance Group, part of CAPE, studies selected projects that are still in development to determine whether they remain on track to achieve cost goals. The Program Performance Group may also conduct cost studies, after implementation, to determine whether cost goals have been met. Changes in scope, schedule, or total capital funding needed for a project trigger the requirement for a modified DAR, which must be reviewed and approved through the same process as the original DAR. At the operational level, the CTO organization’s project managers and portfolio managers conduct the day-to-day oversight of IT projects, including those sponsored outside of the CTO organization, by tracking performance of IT projects in the Program Tracking and Reporting System (PTRS) and reporting project status every month to the CTO Investment Review Board. When problems are identified, they are addressed through interaction with the sponsoring organization, which may choose to bring the issue to senior executives if the problem is likely to affect their ability to meet their objectives. IT investments that are not funded by capital funds are controlled and evaluated through the annual budget process. Executive-level oversight is performed through annual reviews of program descriptions called “program narratives,” which provide input to the budget decision. At the operational level, ongoing oversight is performed through routine tracking of system operation. Figure 5 shows the Postal Service’s project control and evaluation process. Based on research into the IT investment management practices of leading private- and public-sector organizations, we have developed an information technology investment management maturity (ITIM) framework. This framework identifies critical processes for successful IT investment organized into a framework of five increasingly mature stages. 
The ITIM is intended to be used both as a management tool for implementing these processes incrementally and as an evaluation tool for determining an organization’s current level of maturity. The overriding purpose of the framework is to encourage investment processes that increase business value and mission performance, reduce risk, and increase accountability and transparency in the decision process. This framework has been used in several GAO evaluations and has been adopted by a number of agencies. These agencies have used ITIM for purposes ranging from self-assessment to redesign of their IT investment management processes. ITIM is a hierarchical model comprising five “maturity stages.” These maturity stages represent steps toward achieving stable and mature processes for managing IT investments. Each stage builds upon the lower stages; the successful achievement of each stage leads to improvement in the organization’s ability to manage its investments. With the exception of the first stage, each maturity stage is composed of “critical processes” that must be implemented and institutionalized for the organization to achieve that stage. These critical processes are further broken down into key practices that describe the types of activities an organization should be performing to successfully implement each critical process. An organization may be performing key practices from more than one maturity stage at one time. This is not unusual, but efforts to improve investment management capabilities should focus on becoming compliant with lower stage practices before addressing higher stage practices. Stage two in the ITIM framework encompasses building a sound investment management process—by developing the capability to control projects so they finish predictably within established cost and schedule expectations—and establishing basic capabilities for selecting new IT projects.
Stage three requires that an organization continually assess proposed and ongoing projects as parts of a complete investment portfolio: an integrated and competing set of investment options. This approach enables the organization to consider the relative cost, benefit, and risk of newly proposed investments along with those previously funded and to identify the optimal mix of IT investments to meet its mission, strategies, and goals. Stages four and five require the use of evaluation techniques to continuously improve both the investment portfolio and investment processes to better achieve strategic outcomes. Figure 6 shows the five maturity stages and the associated critical processes. As defined by the model, each critical process consists of “core elements” that indicate whether the implementation and institutionalization of a process can be effective and repeated. Key practices must be executed to fulfill the core elements and implement the critical process. The core elements are as follows:

Organizational commitments—Actions taken by management to ensure that the critical process is established and will endure. Key practices typically involve establishing organizational policies and engaging the sponsorship of senior management.

Prerequisites—Conditions that must exist within an organization to enable it to successfully implement a critical process. Key practices typically involve allocating resources, establishing organizational structures, and providing training.

Activities—Actions that must be taken to implement a critical process. An activity occurs over time and has recognizable results. Key practices typically involve establishing procedures, performing and tracking work, and taking corrective actions as necessary.

The objective of our review was to assess the Postal Service’s capabilities for effectively managing its IT investments.
To determine these capabilities and the organization’s level of maturity in managing its IT investments, we applied our ITIM framework and the associated assessment method. As a part of the ITIM assessment method, we obtained documentary and testimonial evidence and observed demonstrations of several internal systems showing the organization’s execution of various key practices. We evaluated the Postal Service against 14 critical processes in maturity stages two, three, four, and five. We did not evaluate the Postal Service on key practices for one critical process in stage three—Authority Alignment of IT Investment Boards—because major IT capital investments are managed by the same oversight entities, and we determined that this critical process was not applicable. To determine whether the Postal Service had implemented the 14 critical processes we assessed, we first reviewed documentation relating to the organization’s IT investment management practices, including written policies, procedures, and guidance that it had developed, and other forms of documentation that provided evidence that these practices had been executed. Documents included the Postal Service’s F-66 manual, Investment Highlights reports, executive memoranda, program narratives required for the annual budget formulation, DARs, performance indicators, and the minutes from meetings of the CIC, the CPC, and the Board of Governors. We also reviewed a variety of administrative and system documents from the CTO organization, including evidence of its formulation process for IT investment proposals and its oversight process for IT investments. We interviewed a number of senior officials, including the Chief Financial Officer (CFO), the CTO, and the CIO. Within the Office of the CFO, we also spoke with the Manager of Capital and Program Evaluation and the Manager of Corporate Budget. 
Within the Office of the CTO, we interviewed the Manager of IT Value and a representative from the Enterprise Architecture Office. We also spoke with senior officials from the functional units, such as the Manager of Logistics Systems, the Manager for Human Resources Technology Management, and the Manager of Customer Service Operations. As part of the analysis, we selected four projects, representing a range of functional units, stages of development, and sizes, and examined them to determine the extent to which the Postal Service’s policies and procedures for IT investment management were being implemented. The projects we selected for review were (1) Enhanced Security Capability, (2) Organization Structure, Staffing and Management, (3) Point of Service ONE, and (4) Surface-Air Management System. Appendix II contains additional information on each of the projects we reviewed. To perform the project reviews, we reviewed project management documentation such as DARs, project management plans, and PTRS reports. To clarify information in these documents and gain further insight, we also interviewed managers in the sponsoring functional units, project managers, and the members of the project management teams. The teams included staff who had been assigned responsibility for project oversight within the Office of the Chief Technology Officer. We compared the evidence we collected through document reviews and interviews to the detailed requirements specified in ITIM for each key practice and critical process. In accordance with the ITIM assessment method, we considered a key practice to have been “executed” when we determined, by team consensus, that sufficient evidence existed to confirm that the Postal Service was executing the practice in accordance with stated ITIM criteria.
When we determined that there were significant weaknesses in the Postal Service’s execution of a practice or found insufficient evidence of its execution, we concluded that the practice was not executed. Once the key practices were assessed, we determined which of the 14 critical processes had been implemented. A critical process was determined to be “implemented” when all related key practices were designated as executed. Otherwise, according to the ITIM assessment method, the critical process would not be considered to have been implemented. We conducted our work at the Postal Service’s headquarters offices in Washington, D.C., from October 2001 through July 2002, in accordance with generally accepted government auditing standards. At the stage two level of maturity in the IT investment management framework, an organization has attained repeatable, basic selection and control processes and successful IT investment control processes at the project level. In other words, the organization can select projects that meet established selection criteria and can identify expectation gaps early and take appropriate steps to address them. According to ITIM, critical processes at this stage include (1) defining investment review board operations, (2) developing processes to determine the progress of individual IT projects, (3) creating an inventory of IT investments, (4) identifying IT project and systems business needs, and (5) developing a basic process for selecting new IT proposals. Table 1 shows the purpose of each critical process in stage two. The Postal Service is executing nearly 90 percent of the key practices associated with stage two critical processes. Specifically, the Postal Service is carrying out all of the key practices associated with selecting proposals that meet established criteria, aligning IT projects with the organization’s business needs, and maintaining information on IT projects and systems in an inventory. 
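The assessment rule described above can be expressed as a simple roll-up: a critical process counts as "implemented" only when every associated key practice is rated "executed," and a stage rating is driven by the share of key practices in place. The sketch below is a minimal illustration of that roll-up; the data and function names are hypothetical, not the actual ratings.

```python
# Sketch of the ITIM roll-up rule described in the text: a key practice
# is rated "executed" (True) or not; a critical process is "implemented"
# only when ALL of its key practices are executed. Data is illustrative.

def process_implemented(key_practice_ratings: list[bool]) -> bool:
    """A critical process is implemented only if every associated
    key practice was rated 'executed'."""
    return all(key_practice_ratings)

def percent_executed(processes: dict[str, list[bool]]) -> float:
    """Share of all key practices, across processes, rated executed."""
    ratings = [r for practices in processes.values() for r in practices]
    return 100.0 * sum(ratings) / len(ratings)
```

Note the asymmetry this rule creates: a process with five of six practices executed still counts as not implemented, which is why an organization can execute nearly 90 percent of its key practices while leaving several critical processes unimplemented.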
The Postal Service has yet to execute a few key practices associated with establishing an IT investment management foundation. For example, the Postal Service does not have guidance defining the overall framework for its IT investment management process, and policies and procedures for project oversight are not documented. When the Postal Service implements the remaining critical processes associated with stage two, it will acquire the additional key controls needed to fully implement basic control processes. For example, with an investment management process guide, the Postal Service will gain assurance that IT investment activities will be performed in a consistent and cost-effective manner. Table 2 summarizes the status of the Postal Service’s critical processes for stage two, showing how many associated key practices it has executed. The following discussion provides information on steps the Postal Service has taken to implement each of these critical processes. The creation of decision-making bodies or boards is central to the IT investment management process. At the stage two level of maturity, organizations define one or more boards, provide resources to support their operations, and appoint members who have expertise in both operational and technical aspects of proposed investments. Resources provided to support the operations of IT investment boards typically include top management’s participation in creating the board(s) and defining their scope and formal evidence acknowledging management’s support for board decisions. The boards operate according to a written IT investment process guide tailored to the organization’s unique characteristics, thus ensuring that consistent and effective management practices are implemented across the organization. Once board members are selected, the organization ensures that they are knowledgeable about policies and procedures for managing investments.
Organizations at the stage two level of maturity also take steps to ensure that executives and line managers support and carry out the decisions of the IT investment board. According to ITIM, an IT investment management process guide should be a key authoritative document that the organization uses to initiate and manage IT investment processes and should provide a comprehensive foundation for policies and procedures developed for all other related processes. The Postal Service has executed four of the six key practices for this critical process by establishing investment boards; providing adequate resources for related activities; appointing experienced senior-level executives to the boards; and implementing policies, procedures, and processes to ensure that executives and line managers support and carry out decisions made by the boards. However, the Postal Service has yet to develop a written, organization-specific process guide to direct the operations of the investment boards. While the F-66 manual provides general guidance on the organization’s investment management process, it does not constitute an IT investment process guide because it does not sufficiently define the investment process. Specifically, the manual does not include information on the roles of the Establish Team and the CTO Investment Review Board. In addition, it does not provide detail on the processes followed by other boards involved in the investment management process (e.g., the CIC and CPC). Finally, the manual does not identify the manner in which investment boards’ processes are to be coordinated with other key organizational plans and processes (such as the budget formulation process). Without an investment management process guide, the Postal Service lacks the assurance that IT investment activities will be coordinated and performed in a consistent and cost-effective manner.
Table 3 shows the rating for each key practice required to implement the critical process for establishing IT investment board operation at the stage two level of maturity. Each of the “executed” ratings shown below represents an instance where, based on the evidence provided by Postal Service officials, we concluded that a specific key practice was currently being executed by the organization. Investment boards should effectively oversee IT projects throughout all life-cycle phases (concept, design, testing, implementation, and operations/maintenance). At the stage two level of maturity, investment boards should review each project’s progress toward predefined cost and schedule expectations, using established criteria and performance measures, and should take corrective actions to address cost and milestone variances. According to ITIM, effective project oversight requires, among other things, (1) having written policies and procedures for project management; (2) developing and maintaining an approved management plan for each IT project; (3) making up-to-date cost and schedule data for each project available to the oversight boards; (4) reviewing each project’s performance by regularly comparing actual cost and schedule data with expectations; (5) ensuring that corrective actions for each under-performing project are documented, agreed to, implemented, and tracked until the desired outcome is achieved; and (6) having written policies and procedures for oversight of IT projects. The Postal Service has executed most of the key practices in the area of project oversight.
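The comparison of actual to expected cost and schedule that oversight boards are to perform can be illustrated with a short sketch. This is a simplified, hypothetical example of variance-based flagging for special review; the tolerance values and field names are assumptions, not Postal Service criteria.

```python
# Illustrative sketch of the oversight comparison described above:
# actual cost and schedule data are compared with expectations, and a
# variance beyond tolerance flags the project for corrective action.
# Thresholds and names are hypothetical, not Postal Service criteria.

def needs_special_review(expected_cost, actual_cost,
                         expected_months, actual_months,
                         cost_tolerance=0.10, schedule_tolerance=0.10):
    """Flag a project whose cost or schedule overrun exceeds tolerance."""
    cost_variance = (actual_cost - expected_cost) / expected_cost
    schedule_variance = (actual_months - expected_months) / expected_months
    return cost_variance > cost_tolerance or schedule_variance > schedule_tolerance
```

A design point worth noting: the baseline matters. Comparing against an annual budget (as the boards do) and comparing against the original DAR estimate (as ITIM expects) can yield different answers for the same project.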
For example, the Postal Service has developed several policies and procedures for project management, including the Program Management Process Guidelines, which are high-level project management guidelines used for all projects; the more detailed Software Process Standards and Procedures used by the Postal Service’s business solution centers to develop and maintain systems; and the recently issued Integrated Solutions Methodology, which provides a process for managing a system’s development throughout the life-cycle phases. In addition, IT projects have an approved, up-to-date project management plan, in accordance with project management guidelines. Data on a project’s actual cost and schedule are provided to the CTO Investment Review Board, which is responsible for overseeing the performance of IT projects, and to other oversight groups as appropriate. Actual cost and schedule data for the four projects we reviewed were provided to (1) the CTO Investment Review Board in the form of PTRS reports, (2) the Board of Governors through quarterly Investment Highlights reports featuring capital expenditures and schedule data, and (3) field and headquarters offices through accounting and management reports featuring data on projects’ actual capital and expense costs. Finally, the CTO Investment Review Board regularly oversees the performance of projects by comparing actual cost and schedule data to expectations and performs special reviews of projects that do not meet expectations. When these reviews are performed, corrective actions are defined, documented, agreed to by the program manager and the CTO Investment Review Board, and tracked until the desired outcome is achieved. According to the IT program manager for Organization Structure, Staffing and Management (OSS&M), special meetings were held for this project to address schedule performance issues.
Also, officials from the CTO organization stated that the office generates reports listing projects that are not meeting cost, schedule, or customer satisfaction expectations and brings them to management’s attention so that “special reviews” can be performed. These reports identify the manager and group responsible for the project and provide a summary of the problem, the status of its resolution, and a target date for resolving it. The CTO Investment Review Board tracks action items to resolve the problem until they are completed. Notwithstanding these strengths, the Postal Service has a few weaknesses in its oversight of IT projects. First, while the Postal Service has written policies and procedures addressing how the CTO Investment Review Board is to oversee IT investments, it does not have any that sufficiently define the Establish Team’s role in the oversight process. The F-66 manual, for example, notes that senior management is to continually review the performance of capital projects and discusses some mechanisms that could be used for this purpose (e.g., compliance reports). However, it does not provide specifics on the role of the Establish Team or define processes for oversight of projects beyond the initial deployment phase. Without adequate policies and procedures, project oversight may not be performed consistently. In addition, without these policies and procedures, the Postal Service lacks the transparency that is helpful in both communicating and demonstrating how project oversight is performed. Second, the Postal Service’s investment boards do not adequately oversee project performance by comparing actual cost data to expectations. Specifically, while the Establish Team and CTO Investment Review Board each compare actual cost data to annual budget expectations, the Postal Service could not demonstrate that these boards compared the data to original expectations established in the DAR.
In addition, while the Investment Highlights used by executives to monitor project performance contains schedule information, it does not contain complete information on actual project costs because it does not report operating expenses. Without comparisons of complete actual cost data to original expectations, Postal Service executives may not be able to easily determine whether the projects they have selected are progressing as planned or whether corrective actions are needed. Table 4 shows the rating for each key practice required to implement the critical process for project oversight at the stage two level of maturity and summarizes the evidence that supports these ratings. To make good management decisions, an organization must know how funds are being expended toward acquiring, maintaining, and deploying its IT investments. Implementing this critical process requires an organization to identify all projects and systems within the organization and create one or more repositories or inventories of information about them. This information is required to track the organization’s IT resources to provide a basis for analyses showing major cost and management factors and trends. An IT project and systems inventory can take many forms and does not have to be centrally located or consolidated. The guiding principles for developing the inventory are that the information maintained should be accessible where it is of the most value to investment decision makers and relevant to the management processes and decisions that are being made. According to ITIM, organizations at the stage two level of maturity provide adequate resources for tracking IT projects and systems, designate responsibility for managing the project and system identification process, and develop related written policies and procedures. 
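The inventory requirements just described, maintaining project and system information, recording changes, and retaining historical records, can be sketched with a minimal data model. This is an illustration only; the class and field names are assumptions and do not represent the Postal Service's EIR or PTRS.

```python
# Illustrative sketch of a minimal IT project/system inventory record
# with a change history, as described above. Structure and field names
# are assumptions for illustration, not the actual EIR or PTRS schema.

from dataclasses import dataclass, field

@dataclass
class InventoryRecord:
    name: str
    attributes: dict          # e.g., cost, schedule, benefit, risk data
    history: list = field(default_factory=list)  # prior attribute sets

    def update(self, new_attributes: dict) -> None:
        """Record a change, retaining the prior state as history."""
        self.history.append(dict(self.attributes))
        self.attributes = dict(new_attributes)
```

The guiding principle is that each change is appended rather than overwritten, so decision makers can see how information on an investment has changed over time.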
Resources required for this purpose typically include managerial attention to the process; staff; supporting tools; an inventory database; inventory reporting, updating, and query tools; and a method for communicating inventory changes to affected parties. Stage two organizations develop and maintain information on their IT projects and systems in one or more inventories according to written procedures, recording changes in data as required and maintaining historical records. Access to this information is provided on demand to decision makers and other affected parties. The Postal Service has executed all of the key practices for this critical process. The Service has established a number of repositories of information on its IT projects and systems, including the Enterprise Information Repository (EIR) and automated systems such as PTRS, which track the actual cost, schedule, benefit, and risk associated with the Postal Service’s IT programs and projects. Members of the Postal Service’s investment boards have access to the systems used to maintain information on the organization’s IT programs and projects. The information in these databases is kept current in part because the databases also serve other purposes. For example, project managers input up-to-date systems and project status information to PTRS, and the Corporate Planning System (CPS) and PTRS are updated automatically as financial transactions are processed. Finally, the Postal Service retains records showing changes in the information maintained on each IT investment over time and provides these records to its investment boards. Table 5 shows the rating for each key practice required to implement the critical process for IT project and system identification at the stage two level of maturity and summarizes the evidence that supports these ratings. Defining business needs for IT projects helps ensure that projects support the organization’s mission goals and meet users’ needs.
This critical process creates the link between the organization’s business objectives and its IT management strategy. According to ITIM, effectively identifying business needs requires, among other things, (1) developing policies and procedures for identifying business needs and associated users for IT projects, (2) defining the organization’s business needs or stated mission goals, (3) defining business needs for projects, and (4) identifying users for projects who will participate in the project’s development and implementation. The Postal Service has executed all of the key practices for this critical process. The Service’s business needs are defined in a number of documents, including the organization’s strategic plan and recent Transformation Plan. Business needs and project users are being identified and defined in accordance with policies and procedures, and users are involved in project management throughout a project’s life cycle. For example, the project management team for Point of Service ONE conducts interviews to ensure that the system is providing the information needed for decision making, and staff working in field locations tested Point of Service ONE software to provide input on modifications required to support their needs. The business needs and associated users of the four projects we reviewed were clearly identified and defined in the DARs used to obtain project approval and in other project justification documentation. In addition, users of these projects were involved in project development activities through direct collaboration with CTO staff, user groups, and/or change control groups. Because the Postal Service is executing all the key practices associated with identifying business needs, it has increased confidence that its IT projects will meet both business needs and users’ needs. 
Table 6 shows the rating for each key practice required to implement the critical process for business needs identification at the stage two level of maturity and summarizes the evidence that supports these ratings. As a basic step in the direction of implementing mature stage two processes, an organization must develop a sound process for selecting IT proposals and projects. Once adequate resources are provided and an official is designated with responsibility for selecting proposals, stage two organizations establish a structured selection process. Resources required for selecting proposals typically include managerial time and attention, staff, and supporting tools and methodologies. Executives analyze and prioritize the proposals and make related funding decisions according to an established, structured process. The Postal Service has executed all of the key practices pertaining to selecting IT proposals: executives and managers follow established selection processes, the CFO has been designated with responsibility for the organization’s budget formulation process, adequate resources are being provided to support related activities, a structured process is in place for developing new IT proposals, and executives analyze and prioritize the proposals according to established selection criteria. Postal Service executives and managers follow established processes for selecting IT investments. Specifically, functional units, the Finance Department’s CAPE group, the Establish Team, and the organization’s enterprise-level investment boards all follow established processes for proposing, prioritizing, and selecting IT investments. Officials reported that the Establish Team operates in accordance with established management cycle processes supported by the organization’s CPS and that these processes, although not documented, are generally understood by members of the team. 
Finally, the CTO organization has developed selection criteria for that unit’s proposed IT investments that are incorporated in its new Business Case System (BCS). Table 7 shows the rating for each key practice required to implement the critical process for proposal selection at the stage two level of maturity and summarizes the evidence that supports these ratings. An IT investment portfolio is an integrated, enterprisewide collection of investments that are assessed and managed collectively based on common criteria. Managing investments within the context of such a portfolio is a conscious, continuous, and proactive approach to expending limited resources on an organization’s competing initiatives in light of the relative benefits expected from these investments. Taking an enterprisewide perspective enables an organization to consider its investments comprehensively so that the collective investments optimally address its mission, strategic goals, and objectives. This portfolio approach also allows an organization to determine priorities and make decisions about which projects to fund based on analyses of the relative organizational value and risks of all projects, including projects that are proposed, under development, and in operation. According to ITIM, critical processes performed by organizations at the stage three level of maturity include (1) defining portfolio selection criteria, (2) engaging in project-level investment analysis, (3) developing a complete portfolio based on the investment analysis, and (4) maintaining oversight over the investment performance of the portfolio. In addition, organizations with more than one board that selects IT projects for funding must align the authority of their IT investment boards. 
Although authority alignment is a critical process for the stage three level of maturity, we did not assess it in this study, because the Postal Service has a single set of organizationwide investment processes that apply to IT investments. Table 8 shows the purpose of each critical process in stage three. The Postal Service has executed many of the key practices associated with stage three critical processes. For example, the organization’s portfolio selection criteria are distributed throughout the organization, and they are reviewed and modified as appropriate. In addition, executives examine the mix of proposals and investments across portfolio categories in making funding selections. However, many key practices still need to be executed before the Postal Service can effectively manage its IT investments from a portfolio perspective. For example, the Postal Service has not defined the policies and procedures for any of the stage three critical processes. In addition, the Service has not developed portfolio selection criteria that adequately address cost, benefit, schedule, and risk. Until the Service fully implements critical processes associated with managing investments as a complete portfolio, it will not have ready access to the data needed to make informed decisions about competing investments. Table 9 summarizes the status of the Postal Service’s stage three critical processes, showing how many associated key practices it has executed. The following discussion provides information on the steps the Postal Service has taken toward implementing each of the critical processes. To manage IT investments effectively, an organization needs to establish rules or “selection criteria” for determining how to allocate scarce funding to existing and proposed investments. Thus, the process of developing an IT investment portfolio necessarily involves defining appropriate cost, benefit, schedule, and risk criteria for evaluating individual proposals for investments. 
To ensure that the organization’s strategic goals, objectives, and mission will be satisfied by the investments, the criteria should have an enterprisewide focus that reflects these strategic goals. Further, if an organization’s mission or business needs and strategies change, criteria for selecting investments should be reexamined at the portfolio level. Portfolio selection criteria should be disseminated throughout the organization to ensure that decisions concerning investments are made in a consistent manner and that this critical process is institutionalized. To achieve this result, project managers, organizational planners, and other decision makers should receive information on the organization’s selection criteria and address the criteria in IT proposals and business cases, project oversight activities, and strategic and business planning processes. Resources required for this critical process typically include the time and attention of executives involved in the process, adequate staff, and supporting tools. The Postal Service has executed four of the six key practices for this critical process. First, adequate resources are available to conduct portfolio selection criteria definition activities. Second, several working groups, including the Establish Team, are tasked with creating and modifying portfolio selection criteria. Third, portfolio selection criteria in the form of performance indicators and targets and program narratives that are required for budget formulation are distributed throughout the organization. Fourth, the Establish Team performs periodic reviews of the portfolio selection criteria and, in doing so, considers the organization’s current strategic goals and objectives, changing the criteria from year to year as required by current circumstances and priorities. Nonetheless, the Postal Service has yet to develop written guidance establishing procedures to be followed in creating, modifying, and using criteria for selecting a portfolio. 
Postal Service officials use annual performance plans, performance indicators and targets, and program narrative requirements as portfolio selection criteria. While these criteria are based on the Postal Service’s mission, goals, strategies, and priorities, they are not adequate because they do not address cost, benefit, schedule, and risk considerations in a manner that (1) provides sufficient and meaningful cost, benefit, schedule, and risk information to effectively assess investments and (2) would allow the Service to compare investments against one another, prioritize them, and select those that best meet its needs and priorities. For example, program narratives do not include complete cost information: while expected capital costs are reported for the next 6 years, operating expenses are reported only through the end of the current fiscal year. Further, the criteria do not include a weighting schema or other method that would allow the Establish Team to compare the risk-adjusted returns of competing investments. The CTO organization, by comparison, uses such criteria to prioritize its investments and assist in making selection decisions. Without portfolio selection criteria that adequately address cost, benefit, schedule, and risk considerations, Postal Service officials have less assurance that they are selecting the mix of investments that best meets the organization’s needs and priorities. Table 10 shows the rating for each key practice required to implement the critical process for defining proposal selection criteria at the stage three level of maturity and summarizes the evidence that supports these ratings. This critical process ensures that all IT investments are consistently analyzed and prioritized according to the organization’s portfolio selection criteria, which should include cost, benefit, schedule, and risk considerations.
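One common form that a "weighting schema or other method" of this kind can take is a weighted multi-criteria score that places competing investments on a single comparable scale. The sketch below is a generic illustration, not the Postal Service's or ITIM's prescribed method; the criteria weights and the 1-to-5 scores are hypothetical.

```python
# Generic weighted-scoring sketch for ranking competing IT investments.
# The weights (which sum to 1.0) and the 1-to-5 criterion scores are
# hypothetical; a higher "risk" score here denotes lower risk.

WEIGHTS = {"benefit": 0.35, "cost": 0.25, "schedule": 0.15, "risk": 0.25}

def weighted_score(scores: dict) -> float:
    """Collapse per-criterion scores into one comparable figure."""
    return sum(WEIGHTS[criterion] * scores[criterion] for criterion in WEIGHTS)

proposals = {
    "Proposal X": {"benefit": 4, "cost": 3, "schedule": 5, "risk": 2},
    "Proposal Y": {"benefit": 5, "cost": 2, "schedule": 3, "risk": 4},
}

# Rank the proposals so a review board can prioritize them consistently.
ranked = sorted(proposals, key=lambda name: weighted_score(proposals[name]), reverse=True)
for name in ranked:
    print(f"{name}: {weighted_score(proposals[name]):.2f}")
```

Under such a schema, a board could fund only proposals scoring above a chosen threshold or compare the marginal value of one investment against another, which is the capability the report finds lacking.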
According to ITIM, effective investment analysis requires, among other things, that (1) portfolio selection criteria have been developed; (2) cost, benefit, schedule, and risk data are assessed and validated for each investment; (3) the investment review board compares each investment against the organization’s portfolio selection criteria; and (4) the investment review board creates a ranked list of investments using the portfolio selection criteria. The Postal Service has executed two of the key practices in this area. First, the Postal Service has adequate resources for analyzing investments, including CAPE and other dedicated staff. Second, the Postal Service ensures that cost, benefit, schedule, and risk data concerning IT investments are validated. The Service does this in two particular instances: (1) during the development of the DAR, the document for approving capital projects, there is a validation step in which Finance Department staff independently verify the accuracy and integrity of the data presented and a validation memo is signed by the Controller to confirm that the data are correct; (2) as part of the annual budget formulation process, the data submitted on the various budget proposals are reviewed, and thus validated, by various levels of management up to the senior vice president of the functional unit sponsoring a proposal. Nevertheless, the Postal Service has a number of weaknesses in the way it analyzes investments for portfolio management. First, it does not have policies and procedures that sufficiently address this critical process. Its F-66 manual includes some procedures for analyzing proposed investments; however, it does not specify an approach for analyzing existing investments to make portfolio selection decisions. Nor does it describe a process to establish portfolio selection criteria that adequately incorporate cost, benefit, schedule, and risk considerations. 
In addition, it does not address capital projects that have been deployed for more than 18 months or ongoing infrastructure-type projects. Second, while investments are analyzed by executives during the approval process, through the review of quarterly status reports, and during the annual budget formulation activities, these investments are not assessed against portfolio selection criteria that adequately consider cost, benefit, schedule, and risk factors. The F-66 manual does not explicitly require the preparation of a risk assessment when the DAR is developed for a new investment. Further, when the Establish Team reviews budget documents as part of the annual budget formulation process, these documents do not provide sufficient information on cost, benefit, and risk to determine whether investments are progressing according to the approved DAR parameters. Table 11 shows the rating for each key practice required to implement the critical process for analyzing investments at the stage three level of maturity and summarizes the evidence that supports these ratings. At the stage three level of maturity, organizations design processes for developing an IT portfolio and develop written policies and procedures to ensure that projects are selected that best fit their strategic business direction, needs, and priorities. Each organization has practical limits on funding, the risks it is willing to take, and the length of time for which it will incur costs on a given investment before benefits are realized. To address these limits, stage three organizations group existing and proposed IT investments into predefined logical categories, for example, by cost or by type of investment (i.e., facilities or equipment). Once this is accomplished, organizations can compare investments and proposals within and across the portfolio categories and select the best overall portfolio for funding. 
According to ITIM, the portfolio development process cannot be performed well unless certain conditions are first satisfied, including (1) providing adequate resources for a portfolio development process; (2) appointing to IT investment boards people who exhibit core competencies in developing portfolios; (3) analyzing individual IT investments, including validating associated cost, benefit, schedule, and risk data; and (4) defining investment categories. Organizations should also create written policies and procedures for establishing and maintaining the portfolio development process. Assuming that this foundation is in place, the IT investment boards of stage three organizations assign each investment to a portfolio category, examine the mix of existing and proposed investments across these categories, and make selections for funding. Each IT investment board also establishes annual cost, benefit, schedule, and risk expectations for individual IT projects and gathers and validates data on actual performance. A repository of information on developing portfolios is established, updated, and maintained. Resources required for this critical process typically include staff, supporting tools for developing portfolios, and managerial time and attention to portfolio development. The Postal Service has executed six of the nine key practices for this critical process by providing adequate resources to implement this critical process; assigning competent managers to the board responsible for the portfolio development process; developing common portfolio categories; assigning IT programs and projects to portfolio categories on the basis of established criteria; examining the mix of proposals and investments across the common portfolio categories and making selection decisions for funding; and establishing, updating, and maintaining repositories of portfolio information. 
Postal Service officials reported that adequate management time and staff resources are available for this critical process. In addition, several systems are in use that support portfolio development activities, including PTRS, BCS, and CPS. Postal Service officials stated that the organization provides training in the use of these systems. Moreover, members of the Postal Service’s enterprise-level investment boards are senior-level executives who have had many years of experience in the organization and in working with the IT investment management process. The Postal Service also has defined common IT investment portfolio categories for the organization. The Postal Service’s IT investments are considered to relate either to corporatewide or functional unit activities and are further classified by funding type (capital or expense) and investment type (facilities, equipment, field, or other), as provided for in the organization’s F-66 manual and budget instructions. Postal Service programs and projects are now being assigned to portfolio categories based on the criteria described above. Further, the Establish Team examines the organization’s entire portfolio of IT investments annually and then selects programs and projects for funding. The Postal Service collects and stores information relating to the portfolio development process in a variety of forms ranging from IT project and systems inventories and finance/budget and corporate planning systems to manual backup books maintained by the Finance Department. Even though these important steps in stage three portfolio development have been taken, some weaknesses remain. Although the Postal Service has defined investment categories in its F-66 manual, it has yet to develop written policies and procedures for establishing and maintaining portfolio information on its IT investments.
Moreover, even though the CTO organization monitors data on the performance of IT projects, the Establish Team does not perform complete analyses of the performance of individual investments or establish cost, benefit, schedule, and risk expectations for each investment annually. While the Establish Team reviews investments each year from a strategic planning and funding perspective, neither the analyses it performs nor the Investment Highlights reports on the projects provided to the Board of Governors adequately consider actual benefit and risk or contain complete information on cost. For example, the business case for the Surface-Air Management System includes information on over a dozen different types of qualitative benefits expected to be obtained by investing in that project. However, Investment Highlights reports provided to the Board of Governors include information only on the number of installations completed to date. In addition, although information on projects’ capital costs is included in Investment Highlights, information on operating expenses is not. As a result, information on these aspects of project performance is not routinely provided to the Board of Governors. Without complete cost, benefit, schedule, and risk data, Postal Service executives do not have the information needed to analyze and compare all investments and select those that best fit with the strategic business direction, needs, and priorities of the organization. Table 12 shows the rating for each key practice required to implement the critical process for portfolio development at the stage three level of maturity and summarizes the evidence that supports these ratings. The purpose of this critical process is to ensure that each IT investment achieves its cost, benefit, schedule, and risk expectations.
It builds on the critical process for IT project oversight at stage two by adding elements of benefit measurement and risk management to an organization’s investment control capability. Executive-level oversight of project-level risk and benefit management activities provides the organization with increased assurance that each investment will achieve the desired cost, benefit, schedule, and risk expectations. According to ITIM, effective oversight of portfolio performance requires, among other things, that the investment board (1) has access to up-to-date cost, benefit, schedule, and risk data; (2) monitors the performance of each investment in its portfolio by comparing actual project-level cost, benefit, schedule, and risk data to the predefined expectations for the project; and (3) corrects poorly performing projects. The Postal Service is executing six of the nine key practices for this critical process by providing adequate resources for monitoring and controlling IT project performance and giving investment boards access to data on actual and expected cost, benefit, schedule, and risk that are maintained in the organization’s IT project and system inventory. In addition, the CTO Investment Review Board provides oversight for all IT projects by monitoring these data and providing feedback on performance to sponsoring organizations. These oversight activities include working with IT project management teams to identify and address any development and deployment issues that may arise. Despite these strengths, however, the Postal Service has yet to develop policies and procedures that address performance oversight from a portfolio perspective. 
Moreover, while expectations are established in DARs or business cases that include cost, benefit, schedule, and risk, and the CTO organization monitors actual performance results, the Postal Service has not established a mechanism for revising expected benefit and risk expectations after its boards approve the investments or for notifying the Establish Team when an investment has not met cost, benefit, schedule, and risk expectations. Until the Postal Service executes all key practices associated with this critical process, senior executives will be less likely to determine whether the investments they have selected are delivering mission value at the expected cost and risk. Table 13 shows the rating for each key practice required to implement the critical process for portfolio performance oversight at the stage three level of maturity and summarizes the evidence that supports these ratings. Organizations that achieve the stage four level of maturity evaluate their IT investment processes and portfolios to identify opportunities for improvement. At the same time, these organizations are able to maintain the mature control and selection processes that are characteristic of stage three in the ITIM model. A key tool for accomplishing this critical process is the post-implementation review, in which outcomes of individual IT investments are compared to the organization’s plans and expectations. This review typically results in identifying lessons learned from the investment experience that are used by the organization to improve its understanding of the key variables in the investment’s business case. Analyzing a number of post-implementation reviews can also provide insights into the organization’s overall IT investment management process. This analysis is facilitated by classifying individual investments into logical categories and using the lessons learned to fine-tune associated processes, as well as aspects of the portfolio. 
In addition, at stage four maturity, organizations are capable of systematically planning for and implementing decisions to discontinue or deselect obsolete, high-cost, and low-value IT investments and planning for successor investments that better support strategic goals and business needs. Organizations acquire stage five capabilities when they create opportunities to shape strategic outcomes by learning from other organizations and continuously improving the manner in which they use IT to support and improve business outcomes. Thus, organizations at the stage five level of maturity benchmark their IT investment processes relative to other best-in-class organizations and conduct proactive monitoring for breakthrough information technologies that will allow them to significantly improve business performance. Table 14 shows the purpose of each critical process in stages four and five. The Postal Service is executing five of the thirty-four key practices associated with the five critical processes in stages four and five. For example, it has policies and guidance for conducting post-implementation reviews and provides training to individuals involved in these activities. The Postal Service also provides resources for identifying opportunities for IT-driven strategic business change. However, it does not regularly capture lessons learned from post-implementation reviews, the performance of its portfolio, or benchmarking in order to improve its investment processes. In addition, it does not actively manage the succession of its IT systems or investments. Until it implements stage four and five critical processes, the Postal Service will not be positioned to effectively improve its IT investment management processes and successfully leverage IT to improve business outcomes. Table 15 summarizes the status of the Postal Service’s critical processes for stages four and five and shows how many associated key practices it has executed. 
The following discussion provides information on steps the Postal Service has taken to implement each of the critical processes. Post-implementation reviews are performed (1) to examine differences between estimated and actual investment costs and benefits and possible ramifications for unplanned funding needs in the future and (2) to extract lessons learned about the investment selection and control processes that can be used as the basis for management improvements. Investments that have completed development and those that were terminated before completion should be reviewed promptly to identify potential management and process improvements. According to ITIM, this critical process involves identifying the projects to be reviewed; initiating reviews and developing policies and procedures for conducting the reviews; and ensuring that quantitative and qualitative data are collected, evaluated for reliability, and analyzed during the course of the reviews. The Postal Service has executed three of the six key practices required to implement the critical process for post-implementation reviews. First, the CTO organization and Finance Department have each developed policies and procedures for performing post-implementation reviews. These include the CTO organization’s Program Management Process Guidelines and the Finance Department’s National Cost Study Process. Second, according to Postal Service officials, the Service has adequate resources to perform review activities. Third, Postal Service staff are trained in conducting post-implementation reviews. The Postal Service, however, has several weaknesses in this critical process. First, no investment board has been assigned responsibility for (1) identifying projects for which post-implementation reviews are to be conducted and (2) ensuring that post-implementation reviews are initiated. 
Second, the Postal Service has no institutionalized process for routinely (1) identifying projects for which post-implementation reviews are to be conducted, (2) collecting quantitative and qualitative data while performing post-implementation reviews, and (3) developing lessons learned and improvement recommendations about the investment process and capturing them in a written product or knowledge base. This is evidenced by the fact that, while the Finance Department’s Program Performance Group is responsible for conducting post-implementation cost studies, only three of them have been performed since 1990. Until the Postal Service implements an institutionalized process for routinely performing post-implementation reviews, senior executives will lack key information needed to improve the performance of the IT investment portfolio as well as the investment management process. Table 16 shows the rating for each key practice required to implement the critical process for post-implementation reviews at the stage four level of maturity and summarizes the evidence that supports these ratings. Stage four evaluations of portfolio performance enable organizations to determine what contribution their collected pools of IT investments are making to mission goals and needs. Evaluations of this sort are similar to post-implementation reviews involving individual projects, but different in that they apply to entire IT investment portfolios. This critical process seeks to determine how well IT investments are helping to achieve the strategic needs of the enterprise, satisfying the needs of individual units and users, and improving business performance through IT. Performance information for an organization’s entire portfolio of investments has to be compiled and analyzed and trends examined. Developing baseline performance data is critical to making this a meaningful exercise.
According to ITIM, the process of addressing problems and opportunities for improving the investment process and the investment portfolio usually involves developing written policies and procedures for the investment management process, creating recommendations for the IT investment board, documenting the decision criteria used to measure portfolio performance, deciding whether or not to implement each recommendation, and tracking the progress made. Resources required for this critical process typically include staff support, methods and tools to aid the teams conducting post-implementation reviews, and current and historical portfolio data. To advance to the stage four level of maturity, an organization must first ensure that all of the prerequisites, commitments, and activities that are characteristic of levels two and three have been put into place. The next step is to develop written policies and procedures for evaluating and improving its IT investment portfolio that include defining requirements for measuring performance data. Cost, benefit, schedule, and risk must all be fully considered to enable an organization to construct a picture of the overall performance of its IT investment portfolio. The Postal Service is not executing any of the six key practices for this critical process. First, while the Establish Team reviews existing and proposed IT investments each year as a part of the organization’s budget formulation process, no evaluations are being done that are designed to identify opportunities for improving portfolio performance. Also lacking are written policies and procedures that define the organization’s key measures and the methods used to assess portfolio performance, evaluation methods, reporting requirements, and other applicable policies and procedures. 
Because the Postal Service has not collected data for this critical process, including baseline performance information on its IT portfolio, it is more difficult to perform evaluations that could result in recommendations for improving its process for selecting a portfolio. Table 17 shows the rating for each key practice required to implement the critical process for evaluating and improving the performance of the portfolio at the stage four level of maturity and summarizes the evidence that supports these ratings. Managing the succession of systems and technology entails periodically evaluating IT investments to determine whether they should be retained, modified, replaced, or otherwise disposed of. According to ITIM, system and technology succession management includes (1) defining policies and procedures for managing the IT succession process, (2) assigning responsibility for the succession management process, (3) developing criteria for identifying IT investments that may meet succession status, and (4) periodically analyzing IT investments to determine whether they are ready for succession. This critical process enables the organization to recognize low-value or high-cost IT investments and augments the routine replacement of systems at the end of their useful lives. It also supports the development of a forward-looking, solution-oriented view of IT investments that anticipates future resource requirements and allows the organization to plan appropriately. The Postal Service has not performed any of the nine key practices required to implement this critical process. For example, while the Postal Service’s project management guidelines define procedures for retiring investments, they do not describe how to review systems regularly to identify candidates for retirement.
According to officials from the CTO organization, decisions on succession management are usually made between business unit managers and CTO office staff (e.g., portfolio managers), but no individual or group has been assigned responsibility for managing the succession process from an enterprise perspective, which would allow the Postal Service to better anticipate future resource requirements. Finally, the Postal Service has neither defined the criteria for identifying investments that may meet succession status nor taken steps to regularly analyze IT investments for possible succession. According to CTO organization officials, the Postal Service has retired or replaced roughly 250 systems since 1998. However, this was not done within the structure of an institutionalized succession management process. Postal Service officials have stated that IT investments are reviewed, for example, during the annual budget formulation process to analyze them for possible succession. However, without an institutionalized process for succession management, the Postal Service may not be able to identify IT investments that are eligible for succession early enough to plan a smooth transition to successor systems. Table 18 shows the rating for each key practice required to implement the critical process for managing the succession of systems and technology at the stage four level of maturity and summarizes the evidence that supports these ratings. In stages two through four, organizations ensure that sound investments are selected, controlled, and evaluated within the context of the IT investment management process and the enterprisewide portfolio. In the stage five level of maturity, a shift in orientation occurs as organizations evolve toward using information on leading technologies to identify opportunities for business change and to implement changes in their overall business process.
Benchmarking the investment process allows organizations to identify opportunities for improvement and to implement measurable improvements in their IT investment management processes so that these processes meet or exceed those used by best-in-class organizations. Improvements can include using innovative investment oversight tools and techniques or improving the feedback mechanisms for lessons learned. According to ITIM, investment process benchmarking includes (1) defining policies and procedures for using benchmarking to improve the IT investment management process, (2) collecting baseline data on the organization’s current IT investment management process, (3) identifying and benchmarking external comparable best-in-class processes for IT investment management, and (4) improving the organization’s investment management processes. The Postal Service has not fully executed any of the seven key practices required to implement this critical process. While there have been some efforts to identify best practices from best-in-class organizations and incorporate these practices into the Postal Service’s IT investment management processes (such as the CTO organization’s use of lessons learned in benchmarking to develop the BCS), the Postal Service has not defined policies and procedures for improving the IT investment management process using benchmarking. It also does not have any institutionalized processes to routinely (1) collect baseline data on the organization’s current IT investment management process, (2) identify and benchmark external best-in-class processes for IT investment management in comparable organizations, or (3) actually improve the organization’s investment management processes. Without these processes, the Postal Service is less likely to learn from best-in-class organizations, which will hinder any concerted effort to improve its IT investment management processes. 
Table 19 shows the rating for each key practice required to implement the critical process for investment process benchmarking at the stage five level of maturity and summarizes the evidence that supports these ratings. Information technologies can provide opportunities for an organization to move dramatically in new directions to meet its goals. Thus, once an organization finds it can competently manage its enterprisewide portfolio of investments, it should actively seek out opportunities to use alternative technologies. According to ITIM, stage five organizations provide adequate resources for conducting IT-driven activities that can result in strategic business change. These may include developing an advanced IT laboratory, test center, or library; conducting technical research; employing internal staff and external experts or reviewers; and obtaining supporting tools. Stage five organizations also develop applicable written policies and procedures and designate an official to oversee their implementation. The central focus of these activities is to follow technological events and to identify and evaluate technologies that appear to offer strategic business-changing capabilities. Once a conclusion has been reached that specific technology offers the organization significant opportunities, senior managers plan for and implement changes to the organization’s business processes. Organizations at a stage five level of maturity may create an advanced technology group, a cross-departmental group of experts, or technology centers of excellence. Finally, to strengthen management of these types of activities, mature organizations designate responsibility for this key practice to a single senior-level manager.
The Postal Service has executed two of the six key practices required to implement this critical process by designating responsibility to specific organizational units to support activities aimed at IT-driven strategic business change and by providing a range of related resources. However, steps have yet to be taken to execute the remaining key practices, including creating and maintaining a knowledge base of state-of-the-technology IT products and processes; actively identifying technologies with business-changing capabilities; and planning and implementing strategic changes to business processes on the basis of the capabilities of these technologies. The Postal Service has assigned responsibilities to several units that could leverage IT to implement strategic business change, including its Transformation Plan Office, Office for Strategic Planning, and the CTO organization. Also, within the CTO organization, the Information Technology unit has established the positions of Enterprise Architect and Manager of Technology Standards. To ensure standardization, the Postal Service has also developed the IT Infrastructure Toolkit process and established the Enterprise Architecture Councils and the Management Steering Committee. The Postal Service is also providing a range of resources that could be used to support the critical process of IT-driven strategic business change. The Service is funding a testing laboratory and has established Integrated Business Solutions Systems Centers and developed an IT Toolkit system and associated processes. The IT Toolkit system serves as a repository of information on technologies and application systems that have been approved for use within the Postal Service. In addition, the Postal Service’s CTO organization is taking several steps to initiate changes to the business process based on currently available state-of-the-practice IT approaches.
First, the CTO has developed a plan for a corporate database called the Corporate Data Mart, which could serve as a repository of data from 35 separate Postal Service systems. According to Postal Service officials, the CTO organization is working with each functional unit to determine which legacy systems will transition to the data mart and plans to incorporate future systems in the data mart. This transition may eliminate costly legacy systems or avoid the investment cost to replace them. The CTO organization is sponsoring the Advanced Computing Environment initiative to transition to a less costly distributed computing environment. According to officials, under this approach, activities will be standardized, centralized, and reengineered such that the costs per Postal Service user will be reduced. These accomplishments can be helpful to the Postal Service, particularly in light of its financial difficulties and the need to identify new, more cost-effective ways of accomplishing its mission. By continuing to foster a more coordinated approach to using IT investments to achieve its business goals, using resources from across the organization, and disseminating information that is gathered more broadly, the Postal Service can more effectively capitalize on opportunities uncovered by efforts already underway. Table 20 shows the rating for each key practice required to implement the critical process for IT-driven strategic business change at the stage five level of maturity and summarizes the evidence that supports these ratings. Information technology provides key core operational capabilities that the Postal Service must rely on to achieve its mission. Only by effectively and efficiently managing its IT resources can the Postal Service gain opportunities to further leverage its IT investments and make better allocation decisions among many investment alternatives.
The Postal Service has in place most of the foundational practices required to ensure that IT investments are being selected and monitored to support its overall objectives. A comprehensive process guide for investment management and written policies and procedures for management oversight of investments will allow the Postal Service to better coordinate its IT investment activities and ensure that they are performed consistently. Once the Service has fully implemented all the critical processes for stage two, it will have the controls necessary to allow it to effectively manage its IT investments. The Postal Service shows mixed progress in managing its IT investments as a portfolio. The Service performs many portfolio development and oversight activities. However, it lacks policies and procedures for managing its portfolio. It has not defined criteria that allow it to effectively analyze, prioritize, and select its investments from a portfolio perspective. In addition, the Postal Service’s reporting of performance data is largely limited to capital projects, which are a smaller portion of its portfolio than are operating expenses. Until the Service fully implements critical processes associated with managing investments as a complete portfolio, it will not have ready access to the data it needs to make informed decisions about competing investments. The ability of the Postal Service to continue to improve its investment management process is contingent on its ability to learn from its current practices and investments and from other organizations. The Service currently has no institutionalized processes to learn from its own experience and from other organizations. Such processes can contribute to the long-term success of the Postal Service’s IT portfolio and support its mission.
To strengthen the Postal Service’s capabilities for investment management and address the weaknesses discussed in this report, we recommend that the Postmaster General develop a plan that initially focuses on correcting the weaknesses in critical processes associated with maturity stages two and three before addressing the weaknesses at maturity stages four and five, because critical processes at the lower stages provide the foundation for building those at higher maturity stages. The plan should be developed within 6 months. At a minimum, the plan should specify an approach to

- develop comprehensive guidance that defines and describes the complete investment management process, unifies existing processes enterprisewide, and reflects changes in processes as they occur;
- develop additional process guidance, as needed, to completely define the operations and decision-making processes of investment boards and other management entities involved in managing IT investments;
- ensure that cost, benefit, schedule, and risk expectations are set and approved in the original business case for each investment; that accurate and complete actual cost, benefit, schedule, and risk data are tracked against these expectations; and that status information on these four criteria is periodically reported to executive-level investment boards; and
- establish a structured, transparent, and documented portfolio selection process that assesses, prioritizes, selects, and funds investments according to established portfolio selection criteria, including explicit cost, benefit, schedule, and risk criteria.

The Postmaster General should ensure that the plan specifies measurable goals and time frames, prioritizes initiatives, designates a senior manager responsible and accountable for directing and controlling the improvements, and establishes review milestones.
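The recommended tracking of actual cost, benefit, and schedule data against approved business-case expectations can be illustrated with a minimal sketch. All class names and figures below are hypothetical and not drawn from Postal Service data; the report recommends the practice but does not prescribe an implementation:

```python
from dataclasses import dataclass

@dataclass
class BusinessCase:
    """Baseline expectations approved in a project's original business case."""
    est_cost: float      # estimated total cost, in dollars
    est_benefit: float   # estimated total benefit, in dollars
    est_months: int      # estimated schedule, in months

@dataclass
class Actuals:
    """Actual figures tracked during and after implementation."""
    cost: float
    benefit: float
    months: int

def variance_report(case: BusinessCase, actual: Actuals) -> dict:
    """Percent variance of actuals against the approved baseline.
    Positive cost or schedule variance indicates an overrun;
    negative benefit variance indicates a shortfall."""
    return {
        "cost_var_pct": 100 * (actual.cost - case.est_cost) / case.est_cost,
        "benefit_var_pct": 100 * (actual.benefit - case.est_benefit) / case.est_benefit,
        "schedule_var_pct": 100 * (actual.months - case.est_months) / case.est_months,
    }

# Hypothetical project: a $10 million, 24-month investment that actually
# cost $12 million (20 percent overrun) and took 30 months.
report = variance_report(BusinessCase(10_000_000, 15_000_000, 24),
                         Actuals(12_000_000, 13_500_000, 30))
```

A report of this kind supplies the periodic status information on cost, benefit, and schedule that the recommendation calls for; in practice, risk would be tracked qualitatively alongside these quantitative measures.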
After addressing the stage two and three processes, the Postal Service should create processes required for stages four and five that, at a minimum,

- ensure that guidance for conducting post-implementation reviews is complete, including criteria for selecting systems for review, and that post-implementation reviews are conducted on all appropriate systems;
- establish a process for evaluating and improving portfolio performance;
- establish a process for managing the succession of systems and technology;
- establish a process to benchmark the investment processes of leading organizations to identify opportunities for improvement; and
- establish a process to employ IT investments strategically to improve business outcomes.

The Postal Service’s Chief Financial Officer provided written comments on a draft of this report (reprinted in app. III). In these comments, the Postal Service stated that the report offered an opportunity to consider changes and improvements in its IT investment management processes. The Service added that it would carefully evaluate each of the report’s recommendations to determine the necessary actions for adopting and integrating key practices outlined in the GAO ITIM model that are appropriate for the Postal Service. The Postal Service also identified key points where it stated that it differs from GAO’s IT investment management framework. The Postal Service also explained that it uses a hierarchy of delegations to select and oversee its investments, from the Board of Governors through the lowest level of management, to ensure that senior management can concentrate on strategic issues and the most significant projects. We did observe this structured approach to the selection and oversight process and have recognized it in our report. In succession planning, the Postal Service stated that it uses an institutionalized portfolio approach to address the succession of its IT hardware, software, and systems.
According to the Postal Service, this approach enables senior management to determine strategically driven solutions based on priorities, lessons learned, available technology, best practices, affordability, risk assessments, and business needs. Our guidance suggests that, while each of these aspects may be appropriate as part of a succession management process, effective succession management entails regularly reviewing the performance of existing systems against established criteria. Such a process allows an organization to identify systems that should be retained, modified, replaced, or otherwise disposed of in a timely manner. However, as we stated in our report, the Postal Service does not have such a process. The Postal Service provided comments pertaining to post-implementation reviews that describe cost studies, the budget process, and the activities of the Office of Inspector General (OIG) as satisfying this critical process. We disagree with the Postal Service in this matter. While guidance for cost studies does exist, the Service provided evidence of only three post-implementation cost studies having been conducted since 1990. The Postal Service’s budget process does not satisfactorily address this critical process. Specifically, the budget process does not capture lessons learned and disseminate them to other projects and work processes in order to improve them, which is a major objective of post-implementation reviews. Finally, while OIG does conduct evaluations from which lessons learned may be drawn and used to improve other projects and work processes, OIG evaluations are not part of the regular systems life cycle.
The U.S. Postal Service invests hundreds of millions of dollars in information technology (IT) each year to support its mission of providing prompt, reliable, and efficient mail service to all areas of the country. It must support these operations through the revenues it earns for its services. Growing operating expenses and capital needs in the face of reduced revenues highlight the need for the Postal Service to invest its IT dollars wisely. Accordingly, the Senate Committee on Governmental Affairs and its Subcommittee on International Security, Proliferation, and Federal Services asked GAO to evaluate how well the Postal Service manages its IT investments. The Postal Service has in place many of the foundational capabilities required for managing IT investments described in GAO's IT Investment Management framework, illustrated below. Proposed major projects go through established review processes and must be approved at a high level before being implemented. Control processes also are in place. Although the Postal Service evaluates proposed IT projects before investing in them, it does not fully manage these investments from a portfolio perspective by assessing projects on the basis of indicators that clearly link performance to initial selection criteria. Such a portfolio approach would enable the Postal Service to consider proposed projects along with those that have already been funded and to select the mix of investments that best meets its mission needs. The Postal Service has not yet attained the key attributes associated with most capable organizations, such as evaluating the performance of investments as a whole, capturing "lessons learned," and institutionalizing these lessons to benefit the organization. Until it addresses areas such as these, the Postal Service will not be in a position to continually improve its investment process and leverage its IT capabilities for strategic outcomes.
According to research, comprehensive early intervention programs can positively affect the progress of children with developmental delays and children at risk of having a disability. Services provided by these programs may include speech language therapy, family counseling, and home visits. Research has linked early intervention services to improvements in toddlers’ behavior, interactions between parents and children, infant development, and overall quality of life for children and their families. Additionally, research has found increased mental development and better vocabulary and reasoning skills for children who received early intervention services when compared with those who did not receive these services. Findings from the National Early Intervention Longitudinal Study (NEILS), a research project sponsored by the Department of Education, show that parents report a high degree of satisfaction after receiving 3 years of early intervention services, reporting that their families are better off and that early intervention services are having “a lot” of impact on their child’s development. IDEA is the primary federal education law for infants, toddlers, children, and youth with disabilities. Grants to states for early intervention services and special education and related services for children with disabilities and their families are provided mainly through Parts C and B of the act. These parts have different histories and are generally administered by different agencies at the state level. IDEA Part C was established to ensure that infants and toddlers, from birth to age 3, with disabilities or at risk of developing a disability, and their families receive appropriate early intervention services. Part C focuses on, among other things, enhancing the development of infants and toddlers with disabilities by providing services in a natural environment, such as the home or a child care center.
This part of the law seeks to improve the capacity of the family to meet the child’s needs and reduce educational costs by minimizing the need for special education when the child is older. Part B, in contrast, requires that services, to the extent possible, be provided in educational settings, such as regular classrooms. Part B, which includes state grants for children and young adults ages 3 through 21, and Part B Section 619 preschool grants for children 3 through 5, aims to ensure that children with disabilities have access to a free appropriate public education. Funding for Part B is significantly larger than for Part C programs. In fiscal year 2004, Part C was funded at $444 million, and approximately 279,000 infants and toddlers received services. In contrast, Part B state grants and the Section 619 supplement for preschool services were funded at $10 billion and $388 million, respectively, in 2004. Approximately 6 million children were provided services under Part B state grants, and over 693,000 children were provided preschool services under Part B Section 619. To meet Part C goals, states use funds to develop a statewide, coordinated, multidisciplinary, interagency system of early intervention services for infants and toddlers with disabilities and their families. Developing such a system includes designating a lead agency, preparing and disseminating materials on the availability of services, defining eligibility criteria, and delivering services. To this end, each state has a designated lead agency responsible for the administration, supervision, and monitoring of Part C. In contrast to Part B, which is led by state education departments, Part C is led by the health department in 16 states, education departments in 11 states, and other departments, including combined health and human services departments, in the remaining 23 states.
States are expected to leverage funding, services, and resources from other sources to provide early intervention services. Each state must have a continuous process of public awareness activities and evaluations designed to identify and refer as early as possible all young children with disabilities and their families who are in need of early intervention services. By law, public awareness efforts should include disseminating information to parents on available early intervention services and to all primary referral sources, especially hospitals and physicians. Efforts may also include television ads, pamphlets, and posters describing IDEA Part C and how parents can access services for their child. Once a child is referred and suspected of having a disability, states are required to conduct an evaluation to determine if the child meets the state’s eligibility criteria. In order to be eligible for federal funds under Part C, IDEA requires that states provide services to any child under 3 years of age who is developmentally delayed. These delays must be measured by appropriate diagnostic instruments and procedures or validated by professional opinion, and may occur in one or more of the areas of development—including cognitive, physical, communicative, social or emotional development, and adaptive behavior, such as feeding or toileting. States must also provide services to children who have a diagnosed mental or physical condition that has a high probability of resulting in developmental delay. However, states are free to define what constitutes a developmental delay and specify how this will be measured. In addition, states may choose to serve children who are at risk of having a substantial developmental delay. These may include biological risks, such as low birth weight, and environmental risks, such as parental substance abuse.
Once an eligible delay has been detected, service coordinators work with parents and others to match children with services specific to their needs. Part C requires that every state make certain services available, including special therapies such as physical, occupational, or speech language therapy, and family supports such as home visits. For example, an occupational therapist may come to a child’s home to teach a child to draw, which involves hand and eye coordination. The law also requires that services be provided in children’s natural environments. Figure 2 illustrates the typical process in early intervention programs. Children eligible for Part C can receive early intervention services until they turn 3 years of age. Part C funds can be used to provide services to children from their third birthday to the beginning of the following school year, but as of 2004 only 14 states had adopted such a policy. Thirty states allow for the use of Section 619 preschool funds to provide services to children before their third birthday. As a child approaches age 3, the local education agency (LEA) determines the child’s eligibility for Part B Section 619 preschool services. If eligible for Part B Section 619, the child might also be eligible for extended school year services. An extended school year ensures that a child can continue receiving services even when schools are not in session, for example, during the summer. According to Education, most children under Part B do not receive extended school year services. By contrast, Part C is a year-round program. Eligibility for an extended school year is determined on an individual basis and is generally based on how much a child will regress and the time it will take to regain lost skills. During the most recent reauthorization of IDEA, in 2004, Congress gave states the option of allowing children to continue to receive services under Part C until they become eligible for kindergarten.
States vary in both the criteria used to establish eligibility for services and the means used to assess whether children fit these criteria, but these differences are not consistently related to the percentage of children receiving early intervention services. While Part C is intended to serve infants and toddlers from birth to age 3, the majority of children receiving services nationwide and in most states are toddlers between ages 2 and 3. Officials in states we visited told us that despite their various public awareness efforts, there are a number of challenges in identifying all children eligible for services, specifically reaching children whose families speak limited English or live in rural areas. Comprehensive data on the number of children who could benefit from early intervention are not available; many conditions covered by Part C—such as emotional disorders and learning disabilities—are not systematically tracked. Nationwide, states’ eligibility criteria for Part C services vary, with most states specifying the amount of delay in development a child must experience to be eligible for services, while a few rely exclusively on the judgment of a multidisciplinary clinical team. IDEA generally gives states the discretion to determine specific eligibility criteria and diagnostic procedures. For example, Part C specifies that a child have an established condition that has a high probability of resulting in a developmental delay, or that a delay is present in one or more areas of development—cognitive, physical, communicative, social or emotional, or adaptive—and requires that all states allow the use of informed clinical opinion in their evaluations. However, states can determine the amount of delay a child must experience in order to be eligible for services. Part C also gives states discretion to identify the appropriate diagnostic instruments to measure the extent of a child’s delay or to rely exclusively on the informed opinion of professionals.
For example, Arizona requires a 50 percent delay in one or more aspects of early childhood development, such as physical or emotional development. New Jersey’s eligibility criteria vary depending on the number of areas in which a child is developmentally delayed. The state requires that children have a 33 percent delay in one area of development, but a 25 percent delay in two or more areas of development. The Centers for Disease Control and Prevention (CDC) noted that the significance and implication of a given percentage delay vary across areas of development. For instance, according to CDC, a 25 percent delay in motor skills development has much different implications for services for a child than a 25 percent delay in language development. Other states’ eligibility criteria are based on the number of months or standard deviations from age norms. For example, in Massachusetts, a 24-month-old child functioning at an 18-month-old level could be eligible for services. In Georgia, a child whose cognitive abilities are at least two standard deviations less than the abilities of most children at the same age would be eligible for services. Hawaii does not specify a percentage delay and instead relies on the judgment of a multidisciplinary team, which generally includes either a speech therapist or special educator and an occupational or physical therapist. Despite wide variation in how states define eligibility, variation among states in the percentage of children served is not consistently explained by eligibility criteria. For example, Alabama, which has broad eligibility criteria (25 percent delay in one or more areas), served only 1.3 percent of infants and toddlers in 2004, while North Dakota, which has stricter eligibility criteria (50 percent delay in one area, 25 percent delay in two or more areas, informed clinical opinion), served 2.8 percent of its infants and toddlers.
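The three styles of eligibility threshold described above (percentage delay, month-equivalent delay, and standard deviations below age norms) can be expressed as simple checks. The state thresholds below come from the text; the function names and sample scores are illustrative only, not actual state assessment procedures:

```python
def percent_delay(chron_age_months: float, func_age_months: float) -> float:
    """Developmental delay expressed as a percentage of chronological age."""
    return 100 * (chron_age_months - func_age_months) / chron_age_months

# Massachusetts-style example from the text: a 24-month-old functioning
# at an 18-month-old level shows a 25 percent delay.
mass_delay = percent_delay(24, 18)

def eligible_arizona(delays_pct):
    """Arizona: 50 percent delay in one or more areas of development."""
    return any(d >= 50 for d in delays_pct)

def eligible_new_jersey(delays_pct):
    """New Jersey: 33 percent delay in one area, or 25 percent delay
    in two or more areas of development."""
    return any(d >= 33 for d in delays_pct) or sum(d >= 25 for d in delays_pct) >= 2

def eligible_georgia(cognitive_z_score: float) -> bool:
    """Georgia: cognitive abilities at least two standard deviations
    below the abilities of most children at the same age."""
    return cognitive_z_score <= -2.0
```

Restating the Massachusetts month-based example as a percentage shows why month-equivalent and percentage criteria are often interchangeable: 6 months behind at age 24 months is exactly a 25 percent delay.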
In 2004 the percentage of children served from state to state ranged between 1.3 and 7.1 percent. Although not required by Part C, as of March 2005, 8 states—California, Hawaii, Indiana, Massachusetts, New Hampshire, New Mexico, North Carolina, and West Virginia—also served children at risk of having a substantial developmental delay. For example, in Hawaii, children from families where child abuse or neglect is present may qualify for services. In Massachusetts, children born with low birth weight or chronic lung disease may qualify for services. States that we visited that do not serve at-risk children—Colorado, Illinois, Maryland, New Jersey, and Oregon—expressed interest in serving them but told us that the additional costs associated with increasing the number of eligible children prevented them from doing so. Instead of providing services to at-risk infants and toddlers under IDEA Part C, some states track at-risk children or provide services to them through other programs. For instance, in Ohio, children at risk are served through a statewide program, funded in part by federal dollars, known as Ohio Early Start. Through this program, they receive services similar to those children receive under Part C. While Part C funding is intended to serve infants and toddlers from birth to age 3, the majority of children receiving services are toddlers between ages 2 and 3. In 2004, infants (children under the age of 1) constituted only 14 percent of the approximately 279,000 children served nationwide, and 2- to 3-year-olds accounted for 54 percent. Likewise, in 38 states, the majority of children served were 2- to 3-year-olds. In Maryland and Illinois, 2- to 3-year-olds made up 54 percent and 55 percent of the children served, respectively. 
OSEP and state officials told us that a majority of children enter the Part C system after age 2 because this is the age at which speech language delays become apparent, and they indicated that such delays are not easily detected in younger children. According to Education officials, difficulty detecting deficiencies in younger children is due to numerous factors, including difficulties in assessment, pediatrician or parent "wait and see" attitudes, and lack of parental consent. Children who enter the Part C program in infancy are generally those diagnosed at birth with conditions such as chromosomal abnormalities and genetic or congenital disorders. It also appears that many children are not identified as needing services until they are older. Part B Section 619, which serves children ages 3 through 5 years, serves many more children than Part C, as shown in figure 3. In 2004, Section 619 served over 693,000 children, compared with approximately 279,000 children under Part C, and this pattern is mirrored in most states. This may be attributable to a variety of factors. Some delays become more apparent as children get older. Developmental delays are also more likely to be detected once a child enters a group setting, such as a preschool or kindergarten program, when comparison with peers may highlight some delays. Additionally, some parents may turn to private insurance to pay for services during the first few years of a child's life, and enter the IDEA system when their child enrolls in a formal education program at ages 3, 4, or 5. However, Massachusetts and Hawaii serve at least the same number of children in their Part C programs as they do in their Part B Section 619 programs. Both states include at-risk children in their Part C eligibility criteria. 
Officials in the 7 states we visited told us that a number of obstacles prevented them from reaching all children, even though all of these states, as required by law, had developed public awareness campaigns to help identify infants and toddlers in need of services. To inform the public of the program, states used television, radio, and newspaper ads; presentations at community fairs; and distribution of pamphlets and brochures at doctors' offices, hospitals, and other appropriate locations. For example, in one of the sites we visited, posters were developed to hang in doctors' offices across the state to help inform parents about Part C. Despite their public awareness campaigns, the states we visited reported having difficulty reaching all eligible children. Officials noted that it can be especially difficult to reach families for whom English is a second language. While some states we visited produced public awareness materials in Spanish, they had not expanded their efforts to include materials in other languages. Officials also told us that it can be hard to reach families who live in rural areas, as they may visit a pediatrician less frequently given the long distance they must travel to get to the doctor. While officials in 6 of the 7 states we visited noted that physicians were the principal source of referrals, they also told us that they believed physicians were hesitant to make referrals to Part C programs because of a fear of misdiagnosing a child with a disability. They believed that a misdiagnosis could cause unnecessary anxiety in a parent whose child is developing more slowly but would eventually begin to demonstrate age-appropriate skills without needing early intervention services. Additionally, the American Academy of Pediatrics found through its own studies that a lack of understanding of the early intervention program's processes and procedures is a barrier to physicians' referring children. 
States provide a broad array of early intervention services to eligible children and face similar challenges in recruiting and retaining staff to provide these services, but they vary in the sources of funding they draw from. States provide a wide range of medical and educational services to children and their families and rely on professionals, including occupational therapists, physical therapists, and speech language pathologists, to deliver these services. Yet officials in the states we visited reported that they are finding it increasingly difficult to recruit and retain these individuals. To fund early intervention services for children from birth to age 3, states relied on funding from multiple sources, including federal, state, and private funding. However, some states reported difficulties accessing certain types of funding, such as Medicaid. As required under Part C, states provide a broad array of early intervention services to infants and toddlers. Under Part C, infants and toddlers with a disability are entitled to receive an evaluation of their strengths and needs, service coordination, and support for a smooth transition from early intervention to preschool programs. In addition, children receive individualized services that may include physical therapy, family counseling, and nutrition services. States, as required by law, reported making all services shown in figure 4 available to infants and toddlers. Figure 5 shows that the most frequently received services nationwide are speech language therapy, special instruction, physical therapy, and occupational therapy. Psychological and nutrition services were among the least frequently provided. The states we visited were similar in their mix of services. 
For example, in states such as Maryland, Oregon, and Colorado, speech language, physical, and occupational therapy, to help with skills like feeding, walking, and talking, were the most frequently provided services, and services such as psychological services and nutrition services were rarely provided. These services were provided in a variety of settings, including the home, hospital, and day care, and through public and private service providers. For instance, according to Maryland officials, LEAs, departments of health, and departments of human services in the state provide services to infants and toddlers in addition to private providers. In Massachusetts, a network of private programs provides early intervention services under contract with the state. Officials in each of the states we visited reported challenges in recruiting and retaining staff to provide early intervention services. Specifically, speech language pathologist and occupational therapist positions were the most difficult to fill. Officials cited several reasons for these challenges. Early intervention staff are required by Part C to serve children in natural environments, such as homes or child care centers. This requires staff to travel to these locations, which can be time-consuming and costly. For instance, in Hawaii, state officials told us that it is hard to schedule services for children in neighboring islands because of the long travel times to reach them. Additionally, state officials told us that salaries earned by early intervention contractors were not always competitive with salaries and benefits available in the private health care sector. These challenges make it difficult for some early intervention programs to hire professional staff. Understaffing often results in heavier caseloads, in which children do not receive services or receive them less often than intended. 
To help pay for services for infants and toddlers, states draw on a range of federal, state, and local funding sources. As shown in figure 6, states accessed funds from a variety of sources at the federal level, including the Child Care and Development Block Grant, IDEA Part B, and Medicaid, and from the state level. See appendix I for a glossary of these federal and state funding sources. After federal Part C dollars, state general revenue funds are the funding source states use most frequently. All 50 states reported using state general funds. For most states we visited, local support represented a small proportion of reported early intervention funding, but in one, Maryland, it accounted for 51 percent. States also reported receiving funds from local sources, private insurance, and fees collected from a child's family. For example, New Jersey charges a sliding monthly fee based on family size and income relative to federal poverty guidelines. State officials said families that can afford to contribute to the cost of service provision do so, but families that cannot afford the fee still receive services. In fiscal year 2003, New Jersey collected $43,862 in revenue from this fee, which made up less than 1 percent of its reported early intervention service funding. In 4 of the 7 states we visited, the state provided most of the funding for services for infants and toddlers, and Part C represented a smaller percentage of total funding. For example, in Illinois, state general revenue funds represented 57 percent of the total funding reported for infants and toddlers with disabilities, and Part C funds represented 17 percent. However, Part C represented a larger percentage of reported funding in certain states. For instance, in Colorado, Part C funds made up 38 percent of funds reported for infants and toddlers with disabilities. See table 1 for funding sources in the states we visited. 
Beyond our collection of funding data in our seven site visits, we looked at funding data for all 50 states by examining the information states provided to OSEP as part of their annual performance reports. Their data included federal, state, and local funding sources, as well as the dollar amounts for each. However, during the course of our review, we found that the data were incomplete. For instance, Hawaii did not report funding for two programs that provide early intervention services. We found similar gaps in the funding data other states reported to OSEP. During the course of our review, OSEP concluded the funding data from states were unreliable and announced plans to stop collecting such data. States we visited reported challenges in accessing certain funding sources. For some smaller programs and funding sources, officials in some states we visited said the paperwork was too cumbersome for the small amount of funding they might receive in return. In other cases, some officials reported difficulty obtaining Medicaid reimbursement for Part C services. In Oregon, where the state department of education is the lead agency, officials explained that the different terminology educators use to describe certain needed services makes it hard to access Medicaid for early intervention services. For instance, Medicaid may pay for occupational therapy if the purpose is health-related in nature—such as teaching a child to eat. But Medicaid may not provide reimbursement if the stated purpose of the therapy appears educational, such as teaching a child to grasp a crayon to draw. Despite the challenges some states reported, Massachusetts officials cited a strong and collaborative working relationship with Medicaid and private insurance. For example, since 1985, the state has had operational standards that include reimbursement of virtually all Part C services through Medicaid. 
OSEP monitors the states, which in turn oversee local Part C programs by examining data on how well programs identify, serve, and transition children to other programs when they are too old for Part C. In its oversight, OSEP tracks data on program performance submitted by states through annual performance reports and other mechanisms. As part of its efforts, OSEP uses two key performance indicators—percentage of infants and toddlers receiving early intervention services and the percentage of these children receiving services in natural environments—to target site visits and technical assistance to programs most in need of guidance. States oversee Part C in similar ways but are free, within certain parameters, to design their own oversight strategies. Although federal and state data and oversight efforts have helped identify some performance problems, challenges remain in transitioning children from Part C to Part B Section 619 and other follow-on preschool programs. In 5 of the 7 states we visited, officials said that some children who turn 3 during the summer and are eligible for Part B preschool experience service gaps when school is not in session. OSEP does not have data on how frequently children are provided extended year services during the summer months. To ensure that programs are managed well and that eligible infants and toddlers receive the services they need, OSEP monitors the states by collecting and tracking key data. Specifically, each state submits an annual performance report to OSEP, which includes a narrative on five areas of program performance and plans for improvement. 
States report on (1) what they are doing to identify children and the effectiveness of these efforts; (2) how well they are helping families develop the skills they need to help their children; (3) whether services are provided to children in a natural environment, such as the home, day care, or other programs for typically developing children; (4) whether transition planning is available to children and their families; and (5) what they are doing to supervise and manage local programs. States report on progress or challenges in meeting performance goals and state-developed indicators as well as projected timelines, activities, and resources needed to achieve future targets. For example, with respect to identifying all children eligible for services, Illinois set a goal for the period covering July 2003 to June 2004 to increase the percentage of children receiving early intervention services to 2.6 percent of all children and to screen 200,000 children for developmental delays, approximately 37 percent of the state's population age 0 to 3. In its annual performance report for that period, Illinois described the strategies it used to exceed its participation target—2.76 percent of children received services—and explained why it fell 58,000 children short of its target for screenings. In addition to information submitted as part of the annual performance reports, states also report data to OSEP in five areas: (1) the number and percentage of children receiving services, (2) the specific settings in which children receive services, (3) the number of children who stopped receiving Part C services and the reason for stopping, (4) the number and types of services provided, and (5) the number of clinical personnel employed or contracted to provide services. IDEA requires states to submit data in the first three areas, and OSEP, under authority granted to it in IDEA, requires states to submit data in the final two areas. 
For future reporting periods, OSEP plans to discontinue collecting personnel data because they were found to be unreliable. Additionally, OSEP will stop collecting information about the number and types of services provided. The reporting data complement and inform topics covered in the annual performance reports. OSEP uses the annual performance reports and other reporting data to identify problem areas and target its oversight efforts. In particular, OSEP compares states against the national average on two performance indicators: (1) the percentage of all infants and toddlers in the state receiving early intervention services, which was 2.2 percent as of 2003, and (2) the percentage of infants and toddlers with disabilities receiving early intervention services in a natural environment, which was 83 percent as of 2002. These indicators were developed by OSEP with input from interested parties, including states and the Centers for Disease Control and Prevention. OSEP officials said they chose these indicators because of their confidence in the accuracy of the data and because they are closely linked to other Part C requirements. OSEP considers whether states have fallen below the national average when deciding whether to target states for technical assistance and closer monitoring. In 2003, half of all states served less than 2.2 percent of children. OSEP officials note that the indicators do not directly measure compliance with Part C, but they serve as an early warning signal that states may need assistance. OSEP relies on the first performance indicator as a measure of the level of access states are providing for early interventions and the success of efforts to identify all eligible children. It has collected this performance information since at least 1996, and the percentage of the nation's children between birth and age 3 receiving services has steadily increased since 1998—from 1.6 percent to 2.2 percent in 2003. 
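The first indicator reduces to the share of a state's birth-to-3 population receiving services, measured against the 2.2 percent national average. A minimal sketch of that screen follows; the function names are ours, and OSEP's actual decision process weighs additional factors.

```python
NATIONAL_AVG_PERCENT_2003 = 2.2  # percent of infants and toddlers served nationwide

def percent_served(children_served, population_under_3):
    """Share of a state's birth-to-3 population receiving Part C services."""
    return 100.0 * children_served / population_under_3

def below_national_average(state_percent, national_avg=NATIONAL_AVG_PERCENT_2003):
    """States below the national average may be targeted for technical
    assistance and closer monitoring; this is a signal, not a compliance finding."""
    return state_percent < national_avg

# Figures from the report: Nevada served 0.9 percent of its infants
# and toddlers in 2003; the highest-serving state reached 7.7 percent.
print(below_national_average(0.9))  # True
print(below_national_average(7.7))  # False
```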
Twenty-five states met or exceeded this indicator in 2003. Of these 25 states, 7 served between 3.4 and 7.7 percent. The fact that half of all states served 2.2 percent or more, and some served as much as 7.7 percent, combined with the known difficulties in reaching all eligible children, suggests that the actual eligible population may be larger than the number of children states are identifying. The Centers for Disease Control and Prevention told us that comprehensive data on the number of children who could benefit from early intervention are not available. OSEP pays particular attention to states that do not meet its performance indicator. Failure to meet this indicator can be a signal that the state is not doing enough to identify all eligible children and raise public awareness of available early intervention services. First, OSEP might encourage these states to seek help from technical assistance centers or OSEP staff. States can get technical assistance on an ongoing basis through several vehicles, such as conferences, six regional centers, research and training centers, and a national center. Second, OSEP might schedule a site visit, at which it would interview state and local officials, providers, and parents and review program data in more depth. After OSEP completes a site visit, it prepares a monitoring report addressing strengths and areas of noncompliance with Part C. Using data from annual performance reports and site visits, OSEP has found states out of compliance with Part C on a number of issues related to the goal of identifying all eligible infants and toddlers for services. 
OSEP finds states out of compliance for, among other reasons, not making adequate public awareness efforts to inform culturally diverse groups about available early intervention services, not disseminating public awareness materials to pediatricians and other referral sources in rural areas, not referring children from underrepresented groups for services in a timely manner, and not carrying out service coordination responsibilities. Between July 1, 2002, and June 30, 2003, 14 states were found out of compliance with child identification requirements. These states served 0.9 to 7.7 percent of their population, with 9 of the 14 states serving less than 2.2 percent of their population. OSEP found Nevada, the state that served the lowest percentage of infants and toddlers (0.9 percent in 2003), out of compliance for not ensuring that all children who may be eligible for early intervention services are identified, located, referred, and evaluated in accordance with Part C. Hawaii, which serves the largest percentage of children, including children at risk of having a substantial developmental delay, was found out of compliance because it lacked procedures to ensure evaluations and assessments were conducted in all the areas required by Part C. When states are not in compliance with Part C and do not show improvement in their performance, even after receiving technical assistance, OSEP has several options. Initially, OSEP might work with a state on a plan of corrective action with a timeline, or issue a letter to the state documenting the specific problems. As a last resort, OSEP can impose formal sanctions against a state, including withholding funds, referring the matter to the Department of Justice, entering into a voluntary compliance agreement with a state and its respective lead agency that sets a timeline for bringing the state into compliance, and incorporating special conditions into a state's grant award. 
OSEP reports that it rarely withholds funds or refers any noncompliance issues for Part C programs to the Department of Justice. Two states, South Carolina and Arizona, are currently on compliance agreements, and several have special conditions in their grant awards. OSEP uses its second performance indicator, the percentage of infants and toddlers with disabilities receiving early intervention services in a natural environment, in the same way it uses the first. OSEP officials told us that on the basis of provisions in the 2004 reauthorization of IDEA, they recently developed a new set of performance indicators. States will submit to OSEP baseline data on these measures in December 2005. The new indicators generally build upon data currently being collected to look in new ways at how states provide early intervention services in a natural environment, identify children, transition children to follow-on services, and address supervision and management issues. For example, the new indicators for identifying children include a comparison of the percentage of children served in each state with the average in other states with similar eligibility criteria, and information about the percentage of children who proceeded through the evaluation, assessment, and service planning stages of the early intervention system according to timelines required by Part C. Similarly, the new transitioning indicators require information about the percentage of children who receive timely transition planning. State lead agencies play a critical role in monitoring and supporting early intervention services through their responsibility for local Part C programs. In the states we visited, the lead agencies generally do not provide services directly to infants and toddlers with developmental delays; local and regional early intervention programs deliver and coordinate services. 
The states, then, are responsible for ensuring the local programs are in compliance with Part C. States use many of the same approaches as OSEP in monitoring and supporting local programs, such as file reviews, reporting requirements, program certification or funding awards, employing training and technical assistance staff, and monitoring visits. States frequently interact with local early intervention programs. For example, Massachusetts officials seek to visit half of their 63 local programs each year. OSEP encourages collection of outcome data from parents and is sponsoring research on outcomes, which is scheduled to be completed in 2006. At least 4 of the states we visited monitor early intervention services by conducting parent surveys. The surveys measure parental satisfaction with the delivery of early intervention services, how well parents feel services are coordinated, and parents’ experiences working with staff to transition their children to follow-on services. OSEP provides funding for technical assistance to help states develop parent surveys. These survey data and information from OSEP’s National Early Intervention Longitudinal Study are potential sources of outcome data about early intervention services. Additionally, the Early Childhood Outcomes Center, a 5-year project funded by OSEP, is providing technical assistance to support states in developing and implementing other outcome measurement systems for children with disabilities. The Early Childhood Outcomes Center is attempting to develop outcome data that can be aggregated at the national level, document program effects, and improve programs at the local and state levels. State Part C officials we spoke with explained that they have to hold local early intervention programs accountable for the same performance indicators for which OSEP holds them accountable. As with OSEP, state Part C coordinators have taken actions to enforce compliance with IDEA. 
Officials in Colorado said they had taken away funding from programs that failed to comply with Part C requirements. Also, when states fail to enforce IDEA requirements, they risk not only being found in noncompliance, but also lawsuits brought by individuals under IDEA. Such was the case in Hawaii and Illinois. In Hawaii, parents and mental health advocates alleged that qualified handicapped children were not receiving mental health services. In Illinois, plaintiffs alleged that the state had a waiting list for children who were eligible for services. Both states settled the lawsuits by agreeing to take specific steps to come into compliance with the act. Although the information that OSEP and the states compile has helped identify some performance problems, overseeing and coordinating children's transitions to IDEA Part B remains a challenge. The transition process involves several sequential steps, and when any of these steps are delayed, a child could miss out on critical services and providers can be left without important information on a child's status. As a child nears age 3, local early intervention staff must inform the child's family about follow-on programs that may be available for the child, such as Part B Section 619. Local early intervention staff, with the approval of the family, hold a conference with the family and, if the child is potentially eligible under Part B, LEA officials, to discuss any services the child may be eligible to receive. This transition planning conference for children potentially eligible under Part B must occur at least 90 days before the child's third birthday. Early intervention staff and the family must develop a written transition plan. And if the child is believed eligible for Part B services, early intervention staff must notify the LEA. 
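The 90-day requirement reduces to a simple date calculation. A minimal sketch follows (the function name is ours; actual scheduling involves coordination among the family, early intervention staff, and the LEA):

```python
from datetime import date, timedelta

def transition_conference_deadline(third_birthday):
    """Latest allowable date for the transition planning conference:
    at least 90 days before the child's third birthday."""
    return third_birthday - timedelta(days=90)

# A child turning 3 on July 1, 2005, would need the conference
# held on or before April 2, 2005.
print(transition_conference_deadline(date(2005, 7, 1)))  # 2005-04-02
```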
The LEA must determine the child's eligibility within a reasonable time frame, and if the child is found eligible, a meeting to develop an individualized education program (IEP) for the child must be conducted within 30 days. Part B requires teachers, parents, school administrators, and related services personnel to develop the IEP shortly after a child is found eligible for Part B services, and the IEP guides the delivery of special education supports and services for a student with disabilities. While IDEA requires states and local programs to provide transition planning and follow these specific procedures, we found in our site visits that delays still occur. Education cited preliminary, unpublished data suggesting that transition problems occur year-round. We found that delays generally occur for two reasons. First, data in annual performance reports indicate that some states have difficulty scheduling transition meetings 90 days in advance of a child's third birthday. State and local officials we interviewed said it was difficult to assemble all of the requisite individuals for the conference before the deadline. Second, some state officials expressed concern about the timing of the LEA's decision on a child's eligibility. The decision may be delayed until the following school year for children with summer birthdays because LEAs generally operate on a 9- or 10-month academic calendar. In 5 of the 7 states we visited, officials said that some children who turn 3 during the summer and are eligible for Part B preschool experience service gaps when school is not in session. As a result of these delays in the transition process, some children who need extended school year services during the summer may not receive them. Most of the states we visited do not keep track of the number of eligible children who do or do not receive extended school year services. There are two potential ways to ensure children do not experience gaps in services. 
First, extending Part C services until children are eligible to enter kindergarten, which was permitted for the first time with the reauthorization of IDEA in 2004, could mitigate some of the challenges associated with transitioning children. However, none of the states we visited plan to exercise this option. States indicated that it would be too costly for them to extend Part C services and that Part B officials are not willing to support doing so with Part B Section 619 funds. Second, Part C funds can be used to provide services to children from their third birthday to the beginning of the following school year, but an OSEP technical assistance center reports that as of 2004, while 30 states permit such use of Part C funds, only 14 states have adopted such a policy. In addition to citing delays, state and local officials cited other obstacles to a smooth transition for children. Local early intervention programs sometimes have to work with multiple LEAs that each have their own eligibility criteria for Part B, which complicates coordination. For example, a local Massachusetts official said that her early intervention program spans a geographical area that encompasses 13 different LEAs. Also, LEAs sometimes conduct their own evaluations, contributing to the time needed for determining Part B eligibility. State and local officials also reported that early intervention programs often do not get final notification of a child's eligibility for Part B services from the LEA. According to OSEP, this information exchange may not occur for several reasons, including federal laws relating to privacy and the need for parental consent to share results of Part B evaluations. Without access to information on eligibility decisions, early intervention staff do not know whether they need to refer children who are denied Part B services to other follow-on programs, like Head Start. 
State Part C officials are required to report Part B eligibility information to OSEP when reporting why a child stopped receiving services, but LEAs that administer Part B do not always provide this information in a timely manner, if at all. While two of the states we visited are in the process of developing mechanisms for ensuring early intervention staff have access to eligibility information, none are currently in use. OSEP staff acknowledged that states need continued support to ensure Part B officials share eligibility information with early intervention staff. Scientific research suggests that the earlier a child with disabilities gets intervention services, the more effective these services may be in enhancing a child’s development. Before a child enters preschool, states have substantially greater flexibility in determining which infants and toddlers to serve. IDEA gives states the freedom to set different eligibility criteria for early intervention services and decide how they will evaluate children for eligibility. However, it is partly these variations that make it difficult to determine if states are actually meeting the early intervention needs of all their developmentally delayed infants and toddlers. One of the most pressing challenges is transitioning young children with disabilities from services provided under IDEA Part C to Part B preschool or other services at age 3. This transition requires that a sequence of determinations and agreements among multiple stakeholders take place in a timely way. Education reported in its comment that it has preliminary data that suggest that service gaps may occur whenever children transition. In our interviews with state and early intervention officials, we found that transition is perhaps most challenging for children who transition during the summer months. 
If determination of eligibility for Part B is delayed, children can be prevented from receiving necessary services, including those provided through extended school year programs in the summer. Based on our findings, and Education’s preliminary findings from its ongoing study of preschool services, it appears that without additional guidance, some children exiting the Part C program and eligible for Part B preschool may not receive all the services for which they are eligible. In order to assist states in providing a more seamless transition for children with disabilities from IDEA Part C to Part B, or other preschool programs, we are recommending that the Secretary of Education provide states with additional guidance on transition planning and services for children with birthdays during the summer, especially in cases where children are likely to need extended school year services. Additionally, after Education completes and verifies the results from its ongoing studies relating to transitioning, that information should be used to inform the department’s guidance to states on transition planning. We provided a draft of this report to Education for review and comment. Education disagreed with the recommendation we made to incorporate into its research agenda a method for determining how frequently children transitioning from Part C to Part B do not receive services during the summer months, and if gaps in services are found to be a problem, provide states with additional guidance on improving children’s access to extended school year services. Education noted that preliminary and unpublished data from a department study indicate that gaps occur when children are transitioned from Part C to Part B, not only during the summer, but whenever transitions occur. Additionally, Education stated that based on its preliminary data, there is no need to study extended school year services. 
We believe it is critical to provide children with the services they need when they need them. If Part B eligibility is not determined prior to children turning 3 during the summer months, then related decisions, including those about extended school year services, cannot be made. We believe that by providing additional guidance, Education can help states improve transition planning and services and help ensure that children do not experience gaps in services during critical periods of their development. Education also provided technical comments that we incorporated into the report where appropriate. Education’s written comments are reproduced in appendix II. We will send copies of this report to the Secretary of Education, appropriate congressional committees, and others who are interested. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions about this report, please call me at (202) 512-7215. Key contributors are listed in appendix III. The Child Care and Development Block Grant program is a discretionary fund program that, among other things, supports state efforts to provide child care to parents trying to achieve independence from public assistance. Children with Special Health Care Needs refers to a type of program operated by particular states that provides financial assistance or case management for needed medical treatment to children with serious and chronic medical conditions to reduce complications and promote maximum quality of life. Developmental Disabilities Services refers to state programs that serve and support individuals with mental retardation/developmental disabilities and their families, including early intervention services. 
For example, community developmental disability services are supported by state funding in Kansas, which defines community developmental disability services as those designed to meet needs associated with work, living in the community, and individualized supports and services. Head Start and Early Head Start are comprehensive child development programs that serve children from birth to age 5, pregnant women, and their families. These programs are federally funded and locally administered by community-based nonprofit organizations and school systems. Grants are awarded by the Department of Health and Human Services. IDEA Part B, administered by the Department of Education, provides grants to states to provide preschool services to children with disabilities from age 3 to 5. The Maternal and Child Health Services Block Grant program (Title V of the Social Security Act) provides federal grants to states and organizations with the aim of improving the health of mothers and children. Among the many services supported by grants are support programs for children with special health needs, care coordination, transportation, home visiting, and nutrition counseling. Medicaid is health insurance that helps people who cannot afford medical care pay for some or all of their medical bills. Medicaid is jointly funded by the federal and state governments to assist states in furnishing medical assistance to eligible needy persons. The Social Services Block Grant (SSBG) program allocates federal funds to states to support a wide variety of social services programs for adults and children. Temporary Assistance for Needy Families (TANF) is a family assistance block grant from the Department of Health and Human Services to states that can be used to provide monthly cash assistance payments to families as well as to finance services for TANF clients or other low- income people to support their efforts to work. 
Tobacco Funds were awarded to states as part of a settlement agreement with major tobacco companies. Kentucky designated 25 percent of its Phase I settlement to an early childhood initiative that includes First Steps, its early intervention system. Kansas allocated all of its settlement for children’s services. TRICARE is the Department of Defense’s regional managed-care program for delivering health care to members of the armed services and their families, survivors, and retired members and their families. TRICARE operates like health maintenance organization plans offered in the private sector and other similar health insurance programs. In addition to the contact named above, the following individuals made important contributions to this report: Betty Ward-Zukerman, Assistant Director; Ramona Burton, Analyst-in-Charge; Daniele Schiffman, Analyst; Rachael Chamberlin; Sherri Doughty; Avrum Ashery; Jonathan McMurray; Beverly Ross; and Daniel Schwimer.
Part C of the Individuals with Disabilities Education Act (IDEA) was established to ensure that infants and toddlers with disabilities, from birth to age 3, and their families receive appropriate early intervention services. Within the Department of Education (Education), the Office of Special Education Programs (OSEP) is responsible for awarding and monitoring grants to states for Part C according to IDEA requirements. To address questions about how states have implemented IDEA Part C, this report provides information on (1) how Part C programs differ in their eligibility criteria and whom they serve, (2) to what extent states differ in their provision of services and funding, and (3) how Education and state lead agencies help support and oversee efforts to implement Part C, such as identifying children for services and transitioning children to follow-on programs, such as IDEA Part B. Eligibility criteria for Part C services for infants and toddlers with disabilities differ from state to state, but do not consistently explain the percentage of children served, which ranges between 1.3 and 7.1 percent. To determine eligibility, most states measure how much the child is delayed in one or more areas of early childhood development, while a few rely exclusively on a clinical team's judgment. Although IDEA Part C is intended to cover children from birth to age 3, most states provide the majority of their Part C services to children 2 to 3 years old. States have public awareness campaigns to identify more eligible infants and toddlers but cite a number of obstacles, including difficulty reaching children in rural areas or in families where English is a second language. The states we visited provide a similar set of services but vary in funding sources. States are required to make available certain early intervention services under IDEA, such as occupational, physical, and speech therapy. 
However, states report challenges recruiting and retaining professionals, such as speech language pathologists, to provide these services. States rely on various funding sources, but state general revenue funds were generally the largest source of early intervention funding. OSEP and state lead agencies have provided training and technical assistance and used data to monitor implementation of IDEA Part C, but OSEP has lacked some information from local officials needed to determine if children are smoothly transitioning from Part C to Part B. OSEP uses annual reports and performance indicators as part of its effort to monitor compliance with Part C and target technical assistance. For example, data on the percentage of children served help inform OSEP of states' efforts to identify all eligible children. States use similar approaches. Despite these activities, state officials cited challenges transitioning children to Part B services when they turn 3 years old. Education indicated that in preliminary and unpublished data from an ongoing study it had found that gaps occur throughout the year. Officials in the states we visited reported that some children who turn 3 during the summer experience gaps in service. If Part B eligibility is not determined prior to children turning 3 during the summer, then subsequent decisions about whether children should receive extended school year services cannot be made.
The primary purpose of PME is to develop military personnel, throughout their careers, for the intellectual demands of complex contingencies and major conflicts. The military services provide PME at their respective staff and war colleges. Each service educates service members in its core competencies according to service needs. Air Force colleges, for example, primarily teach air and space warfare. Similarly, Army, Navy, and Marine Corps colleges focus on land, maritime, and expeditionary warfare, respectively. DOD depends on the services’ PME institutions to develop personnel with these service-specific skills. However, the JPME program places emphasis on preparing leaders to conduct operations as a coherently joint force in complex operating environments. Following the passage of the Goldwater-Nichols Act (the Act) in 1986, DOD developed JPME as a subset of learning within the PME program, to comply with the “joint” requirements outlined in the Act and subsequent legislation. Currently, JPME is provided at multiple sites across the country, including the services’ staff and war colleges and NDU. Together, PME and JPME prepare service members in successive stages throughout their careers to engage intellectual challenges appropriate to increases in their ranks and responsibilities. See figure 1 for a map of service and joint colleges and universities where JPME is provided. The military services are primarily responsible for overseeing PME at their respective staff and war colleges. As part of their oversight efforts, the military services integrate leader development into their education programs. For example, the Army’s Training and Doctrine Command serves as the executive agent for ensuring leader development is integrated into PME courses at the Army War College and the Army Command and Staff College. In contrast, JPME is overseen by the Joint Staff. 
The Joint Staff is responsible for developing the learning objectives for JPME and for accrediting the service staff and war colleges and the joint institutions to provide JPME coursework. The Joint Staff also has oversight responsibility for NDU. The Chairman of the Joint Chiefs of Staff is statutorily responsible for formulating policies for coordinating the military education and training of members of the armed forces. The Military Education Coordination Council, which consists of representatives from the Joint Staff, the service and joint colleges and universities, and other JPME-accredited institutions, serves as an advisory body to the Joint Staff on joint education issues. The purpose of the council is to address key educational issues of interest to the joint educational community, promote cooperation and collaboration among the colleges and universities certified to grant JPME degrees, and coordinate joint education initiatives. The Joint Staff conducts periodic assessments of the three statutorily mandated levels of officer JPME to ensure that the curricula being taught at service staff and war colleges and the joint institutions meet the prescribed joint educational requirements outlined in the Officer Professional Military Education Policy (CJCSI 1800.01D). The JPME program includes curriculum components that JPME colleges and universities should follow to develop the knowledge, analytical skills, perspectives, and values that are essential for U.S. servicemembers to function effectively in joint, interagency, intergovernmental, and multinational operations. Moreover, Enclosure A of the policy states that senior officer studies at JPME-degree-granting colleges and universities should emphasize analysis, foster critical examination, and provide progressively broader educational experiences. 
Table 1 provides a list of the 20 research institutions that are associated with JPME colleges and universities and are within the scope of our review. Other DOD-funded organizations also conduct studies and analysis research. For example: Federally Funded Research and Development Centers, such as the Center for Naval Analyses and the Institute for Defense Analyses, maintain capabilities to conduct research in core competencies in areas of importance to DOD, such as analysis, acquisition support, and research and development. According to a May 2011 memorandum from the Under Secretary of Defense for Acquisition, Technology and Logistics, the mission of Federally Funded Research and Development Centers is to provide DOD with unique capabilities in many areas where the government cannot attract and retain personnel in sufficient depth and numbers. The memorandum further explains that Federally Funded Research and Development Centers are a vital component of the department’s overall acquisition workforce because they operate in the public interest, free from organizational conflicts of interest. Service-affiliated organizations, such as the Naval Postgraduate School and the Center for Army Analysis, provide research products to their parent services to help with decision making and analysis on critical issues facing the service. DOD’s Regional Centers for Security Studies support DOD’s objective to build the defense capacity of partner nations. In our prior work, we reported that the Regional Centers’ activities include education, exchanges, research, and information sharing. For example, the George C. Marshall European Center for Security Studies conducts research on European security issues relevant to U.S. interests. JPME research institutions receive funding through their colleges and universities and other departmental offices for their operations, which include research activities. 
Specifically, most JPME colleges and universities receive direct funding from their respective military service to fund their PME and JPME programs, and some of those resources are used to fund their JPME research institutions. For example, the Naval War College receives operation and maintenance and military personnel funding from the Department of the Navy as well as funds in the form of monetary gifts from the Naval War College Foundation. In turn, the college allocates some of those resources to fund its associated research institutions, such as the China Maritime Studies Institute. However, NDU receives operation and maintenance funding from defense-wide appropriations for its JPME program and research institutions. With these funds, JPME research institutions can support PME and JPME programs as well as the research needs of those entities that provide their funding. For example, the Air University’s Center for Strategy and Technology produces research that responds to key questions and topics of interest posed by the Chief of Staff of the Air Force. Additionally, some JPME research institutions receive funding on a reimbursable basis from other departmental offices, such as the directorates within the Office of the Secretary of Defense, the Joint Staff, and the military services. These offices receive their own funding, such as research, development, test, and evaluation funds and operation and maintenance funds, which may be used in part to fund annual requirements for research projects. To fulfill these annual research requirements, funding may be allocated to JPME research institutions, Federally Funded Research and Development Centers, or think tanks to conduct individual research projects in support of those offices’ annual research requirements. For example, NDU’s research institutions have received funding from the Office of the Secretary of Defense to study and build subject-matter expertise on issues related to Afghanistan and Pakistan. 
DOD also funds science and technology–related research. According to testimony in April 2013 by the Acting Assistant Secretary of Defense for Defense Research & Engineering, science and technology research is funded to mitigate new or emerging capabilities that could degrade U.S. capabilities, enable new or extended capabilities in existing military systems, and develop new concepts and technologies through science and engineering applications to military problems. Science and technology research is conducted under the auspices of the Office of the Assistant Secretary of Defense for Research and Engineering. According to DOD guidance, this office develops the strategies and supporting plans for utilizing technology to respond to DOD needs and ensures U.S. technological superiority. The office is the executive secretary for DOD’s Research and Engineering Executive Committee. This committee brings together leadership from the DOD components that have science and technology research investments for the purpose of strengthening coordination and enhancing the efficiency of research and engineering investments in areas that cannot be addressed adequately by any single component. Science and technology research comprises basic research, applied research, and advanced technology development. JPME research institutions do not conduct science and technology research. Science and technology research is generally conducted by DOD laboratories associated with the military services, such as the Army Research Laboratory, and some Federally Funded Research and Development Centers, such as the Massachusetts Institute of Technology Lincoln Laboratory, among other organizations. JPME research institutions, particularly at NDU, experienced considerable growth in number, funding, and size in terms of staffing levels from fiscal year 2007 through fiscal year 2011 but have declined over the past 2 years. 
Several factors contributed to JPME research institution growth, including increases in reimbursable funding from outside offices sponsoring JPME research, the creation of new research institutions, and the realignment of institutions at some JPME colleges and universities. While a variety of factors contributed to the expansion of JPME research institutions, it has primarily been department-wide budget reductions that contributed to their decreases in number, funding, and size since 2011. The following sections discuss overall trends in the number of research institutions at JPME colleges and universities from fiscal year 2007 through fiscal year 2013, as well as overall trends in funding and staffing levels for this same period. Appendix II provides more-detailed information for each of the 20 JPME research institutions from fiscal year 2004 through fiscal year 2013, as available. From 2007 through 2011, the number of JPME research institutions grew from 14 to 20. During this period, the number of research institutions at NDU increased by 3. At Marine Corps University and Air University, the number of research institutions increased by 2 and 1, respectively. Since 2011, however, the number of research institutions has slightly declined due to the disestablishment of the Center for Transatlantic Security Studies at NDU in 2012. Figure 2 shows the total number of JPME research institutions for fiscal years 2007 through 2013. Funding for research institutions at JPME colleges and universities experienced growth from fiscal year 2007 through 2011. Specifically, total funding for JPME research institutions increased from about $31.0 million in fiscal year 2007 to about $47.7 million in fiscal year 2011. 
Much of the growth took place at NDU, where research institutions’ total funding increased by about 78 percent. Other JPME colleges and universities also experienced considerable increases in funding for the operation of their associated research institutions. For example, with the establishment of the Middle East Studies institute and the Center for Advanced Operational Culture Learning’s Translational Research Group in 2007 and 2010, respectively, funding for Marine Corps University’s research institutions increased from $156,000 in fiscal year 2007 to about $4.9 million in fiscal year 2011. Since fiscal year 2011, funding for JPME research institutions decreased overall. Specifically, total funding for JPME research institutions fell by about 15 percent from fiscal year 2011 through 2013, from about $47.7 million to about $40.6 million. Much of the decline reflects decreases at NDU, where research institutions experienced a 21 percent decrease in total funding from about $21.4 million in fiscal year 2011 to about $16.8 million in fiscal year 2013. The Army Command and General Staff College’s Combat Studies Institute and the Center for Army Leadership also experienced considerable declines during this period, as total funding for both decreased by about 19 percent. Figure 3 provides total funding for JPME research institutions by JPME college and university for fiscal years 2007 through 2013. Staffing levels at JPME research institutions also increased considerably from fiscal year 2007 through fiscal year 2011. Specifically, staffing levels, in terms of full-time equivalents, increased from 207 in fiscal year 2007 to 384 in fiscal year 2011, about an 86 percent increase. In particular, total staffing levels at NDU’s research institutions increased by about 58 percent during this period while other JPME colleges and universities also experienced growth in staffing levels. 
For example, total staffing levels at Air University’s research institutions increased from 19 to 97. Since 2011, total staffing levels at JPME research institutions decreased from 384 to 310, about a 19 percent decrease. Much of the decrease is the result of a decline in staffing levels at NDU, where research institutions experienced a 31 percent decline during this period. Figure 4 shows staffing levels for JPME research institutions by JPME college and university for fiscal years 2007 through 2013. Several factors contributed to JPME research institution growth from fiscal year 2007 through fiscal year 2011, including increases in reimbursable funding provided by outside offices sponsoring JPME research, the creation of new research institutions, the realignment of institutions such that they were incorporated into JPME colleges and universities, and an increase in resources dedicated to research at some JPME colleges and universities. According to DOD officials, these increases occurred within the context of the then-ongoing operations in Iraq and Afghanistan, for which the Joint Staff and the military services expected the JPME research institutions to provide increased support to the warfighter. In particular, these factors led to an expansion at NDU’s research institutions during this period. For example, NDU’s research budget grew primarily due to increases in reimbursable research funded by outside offices such as the Office of the Secretary of Defense and the Joint Staff. Reimbursable funding provided to NDU’s research institutions increased from about $5.6 million in 2007 to about $14.6 million in fiscal year 2011, as shown in figure 5. Specifically, reimbursable funding for NDU’s Center for Technology and National Security Policy’s research increased from about $3.8 million in fiscal year 2007 to about $6.9 million in fiscal year 2011. 
Additionally, funding for NDU’s Center for the Study of Weapons of Mass Destruction, which is funded entirely on a reimbursable basis, more than doubled from $1.5 million in fiscal year 2007 to $3.1 million in fiscal year 2011. NDU officials attributed the growth in reimbursable funding to the fact that research had evolved into a key area of emphasis for the university. These officials also noted that, as a result, faculty members at the research institutions were encouraged to pursue research directly funded by other DOD entities and other U.S. government agencies. To meet the increased demands for reimbursable research, particularly when in-house expertise did not exist, NDU increased the number of contractor and noncontractor researchers at its research institutions. Officials further noted that because a significant portion of the NDU workforce, including that of its research institutions, is made up of senior-level positions, researcher salaries contributed to the growth in NDU’s research budget. We also found that funding for NDU’s research institutions increased as the result of the transfer of research institutions to NDU as well as the broadening of missions of other research institutions. For example, according to DOD officials, in an effort to better deliver education to DOD and other U.S. government personnel on issues related to ongoing operations in Iraq and Afghanistan, such as irregular warfare, counterinsurgency, and stability and reconstruction operations, the Center for Complex Operations was transferred from the Defense Security Cooperation Agency to NDU in 2009. Furthermore, funding for the Center for the Study of Weapons of Mass Destruction at NDU increased due to DOD’s decision to broaden the Center’s counterproliferation focus government-wide. Moreover, the Chairman of the Joint Chiefs of Staff designated the center as the focal point for weapons of mass destruction education for JPME. 
As a result, the Joint Staff began to fund the center in 2008 to perform that mission. Other JPME colleges and universities also experienced considerable increases in research institution funding and staffing levels from fiscal year 2007 through fiscal year 2011 due to factors such as the creation of new research institutions and the realignment of others within JPME colleges and universities. For instance, according to Air Force documentation and officials, Air University’s increase in staffing levels for its research institutions can largely be attributed to the establishment of the Air Force Research Institute in 2008. According to an Air University official, the establishment of the Air Force Research Institute resulted from the consolidation and realignment of personnel from existing Air Force institutions as well as the creation of 18 new positions. Air Force officials also stated that staffing levels at the Air Force Counterproliferation Center and the Air Force Center for Strategy and Technology increased due to increased research requests from the Air Staff on nuclear and strategic-level research projects, respectively. Marine Corps University’s increase in staffing levels at its three research institutions can be attributed to the establishment of the Translational Research Group in 2010 and increases in staff positions within the History Division and Middle East Studies center. While a variety of factors contributed to the expansion of JPME research institutions, it has primarily been department-wide budget reductions, including the implementation of sequestration in fiscal year 2013, that contributed to their decreases in number, funding, and size. For example, officials stated that decreases in funding for NDU’s research institutions and staffing levels resulted from overall reductions at the university due to declining budgets. 
Furthermore, according to officials, NDU’s budget for its research institutions came under increased scrutiny in 2011 with the issuance of a new mission statement for NDU by the Chairman of the Joint Chiefs of Staff that prioritized research that more-directly supported education over reimbursable research. Moreover, according to officials, the Joint Staff established full-time equivalent caps for both direct and reimbursable funding in 2012. As a result of actions taken to reduce NDU’s budget, NDU’s Center for Transatlantic Security Studies was disestablished in September 2012, and the decision was made in early 2013 to defund the Conflict Records Research Center beginning in fiscal year 2014. Similarly, DOD-wide budget reductions contributed to decreases in the funding and size of other JPME research institutions. For example, Naval War College officials stated that its research institutions absorbed a majority of the college’s budget cuts since fiscal year 2011 because the college prioritized funds to support its principal education mission. Air University has also experienced decreasing budgets and staffing level reductions from fiscal year 2011 through fiscal year 2013. Specifically, we found that the Air Force Research Institute’s total staffing levels decreased from a high of 81 in fiscal year 2011 to 61 in fiscal year 2013 as a result of overall reductions in Air Force civilian personnel. The extent to which DOD can assess the performance of JPME research institutions is limited by the lack of a comprehensive framework to systematically assess their performance in meeting PME and other departmental goals and objectives. With limited exceptions, the JPME colleges and universities, which have broad latitude in overseeing their associated research institutions, have not consistently established measurable goals or objectives linked with metrics to assess the performance of their associated research institutions. 
However, best practices state that achieving results in government requires a comprehensive framework that includes measurable goals and objectives and metrics for assessing progress, consistent with the framework identified in the Government Performance and Results Act. Further, while there are mechanisms in place for overseeing JPME colleges and universities, such as the Joint Staff’s JPME accreditation process, these are focused on the quality of academic programs and not on the research institutions’ performance. There is no DOD-wide guidance that addresses the intended role of the research institutions in supporting PME or other departmental goals or assigns responsibilities for conducting reviews of them, leaving the department without a basis to assess the institutions’ stated mission and actual performance against planned or expected results. This is inconsistent with the Standards for Internal Control in the Federal Government, which state that agencies should conduct reviews by management at the functional or activity level, which in this case would be the JPME research institutions, and compare actual performance to planned or expected results. Clearly establishing linkages between significant activities, their intended role in meeting agency-wide goals and objectives, and assigning oversight responsibilities underpins an agency’s ability to conduct such reviews. According to officials representing the Joint Staff and JPME colleges and universities, DOD has provided JPME colleges and universities with broad latitude in overseeing their associated research institutions. In doing so, the Joint Staff and the military services have not provided guidance to assist the JPME colleges and universities in developing a comprehensive oversight framework for assessing the performance of JPME research institutions. 
As a result, we found that JPME colleges and universities have not consistently established measurable goals and objectives linked with performance metrics to assess the performance of their associated research institutions and therefore are unable to comprehensively assess their performance to determine whether they are furthering JPME or other departmental goals. According to best practices, achieving results in government requires a comprehensive oversight framework that includes measurable goals and objectives, and metrics for assessing progress, consistent with the framework identified in the Government Performance and Results Act. In April 2012, the Joint Staff conducted a management control review of NDU, the purpose of which was to assess the administrative and fiscal control processes that were in place to ensure proper stewardship of NDU's resources. As part of that review, the Joint Staff noted that throughout NDU there appeared to be a fundamental disagreement regarding how its research supported the JPME mission and courses. Accordingly, the Chairman of the Joint Chiefs of Staff provided NDU with a new mission statement emphasizing that its research should support its academic mission. NDU subsequently developed a strategic plan for research that draws on research and teaching faculty to bring together expertise in national security studies. The NDU President approved the strategic plan for research in January 2014. We identified other examples where JPME colleges and universities identified broad goals and objectives for research. However, the linkage between these goals and objectives and JPME research institutions was unclear as the goals and objectives are not specifically assigned to associated JPME research institutions. For example, the Naval War College's strategic plan contains a guiding principle to keep the college's research and scholarly activities relevant to the needs of the Navy and the nation.
Similarly, Marine Corps University’s strategic plan contains a goal related to strengthening professional scholarship and outreach. However, neither of these goals makes reference to the college’s or university’s research institutions. Additionally, JPME colleges and universities, such as Air University and Army War College, have developed lists of research priorities on an annual basis. According to officials, these lists are developed to reflect the priorities of senior leadership within their service and have been used to guide the research activities of JPME students. In 2013, the Army War College completed a strategic review of its academic programs and, as a result of this review, has aligned the development of its Key Strategic Issues List with a specific strategic goal of influencing national security decision-making. However, the Army War College has not clearly linked its strategic issues list with the education goals of the college. Furthermore, we found that JPME colleges and universities have not consistently established metrics to assess the performance of JPME research institutions in meeting PME or other departmental needs. Based on our review, we identified some examples where JPME colleges and universities had established performance metrics for their associated research institutions. For example, Air University established a performance measure for the Air Force Research Institute that includes a count of the requested versus the delivered research studies for senior Air Force staff. Similarly, Marine Corps University established several measures to assess the progress its research institutions have made in achieving desired outcomes. For example, the university established a measure for the History Division intended to assess its responsiveness to research inquiries. 
Officials from JPME colleges and universities, including JPME research institutions, told us that they recognize the need to establish measures for assessing the research institutions' performance. They explained, however, that they have faced difficulties in developing them for research institutions. For example, officials representing JPME colleges and universities stated that it is challenging to compile quantitative data that represent the value or the usefulness of research. Although we recognize that it is difficult to establish performance measures for outcomes that are not readily observable or in some cases systematic, the department does use metrics to assess the performance of other DOD-funded organizations that conduct studies and analysis research. For example, DOD guidance directs organizations that sponsor a Federally Funded Research and Development Center to assess their performance. According to the guidance, sponsoring organizations must develop procedures to annually monitor the value, quality, and responsiveness of their work. For instance, officials we spoke with within the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics stated that their office compiles data on metrics for the RAND National Defense Research Institute—a Federally Funded Research and Development Center—that are primarily based upon quantitative ratings and comments gathered from surveying organizations that contracted for research projects with it. For those research projects that received low ratings or negative comments, the RAND National Defense Research Institute is required to follow up with sponsors to understand what happened and provide a plan for corrective actions. Without a framework that includes measurable goals and objectives linked with metrics, DOD, including its JPME colleges and universities, does not have a systematic basis to comprehensively assess the performance of JPME research institutions.
Best practices state that a framework that consists of measurable goals and objectives linked with metrics for assessing progress would better enable DOD to determine whether JPME research institutions are achieving results. Moreover, it would provide DOD with a sounder basis for making resource determinations to ensure that these research institutions are furthering JPME and other departmental goals and that JPME continues to provide servicemembers with the expertise necessary for their careers. Standards for Internal Control in the Federal Government state that agencies should conduct reviews by management at the functional or activity level and compare actual performance to planned or expected results. Clearly establishing linkages between significant activities, their intended role in helping meet agency-wide goals and objectives, and assigning oversight responsibilities underpins an agency's ability to conduct such reviews. According to these standards, such controls are an integral part of an agency's planning, implementing, reviewing, and accountability for stewardship of government resources and achieving effective results. However, the oversight conducted by the Joint Staff and by external accrediting bodies reviews the quality of JPME academic programs and not the JPME research institutions' performance. For example, the Joint Staff's Process for Accreditation of Joint Education is DOD's primary mechanism of oversight, assessment, and improvement of JPME academic programs. The instruction governing this process lays out seven educational standards common to all PME colleges and universities, including JPME colleges and universities, which the Chairman of the Joint Chiefs of Staff considers essential. Officials from the Joint Staff JPME Division stated that this office conducts reviews of the JPME colleges and universities every 6 years to determine how well their academic programs are meeting these education standards.
Although the Chairman of the Joint Chiefs of Staff is statutorily responsible for overseeing the officer joint education system, the Joint Staff instruction that serves as the primary guidance for JPME-related policy does not address what the intended role of research institutions should be at JPME colleges and universities and it does not assign responsibilities for conducting oversight of their activities. Without clarity as to the intended role of research institutions in support of JPME academic programs or for another purpose, there is no basis by which to compare the research institutions’ respective stated missions as well as actual performance to planned or expected results, as required by Standards for Internal Control. Specifically, the instruction contains standards to evaluate the quality of JPME academic programs, but it does not define the role of JPME research institutions and it contains no educational standards, learning areas, or specific objectives for research that would enable DOD to assess the performance of research institutions at JPME colleges and universities. Further, no organization is assigned specific responsibility for overseeing the performance of the research institutions. Joint Staff officials agreed that their accreditation process is focused specifically on JPME academic programs and pertains only to academic curricula development and quality and the assurance of uniformity in course content across the different JPME colleges and universities. As a result, JPME research institutions are not reviewed as part of the Joint Staff’s accreditation process. Moreover, according to Joint Staff officials, while they are statutorily responsible for overseeing the quality of JPME academic programs, there is no statutory responsibility for the Joint Staff to oversee the performance of JPME research institutions. 
One JPME official with whom we met noted that if the department had goals, objectives, and performance measures for JPME research institutions as part of the instruction governing the Joint Staff accreditation process, it would strengthen the department's oversight process for JPME research institutions. In the absence of DOD-wide guidance that defines the role of research institutions as part of the JPME system and establishes roles and responsibilities for conducting oversight of JPME research institutions, the department and JPME colleges and universities cannot systematically assess the performance of JPME research institutions and whether they are furthering JPME. In addition to the Joint Staff's accreditation process, oversight of JPME academic quality is performed by external accrediting bodies. Accreditation is a means of self-regulation and peer review to ensure agreed upon standards are met. The regional accreditation process is intended to examine academic institutions as a whole. While the accreditation process may review the extent to which research is conducted at a JPME college or university among a number of other activities, these evaluations are not intended to assess the performance of JPME research institutions in meeting JPME and other departmental goals and objectives. JPME colleges and universities, as Master's Degree–granting institutions, are accredited by the following four regional accreditation bodies: Middle States Commission on Higher Education accredits NDU and the Army War College; Higher Learning Commission accredits the Army Command and General Staff College; New England Association of Schools and Colleges accredits the Naval War College; and Southern Association of Colleges and Schools Commission on Colleges accredits Air University, including its Staff and War Colleges, and Marine Corps University, including its Staff and War Colleges.
JPME colleges and universities are subject to the regional accreditation processes every 10 years and these processes are intended to strengthen and sustain the quality and integrity of higher education. For example, according to the Middle States Commission on Higher Education, accreditation by the commission is based on the results of institutional reviews by peers and colleagues and attests to the judgment that the institution has met certain criteria, such as that it has a mission appropriate to higher education; it is guided by well-defined and appropriate goals, including goals for student learning; and it has established conditions and procedures under which its mission and goals can be realized. While the accreditation process reviews the quality at JPME colleges and universities, it does not specifically assess the performance of JPME research institutions. Our review of reports prepared through the regional accreditation process as well as interviews with JPME officials knowledgeable about the accreditation processes confirmed that their reviews are generally focused on the curriculum of JPME academic programs and not the performance of JPME research institutions. Therefore, the accrediting processes also do not provide DOD or JPME colleges and universities with a means for evaluating the performance of JPME research institutions and whether they are furthering JPME and other departmental goals. DOD does not formally coordinate requests for studies and analysis research conducted by JPME research institutions and other DOD-funded research organizations, even though many of these organizations have missions to conduct work in similar topic areas. Our analysis found that multiple organizations, including JPME research institutions and other DOD-funded research organizations, such as Federally Funded Research and Development Centers, have missions to conduct work in similar topic areas. 
However, DOD relies on a variety of separate processes to manage research requests that can be conducted at either JPME research institutions or other DOD-funded research organizations. Specifically, offices within the Office of the Secretary of Defense and the military departments have their own separate internal processes to request such research. Because there is no requirement for them to do so, these offices do not have mechanisms in place to participate in one another's processes, thereby limiting opportunities to share information on DOD-wide priorities and collective research efforts, and to identify any areas of potentially similar research. Although there are notable differences even among the JPME research institutions and other DOD-funded organizations that have missions to conduct work in similar topic areas, we note that, as we concluded in September 2009, organizations involved in similar missions should coordinate to avoid unnecessary duplication of work. Furthermore, results-oriented management practices call for establishing a means to operate across organizational boundaries to enhance and sustain coordination. Although we did not identify specific instances of duplication through our analyses of mission statements and the research project titles of 20 JPME and 14 other DOD-funded research organizations, we identified similarities in their research topic areas. Through our analysis of the mission statements, we identified multiple instances in which several DOD research organizations conduct work in similar topic areas.
For example, we found that 11 JPME research institutions, 5 Federally Funded Research and Development Centers, 3 regional centers, and 2 service-affiliated research organizations have missions to conduct research related to DOD strategy, policy, and doctrine; 5 JPME research institutions, 1 Federally Funded Research and Development Center, and 3 regional centers have missions that include researching civilian-military issues and irregular warfare; and 2 JPME research institutions, 5 Federally Funded Research and Development Centers, and 4 service-affiliated research organizations have missions that include researching technology, acquisition, and systems issues. Our analysis of mission statements also identified instances in which more-limited numbers of DOD research organizations conduct work in similar topic areas. For example, we found that 2 JPME research institutions and 1 service-affiliated research organization have missions that include researching cyber issues; and 2 JPME research institutions and 2 regional centers have missions that include researching issues related to Africa. Figure 6 summarizes the results of our analysis of similarities in research topic areas for the 20 JPME research institutions and 14 other DOD-funded research organizations, according to 23 areas of concentration. A checkmark indicates that a research institution's mission statement identified that category as a topic area in which it conducts research. The similarities among the DOD research organizations are also illustrated in the titles of the research projects conducted by JPME research institutions and other DOD-funded research organizations. By categorizing 2,217 research project titles provided to us for 2012 and 2013 from both JPME research institutions and other DOD-funded research organizations, we found that multiple organizations' project titles were grouped in related topic areas.
For example, project titles from 13 JPME research institutions, 5 Federally Funded Research and Development Centers, and 2 regional centers were related to the Middle East; project titles from 11 JPME research institutions, 5 Federally Funded Research and Development Centers, 2 regional centers, and 1 service-affiliated research organization concerned Asia studies; and project titles from 10 JPME research institutions, 5 Federally Funded Research and Development Centers, 1 regional center, and 1 service-affiliated research organization concerned force structure and operational issues. Our analysis also identified limited instances of similarities of specific research project titles within topic areas. For example, we identified four research project titles that focused on the "Arab Spring" and two research project titles specifically related to China's development-assistance efforts. However, given our objective's focus on research organizations as opposed to projects, we did not review the content of individual research projects and their respective methodologies. As a result, we did not assess the extent to which individual research projects and their findings overlapped or were duplicative with other research projects. Appendix III provides more-detailed results of our analysis of research project titles conducted by JPME research institutions and other DOD-funded research organizations in 2012 and 2013. While there are similarities in the research topic areas of JPME research institutions and other DOD-funded research organizations, DOD officials also identified notable differences among these organizations. One such difference is that some JPME research institutions are required to support the PME mission at their respective colleges and universities, whereas that is not part of the mission of other DOD-funded research organizations.
For example, the mission statement of NDU's Institute for National Strategic Studies, which comprises its research institutions, currently includes advancing the strategic thinking of NDU and the JPME community through research. Also along these lines, Air University's JPME research institutions seek to use research publications to enhance strategic thought within the Air Force and in Air University academic curricula. To carry out their mission to support JPME academic programs, the research institutions engage in efforts not required of other DOD-funded research organizations. For example, officials at Air University noted that the research products developed by the Counterproliferation Center are used to update Air Force PME curriculum. As another example, faculty from Naval War College JPME research institutions teach elective courses in the JPME academic program. Conversely, the mission of the Center for Naval Analyses, a Federally Funded Research and Development Center, is to provide independent, authoritative research, analysis, and technical support to the Navy and other DOD organizations, and this research is not tied to JPME academic programs. We identified two additional factors that differentiate JPME research institutions among themselves and other DOD-funded research organizations. The first pertains to differences among the JPME research institutions as to which office primarily sponsors the work of research institutions. For example, two JPME research institutions have missions to conduct research on China issues—the China Maritime Studies Institute at the Naval War College and the Center for the Study of Chinese Military Affairs at NDU. However, the two institutions conduct research on different aspects of China, reflecting the interests of their primary sponsors.
Specifically, the China Maritime Studies Institute conducts research on Chinese maritime issues primarily for the Navy, while the Center for the Study of Chinese Military Affairs conducts broader research on Chinese strategic-level issues for the Joint Staff and Office of the Secretary of Defense. According to officials from JPME colleges and universities and other DOD-funded research organizations, a second factor that differentiates JPME research institutions among themselves and with other DOD-funded research organizations is the level of technical expertise provided by some research organizations. Specifically, officials explained that Federally Funded Research and Development Centers can produce research with a more scientific and technical focus than that of JPME institutions. For example, while both the Institute for Defense Analyses, a Federally Funded Research and Development Center, and NDU's Center for Technology and National Security Policy have missions related to researching technology, the Institute for Defense Analyses conducts tests and evaluations of technologies, requiring staff to have specialized scientific and technical skills, while the Center for Technology and National Security Policy's research discusses the effect of technology on defense policy. Although multiple organizations, including JPME research institutions and other DOD-funded research organizations, have missions to conduct work in similar topic areas, offices throughout DOD use separate processes to request studies and analysis research. This fragmentation across DOD occurs in the absence of both a DOD requirement to coordinate studies and analysis research requirements among the military departments and other DOD offices, and of mechanisms to facilitate such coordination. In September 2009, we concluded that offices involved in similar missions should coordinate and share relevant information to avoid unnecessary duplication of work.
Furthermore, results-oriented management practices call for establishing a means to operate across organizational boundaries to enhance and sustain coordination. We identified several separate processes used by JPME research institutions or DOD offices to manage requests for studies and analysis research, but DOD has not established formal mechanisms to coordinate requests. JPME research institutions, for example, individually manage their own research activities. According to Joint Staff officials, JPME research activities are not typically coordinated with other departmental offices that request studies and analysis research. At JPME research institutions, researchers have the discretion to determine whether research has been or is being conducted on a given topic. For example, JPME research institution officials told us that while it is not a requirement, they may contact other subject-matter experts to determine whether similar work is being conducted at another JPME research institution. Officials also said researchers may conduct a literature review to understand the existing research on a topic as part of the research process, or they may review completed research projects that are contained in the Defense Technical Information Center database to see whether DOD has funded past studies. However, that database does not contain information on ongoing research efforts, and no other formal mechanism for sharing information on ongoing studies and analysis research activities within DOD was identified. Within the Office of the Secretary of Defense, multiple offices generate requests annually for studies and analysis research, but these research requests are determined based on individual offices' research requirements and are not formally coordinated with other departmental offices. Office of the Secretary of Defense research requests may be fulfilled by contracting with other DOD-funded research organizations or JPME research institutions to conduct the research.
For example, research requests for the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics are managed at the Office of the Secretary of Defense, Studies and Federally Funded Research and Development Center Management office. In doing so, officials with this office explained that they do not formally coordinate with other DOD offices to determine whether similar research requests are being funded by other departmental offices. Separately, the Office of the Under Secretary of Defense for Policy uses a different process to manage requests for studies and analysis research. Specifically, the Office of the Deputy Under Secretary of Defense for Strategy, Plans, and Forces reviews requests from within the Office of the Under Secretary of Defense for Policy on an annual basis. These research requests are not formally coordinated with other Office of the Secretary of Defense offices or other departmental offices, such as service-level studies and analysis offices, to determine whether similar work is being conducted or funded elsewhere. Similarly, the military departments have their own respective internal processes for requesting studies and analysis research, but absent a DOD requirement to do so, these processes are not used to formally coordinate research requests among the military departments or with other DOD offices. In general, these processes are used as a mechanism to coordinate requests for studies and analysis research within each of the military departments. For example, according to a senior Air Force official, the Air Staff's Analyses, Assessments and Lessons Learned directorate is responsible for collecting annual research requests from across the Air Force and for ensuring that the contracted studies are not duplicative. A senior official within this office told us that the Air Force has experienced challenges with regard to its oversight over the number of studies it has funded.
In response, the Air Force has developed a policy to track all Air Force-funded studies in an internal database. However, according to this official, the Analyses, Assessments and Lessons Learned directorate generally does not formally coordinate with offices outside the Air Force on annual research requests. The Army Study Program Management Office within Headquarters, Department of the Army, issues an annual call for research requests from Army commands, and in turn funds the research requests according to Army priorities. A senior Army official in this office said that its process is focused on reviewing Army-specific research requests and does not include other DOD offices that request or conduct studies and analysis research. The Navy's annual research requests are administered through the Office of the Chief of Naval Operations, which compiles and prioritizes research needs identified from across the Navy. The Navy's studies and analysis program guidance says that the Navy should coordinate analytic efforts with the Marine Corps, but according to an official in this office, the Navy generally does not coordinate with other DOD-funded research organizations or JPME research institutions with regard to these annual research requests. DOD officials within the studies and analysis research community observed that there are both costs and benefits to the department's decentralized approach to requesting studies and analysis research. One official told us that limited coordination among the multiple offices that request studies and analysis research may put DOD at risk for funding overlapping research activities. Furthermore, a senior Air Force official in the Air Staff's Analyses, Assessments and Lessons Learned directorate stated that the current approach makes it difficult for DOD to have a complete picture of how much money is being spent on studies and analysis research.
While DOD officials identified costs to the current approach for coordinating studies and analysis research requests, officials also acknowledged that DOD's decentralized approach may result in several benefits. For example, a senior Air Force official in the Air Staff's Analyses, Assessments and Lessons Learned directorate stated that decentralization generates creativity and diversity of thought in DOD's studies and analysis community, which can prove useful in informing DOD decision makers. Furthermore, a senior Navy official in the Office of the Chief of Naval Operations stated that the current approach allows each office to be concerned with its own area of functional expertise, which varies widely across the services and DOD. For example, the Office of the Chief of Naval Operations is responsible for funding studies related to naval functional areas such as designing, building, and maintaining ships, which is unique when compared to other service-level studies and analysis offices such as the Air Force Analyses, Assessments and Lessons Learned office or the Army Study Program Management Office. In contrast to how it manages requests for studies and analysis research, DOD has established mechanisms to coordinate science and technology–specific research efforts across multiple departmental offices engaged in similar missions. Specifically, the science and technology research community has governing bodies, such as executive committees, to facilitate such coordination. These committees are intended to better manage DOD's science and technology research by bringing together the multiple departmental offices that sponsor such research for the purpose of sharing information. A senior official responsible for coordinating science and technology research efforts explained that the executive committees do not require additional resources.
Rather, they are intended to share offices’ existing annual research plans and provide opportunities to leverage resources in a fiscally constrained environment. Some DOD officials we spoke with who are responsible for managing studies and analysis research requests also said that a mechanism that provided greater information on what studies and analysis research other departmental offices were sponsoring would improve their ability to identify potential overlap in research requests. Without a mechanism for coordinating research requests and sharing information on studies and analysis research activities among multiple offices, DOD cannot ensure that it minimizes potentially unnecessary overlap in research activities. Furthermore, making information on department-wide annual research requests available to JPME research institutions would provide the institutions an opportunity to further understand research needs and align some of the institutions’ research with strategic priorities identified by DOD leadership. Given the ongoing and unique role of JPME research institutions in the development of DOD’s future leaders, DOD’s oversight of these institutions is important for helping the department to make the best use of the resources it devotes to the colleges and universities that provide PME and JPME and for decreasing fragmentation of research requests and the risk of potential overlap in research activities. As fiscal pressures facing DOD continue to mount, so too does the need for the department to prioritize resources for JPME research institutions to most-effectively meet the JPME mission. Considering the overall growth of JPME research institutions that occurred between fiscal year 2007 and fiscal year 2011 in number, funding, and size, it is paramount that DOD maintains oversight of these institutions. 
Best practices state that achieving results in government requires a comprehensive oversight framework that includes measurable goals and objectives, and metrics for assessing progress. Yet, with limited exceptions, the research institutions lack such goals, objectives, and associated metrics. Additionally, while DOD has some oversight mechanisms in place for JPME, DOD does not have clear guidance establishing the role of JPME research institutions in furthering PME or other departmental goals that would provide a basis for evaluating their performance and helping ensure that intended results are achieved. Further, no entity within DOD is assigned responsibility for overseeing the performance of JPME research institutions. Consequently, DOD cannot ensure the effectiveness of JPME research institutions and lacks a sound basis for making resource determinations. Furthermore, some JPME research institutions and other DOD-funded research organizations have missions to conduct research in similar topic areas, but DOD uses a variety of separate processes for requesting studies and analysis research. Results-oriented management practices call for establishing a means to operate across organizational boundaries to enhance and sustain coordination. DOD, however, does not have a mechanism in place to coordinate studies and analysis research requests and minimize fragmentation. While DOD officials believe that their current decentralized approach to requesting studies and analysis research has its benefits, they also recognize that it has its costs. It is not clear that the benefits of DOD’s current approach outweigh the risks of fragmentation and potential duplication, particularly in a budget-constrained environment.
DOD’s science and technology research community provides one model for a coordination mechanism, but it is by no means the only mechanism that could meet the needs of the studies and analysis research community as it seeks to support department-wide priorities. Without a mechanism to facilitate coordination and reduce fragmentation among offices requesting studies and analysis research, DOD cannot ensure that it minimizes potential overlap in research activities and that its resources are used efficiently in support of department-wide priorities at its JPME research institutions and other research organizations. To enhance the performance of JPME research institutions, we are recommending that the Secretary of Defense direct the Chairman of the Joint Chiefs of Staff and the Secretaries of the military departments, for their respective PME and JPME colleges and universities, to take the following three actions:

define the role of JPME research institutions to provide a basis for evaluating their performance;

assign responsibilities for conducting performance reviews of JPME research institutions; and

establish a framework that includes measurable goals and objectives linked with metrics to assess the performance of JPME research institutions.

To improve the coordination of requests for studies and analysis research within the department and to reduce the risk of potential overlap in research activities, we recommend that the Secretary of Defense establish and implement a departmental mechanism that requires leadership from the military services and departmental offices responsible for managing requests for studies and analysis research to coordinate their annual research requests and ongoing research efforts. In written comments on a draft of this report, DOD concurred with our recommendations. The full text of DOD’s written comments is reprinted in appendix IV.
In concurring with our first recommendation, DOD noted that some work is already in progress to clarify organizational goals and establish metrics of success at each of the research institutions. DOD stated that, for example, the Joint Staff has been collaborating with NDU to refine its research enterprise. DOD noted that our recommendation is reflected in NDU’s Strategic Plan for Research 2014-2019 and a revision of the Chairman’s policy document for the university. According to the department, our recommendation should be fully implemented when the next academic year begins in the fall of 2014. We agree that these are positive steps toward establishing a comprehensive framework at NDU to systematically assess the performance of its JPME research institutions in meeting PME and other departmental goals and objectives. Notwithstanding this effort, as noted in our report, there remains no DOD-wide guidance that addresses the intended role of research institutions in supporting PME, including JPME, or other departmental goals, or that assigns responsibilities for conducting performance reviews of them. This leaves the department without a sound basis to assess NDU’s and the other research institutions’ stated missions and actual performance against planned or expected results. Clearly linking significant activities to their intended role in meeting agency-wide goals and objectives, and assigning oversight responsibilities for them, underpin DOD’s ability to conduct such reviews.
In its concurrence with our second recommendation, DOD stated that to improve coordination of research requests, it plans to establish a Studies and Analysis Executive Committee by the end of fiscal year 2014 with regional and topical “communities of interest.” DOD noted that the committee will be a combined effort organized through the Office of the Under Secretary of Defense for Policy and the Assistant Secretary of Defense for Research and Engineering, with other representation from the JPME and PME community, as appropriate. DOD also provided technical comments on a draft of our report, which we have incorporated into the report, as appropriate. We are sending copies of this report to the appropriate congressional committees. We are also sending copies to the Secretary of Defense, the Chairman of the Joint Chiefs of Staff, the Secretary of the Air Force, the Secretary of the Army, the Secretary of the Navy, and the Commandant of the Marine Corps. In addition, the report will also be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-5741 or ayersj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. 
The National Defense Authorization Act for Fiscal Year 2013 mandated that we review the work performed by joint professional military education (JPME) research institutions in support of professional military education and the Department of Defense’s (DOD) broader mission. In this report, we (1) describe how JPME research institutions have changed in number, funding, and size, and the factors that contributed to any changes; (2) evaluate the extent to which DOD is assessing the performance of JPME research institutions in meeting professional military education and other departmental goals and objectives; and (3) evaluate the extent to which DOD coordinates research requests for JPME research institutions and other DOD-funded research organizations. For the purposes of this report, we refer to service and joint colleges and universities that are accredited by the Joint Staff to provide JPME certification as JPME colleges and universities. Based on the evidence we gathered, we determined that 20 institutions were conducting research as their primary mission and had dedicated personnel assigned to them, and therefore were included in the scope of our review. For our first objective of determining the extent to which JPME research institutions have changed in number, funding, and size, and the factors contributing to any changes, we obtained questionnaire responses and other documentation on the number of research institutions that existed at JPME colleges and universities from fiscal years 2000 through 2013 and collected and analyzed available funding and staffing data for these years for the JPME research institutions. We assessed the reliability of the funding and staffing data collected by analyzing questionnaire responses from JPME colleges and universities, which included information on their data-system management, data quality-assurance processes, and potential sources of errors and mitigations of those errors.
Based on our review of the data provided and our review of the questionnaire responses, we concluded that the systems used to provide the data, and thus the data they provide, are sufficiently reliable for our audit purposes. However, based on this evidence, we determined that we were unable to report consistent data on JPME research institution funding and staffing levels prior to fiscal year 2007, and therefore we are providing trend data on JPME research institutions from fiscal years 2007 through 2013. Furthermore, although we identified a number of factors that could affect data quality, we concluded that these were the best available data on JPME research institutions. We also concluded that the data would not lead to an incorrect or unintentional message, since they are corroborated through interviews with cognizant officials at the National Defense University, Army War College, Army Command and General Staff College, Air University, Naval War College, and Marine Corps University. We also discussed the reasons for any trends in these budget and staffing data with knowledgeable officials in DOD and at the JPME colleges and universities. Specifically, we conducted interviews with officials from the Joint Staff; Office of the Under Secretary of Defense (Comptroller); National Defense University; Army War College; Army Command and General Staff College; Air University; Naval War College; and Marine Corps University. For our second objective of determining the extent to which DOD is assessing the performance of JPME research institutions in meeting professional military education and other departmental goals and objectives, we obtained and reviewed documentation from the Joint Staff and the JPME colleges and universities that identifies any goals, objectives, or performance measures for JPME research institutions. Specifically, we reviewed current strategic plans, mission statements, and other documentation describing activities of the JPME research institutions.
We also reviewed documentation describing oversight mechanisms that monitor the academic quality of JPME colleges and universities. Specifically, we reviewed Chairman of the Joint Chiefs of Staff Instruction (CJCSI) 1800.01D, Officer Professional Military Education Policy, which is the DOD policy that governs the Joint Staff’s Accreditation of Joint Education process. We also reviewed reports prepared by external regional accrediting bodies. To further our understanding of any processes used to assess the performance of JPME research institutions, we used a standard set of questions to interview officials with the Joint Staff; Army, Navy, Marine Corps, and Air Force; National Defense University; Army War College; Army Command and General Staff College; Air University; Naval War College; and Marine Corps University. We then reviewed the results of the interviews and related documents to develop summary findings. In reviewing this documentation and testimonial evidence, we referred to our prior work on best practices that identifies elements that constitute a comprehensive oversight framework. Specifically, best practices state that such a framework should include measurable goals and objectives linked with metrics for assessing progress, which is consistent with the framework identified in the Government Performance and Results Act (GPRA), as amended by the GPRA Modernization Act of 2010. We also reviewed this evidence in light of key internal control standards, which state that federal agencies should conduct reviews by management at the functional or activity level and compare actual performance to planned or expected results. For our third objective of determining the extent to which DOD coordinates research requests for JPME research institutions and other DOD-funded research organizations, we included the 20 JPME research institutions discussed above and 14 other DOD-funded research organizations.
To identify the other DOD-funded research institutions to include in the scope of our review, we gathered documentation from the Assistant Secretary of Defense for Research and Engineering and canvassed knowledgeable DOD officials in offices responsible for requesting research, such as the military departments, science and technology executive agents, studies and analysis research program managers, and the Office of the Secretary of Defense, including the offices of the Under Secretary of Defense for Acquisition, Technology and Logistics and the Under Secretary of Defense for Policy. Based on this work, we determined that we would include the following 14 other DOD-funded research organizations for the purposes of our review: DOD’s Regional Centers for Security Studies: the George C. Marshall European Center for Security Studies, the Near East South Asia Center for Strategic Studies, the William J. Perry Center for Hemispheric Defense Studies, and the Africa Center for Strategic Studies; the Naval Postgraduate School Modeling, Virtual Environments and Simulation Institute; the Naval Postgraduate School’s Center for Interdisciplinary Remotely Piloted Aircraft Studies; the Naval Postgraduate School’s Cebrowski Institute for Information and Innovation; the Army’s Center for Army Analysis; the U.S. Army Training and Doctrine Command Analysis Center; and Federally Funded Research and Development Centers that were identified as studies and analysis research centers: the Center for Naval Analyses, RAND Project Air Force, the RAND National Defense Research Institute, and the Institute for Defense Analyses. To further our understanding of the processes used to request studies and analysis research from JPME institutions and other DOD-funded research organizations, we reviewed documentation from and interviewed officials at the military services, the Office of the Secretary of Defense, JPME research institutions, and other DOD-funded research organizations.
We also reviewed documentation and interviewed knowledgeable agency officials about DOD’s approach to coordinating research requests among DOD organizations. Specifically, we used a standard set of questions to interview officials with the Office of the Secretary of Defense and the military services about the processes used to share information with other offices that also request studies and analysis research. We then reviewed the results of the interviews and related documents to develop summary findings. We reviewed the documentary and testimonial evidence in light of key practices for enhancing and sustaining coordination as described in best practices. Specifically, best practices state that organizations involved in similar missions should coordinate and share information to avoid unnecessary duplication of work. Further, we assessed whether there were any similarities or dissimilarities among the missions of the JPME research institutions and other DOD-funded research organizations. We did this in four ways: First, we assessed whether the JPME and DOD-funded research institutions were conducting science and technology–related research or studies and analysis–related research, using mission statements and other documentation provided by DOD to make this determination. We did not conduct further analysis on other DOD-funded organizations that conduct science and technology research, as that type of research represented a notable difference from the JPME research institutions that primarily conduct studies and analysis research. Second, for all the JPME and DOD-funded research institutions conducting studies and analysis research, we reviewed mission statements and other mission-related information provided by DOD, and categorized each organization’s mission as falling primarily into 1 or more of 23 areas of concentration—for example, Asia studies or leadership and ethics studies.
To create the 23 areas of concentration that were used to categorize mission statements and research project titles, we reviewed documentation from the JPME research institutions that identified general topic areas in which the institutions conducted research. We also reviewed documentation from Federally Funded Research and Development Centers that identified the core topic areas within which the Federally Funded Research and Development Centers were authorized to conduct work. The areas of concentration are as follows: Africa; Asia; Europe; Middle East; Western Hemisphere; civilian-military issues and irregular warfare; cyber; energy and environment; force structure and operational issues; historical; intelligence; leadership and ethics; legal; logistics; nuclear and weapons of mass destruction; other; personnel and training; public affairs and communication; resource management; strategy, policy, and doctrine; technology, acquisition, and systems; war gaming; and unable to code. We determined that the areas of concentration we selected were appropriate for comparing JPME research institutions and other DOD-funded research institutions because they explain the focus of each organization’s primary studies and analysis efforts. To complete the content analysis, two GAO analysts independently reviewed the mission statements and other mission-related information provided by DOD and coded them into one or more of the 23 areas of concentration. When the coding was completed, each analyst reviewed every code made by the other analyst and indicated whether they agreed or disagreed with the code. The analysts then met to discuss their coding determinations and to reach agreement where there were any discrepancies. The results of our analysis are not generalizable beyond the 20 JPME research institutions and 14 other DOD-funded research institutions included in the scope of our review.
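The dual-coder procedure described above—two analysts code independently, cross-review each other’s codes, and reconcile disagreements—can be sketched in code. This is a hypothetical illustration only: the institution names, mission codes, and comparison logic below are invented for the example (the category labels are drawn from the report’s areas of concentration), and the sketch does not represent any actual GAO tooling.

```python
# Hypothetical sketch of a dual-coder content analysis.
# Category labels come from the report's areas of concentration; the
# institutions, the codes applied, and the comparison step are invented.

AREAS = {"Asia", "leadership and ethics", "cyber", "historical", "war gaming"}

def compare_codes(coder_a, coder_b):
    """Return per-item sets of agreed and disputed area codes."""
    agreed, disputed = {}, {}
    for item in coder_a.keys() | coder_b.keys():
        a, b = coder_a.get(item, set()), coder_b.get(item, set())
        assert a <= AREAS and b <= AREAS  # codes must come from the defined areas
        agreed[item] = a & b              # codes both analysts applied
        disputed[item] = a ^ b            # codes only one analyst applied
    return agreed, disputed

# Two analysts independently code the same mission statements.
coder_a = {"Institution X": {"Asia", "cyber"}, "Institution Y": {"historical"}}
coder_b = {"Institution X": {"Asia"}, "Institution Y": {"historical", "war gaming"}}

agreed, disputed = compare_codes(coder_a, coder_b)
# Disputed codes would then be discussed until the analysts reach agreement.
```

In this sketch, the symmetric difference isolates exactly the codes requiring the reconciliation discussion the report describes.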
Third, for all the JPME and DOD-funded research institutions conducting studies and analysis research, we collected a list of research projects they conducted for fiscal years 2012 and 2013. To complete the content analysis, one GAO analyst reviewed each of the 2,217 research project titles and coded them into one or more of the 23 areas of concentration. When the coding was completed, two other GAO analysts shared responsibility for reviewing the coding made by the first analyst and indicated whether they agreed or disagreed with each code. The analysts then met to discuss their coding determinations and to reach agreement where there were any discrepancies. The results of our analysis are not generalizable beyond the 20 JPME research institutions and 14 other DOD-funded research institutions included in the scope of our review. Fourth, we reviewed documentation about the offices that request research from the JPME and other DOD-funded research institutions, along with testimonial evidence gathered during our interviews with DOD officials, to provide context for any similarities or dissimilarities we identified through the mission statement and project title analysis.
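The review step above records, for each applied code, whether a second analyst agreed or disagreed. The report does not state that a formal agreement statistic was computed, but a simple percent-agreement figure could be derived from such per-code review decisions; the sketch below shows one way, with all data invented for illustration.

```python
# Hypothetical sketch: deriving simple percent agreement from per-code
# review decisions. The decisions below are invented; the report describes
# only agree/disagree reviews, not a formal agreement statistic.

def percent_agreement(decisions):
    """decisions: list of booleans, True where the reviewer agreed with a code."""
    if not decisions:
        raise ValueError("no review decisions supplied")
    return 100.0 * sum(decisions) / len(decisions)

# e.g., a reviewer agreed with 9 of 10 codes applied to a set of project titles
decisions = [True] * 9 + [False]
rate = percent_agreement(decisions)  # 90.0
```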
To further our understanding of DOD’s processes for requesting research and of the similarities and differences among research organizations, we conducted interviews with officials from the Joint Staff; Office of the Under Secretary of Defense (Comptroller); Office of the Under Secretary of Defense for Policy; Office of the Secretary of Defense, Studies and Federally Funded Research and Development Center Management Office; Office of the Assistant Secretary of Defense for Research and Engineering; Army Study Program Management Office; Air Force Analyses, Assessments and Lessons Learned office; Office of the Chief of Naval Operations; Marine Corps Analysis Directorate; National Defense University; Army War College; Army Command and General Staff College; Air University; Naval War College; and Marine Corps University. We conducted this performance audit from February 2013 through March 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This appendix contains more-detailed information for each of the 20 joint professional military education (JPME) research institutions included in the scope of our review. For each research institution, we provide a one-page summary that includes information on the following elements: Location: the institution’s associated JPME college or university and geographical location. Background: the date that the institution was established and information on its establishment, as well as any relevant information such as changes in the research institution’s name. Mission: the institution’s mission, either self-reported or as derived from relevant documentation.
Customers: a list of entities, as reported by the research institutions and other documents, that represent the principal requesters and users of the research institutions’ research products. Nature of research and publications: a summary description of the types of research and studies conducted by the research institution, as well as the names of publications produced by the institution, if any. Total funding: the institution’s total funding, depicted in thousands of dollars, for fiscal years 2004 to 2013, as available. Total funding is subdivided into two categories—direct funding and reimbursable funding. Direct funding includes federal appropriations made available for JPME colleges or universities. Reimbursable funding refers to amounts earned or collected from outside offices for research services furnished by the institution. Total staffing: the institution’s total number of personnel, expressed as full-time equivalents, for fiscal years 2004 through 2013, as available. Full-time equivalents are calculated as the total hours worked divided by the number of compensable hours in a full-time schedule. The Air Force Research Institute augments Air University’s and the Air Force’s research capacity and supports airpower research inquiries from the Chief of Staff of the Air Force, as well as other decision makers throughout DOD. The institute conducts research on topics related to air, space, and cyberspace opportunities, threats, and capabilities; evaluates operational and strategic issues; conducts regional strategic assessments; estimates long-term strategic and technical capabilities; and analyzes logistical constraints and basing issues, among other topics. The institute also serves as the focal point and provides support for Air University’s “Call for Topics,” which makes potential research topics of interest to Air Force leaders available to student researchers.
The institute also operates the Air University Press and publishes the Department of the Air Force’s Air and Space Power Journal and the Strategic Studies Quarterly. Background: Established in 2008 by a Special Order from Headquarters Air Force, the Air Force Research Institute’s research roots extend back to the Airpower Research Institute of the late 1970s and even the 1930s in the Air Corps Tactical School. The organizational functions were previously embedded in the Air War College and later the College of Aerospace Doctrine, Research and Education. Location: Air University, Montgomery, AL Background: The Center for Strategy and Technology was established at the Air University in 1996. The center conducts research looking 20 to 30 years into the future to provide a vision to prepare the Air Force for future challenges. Research conducted under the auspices of the center is briefed to the Air Staff, published as occasional papers, and disseminated to senior military and political officials, think tanks, educational institutions, and other interested parties. Mission: To engage in long-term strategic thinking about technological change and its implications for U.S. national security. The Center for Strategy and Technology focuses on education, research, and publications that support the integration of technology into national strategy and policy. Nature of Research and Publications: The center provides research articles, papers, and monographs addressing issues pertinent to U.S. military-response options for dealing with nuclear, biological, and chemical threats and attacks. Research topics include military and diplomatic policy and concepts related to weapons of mass destruction; international nonproliferation diplomacy; nonproliferation and arms control treaty regimes; counterterrorist activities; and nuclear deterrence of conflicts. The center develops a strategic-issues list that provides potential research topics to student and faculty researchers.
Additionally, it conducts outreach on issues related to counterproliferation and nuclear operations through its publication of the Counterproliferation Center Outreach Journal and the Trinity Site Papers series, and through an annual conference on countering weapons of mass destruction. The Center for Army Leadership conducts research and studies to identify leader development trends and requirements, and to develop and promote leadership and leader development practices and techniques for the Army. Further, the center accomplishes these outcomes by contributing to Army doctrine and policy, informing leadership on best practices for developing leader competencies, and producing the Annual Survey of Army Leadership, a survey-based study that assesses Army-leader attitudes regarding leader education, including the quality of leadership and the contribution of leadership to accomplishing the Army’s overall mission. Background: The Center for Army Leadership was established in 2001 in response to a study chartered by the Chief of Staff of the Army to identify the characteristics and skills required for Army leaders in light of changes to the operational environment. The Combat Studies Institute researches, writes, and publishes, through the Combat Studies Institute Press, original interpretive works on doctrinal and operational issues of relevance to the U.S. Army and policymakers. The institute also implements U.S. Army Training and Doctrine Command’s program of military history instruction throughout the Army; conducts an oral history research program that targets Command and General Staff College students and faculty, as well as visitors of the Combined Arms Center, focusing on compiling their past operational experiences; and provides oversight of the Combined Arms Center Command History program, the Staff Ride team—which offers live and virtual battlefield tours—and the Frontier Army Museum.
Background: The Combat Studies Institute was established in 1979 to provide a range of military historical and educational support to the Combined Arms Center, Training and Doctrine Command, and the United States Army. Mission: To provide military historical and educational support to the Combined Arms Center, Training and Doctrine Command, and the United States Army. Location: Army War College, Carlisle Barracks, PA Background: Established in 1954 by the Commandant of the Army War College to create an advanced study group to undertake a program of long-range thinking on strategy and land power. The Strategic Studies Institute conducts research on topics such as the future of American strategy; geostrategic analyses; strategic landpower; Army forward presence in the Pacific; cyber security; energy security; the Army’s role in missile defense; the effects of war on leadership; and the Army profession and public trust. The institute also compiles a Key Strategic Issues List based on input from the U.S. Army War College faculty, the Army Staff, the Joint Staff, the unified and specified commands, and other Army organizations. This list is designed to guide the research of the Strategic Studies Institute, the U.S. Army War College, and other Army-related strategic analysts. Mission: To conduct and disseminate independent strategic analysis that develops recommendations for addressing key national security issues.

Appendix II: Joint Professional Military Education Research Institutions

Nature of Research and Publications: The Translational Research Group at the Center for Advanced Operational Culture Learning aims to link the findings of scientists with the needs of Marines and Marine Corps leadership by helping the two sides understand each other’s needs and capabilities.
Location: Marine Corps University, Quantico, VA Background: The Translational Research Group was established in 2010 by the Director of the Center for Advanced Operational Culture and Learning at the Marine Corps University and by the Executive Deputy of the Marine Corps Training and Education Command. Mission: To identify practical applications for social and behavioral scientific research that will help address pressing challenges facing the Marine Corps. The History Division’s primary task is to research and write the Marine Corps’ official history. The division provides assistance through its reference branch and by deploying field historians to record history in the making during operations. The division also conducts research through an oral history program, in which it obtains, catalogs, transcribes, and preserves personal narratives and observations of historic value from active-duty and retired Marines for use as reference source material. The division prepares a wide variety of official publications that tell the Marine Corps story. Publications include articles, monographs, occasional papers, and definitive histories. It also creates material for and publishes Fortitudine, an online bulletin of the Marine Corps history program. Background: Established in 1919 by Marine Corps Headquarters to record, preserve, and distribute the Corps’ history, the History Division was transferred in 2005 to Marine Corps University. Mission: To write, document, and track the history of the Marine Corps across the entire spectrum of time; to collect documents and accounts of permanent value to the history of the Marine Corps and preserve them for future use; and to distribute the history of the Corps through publications, papers, and other programs, in order to preserve history, aid combat and noncombat decision making, support professional military education, motivate Marines, and inform the American public.
Nature of Research and Publications: In an effort to improve the Marine Corps’ understanding of the complex security environment of the Middle East, the center began three forms of publications. The Middle East Studies Occasional Paper Series, with the first issue published in June 2011, aims to disseminate original, peer-reviewed research papers on a wide variety of subjects pertaining to the Middle East, Afghanistan, and Pakistan. The MES Monograph Series, with the first issue published in August 2011, focuses on subjects of strategic relevance to the current and future U.S. professional military education community and is meant to be published quickly to address fast-developing situations. Finally, the Middle East Studies institute publishes Insights, which is produced bimonthly as the newsletter of the center. It features short analytical pieces as well as information on events organized by the center, and provides a forum for debate with readers. Nature of Research and Publications: The center collects and analyzes interagency lessons from the field on overseas contingency operations, including stabilization, irregular warfare, and security assistance, and integrates them into joint military doctrine on such topics as counterinsurgency, stability operations, security cooperation, and interagency coordination, as well as into education, policy, training, and joint military/interagency exercises. The center also analyzes interagency aspects of overseas operations on behalf of the Department of Defense, the Intelligence Community, and several federal agencies. The center’s principal journal, PRISM, serves to inform members of U.S. federal agencies, allies, and other partners on complex and integrated national security operations; reconstruction and nation-building; relevant policy and strategy; lessons learned; and developments in training and education.
The center also produces publications on issues of importance to interagency stakeholders and JPME, such as its recent book Convergence: Illicit Networks in the Age of Globalization. Nature of Research and Publications To fulfill its mission, the Conflict Records Research Center was charged with organizing, encouraging, and facilitating greater analytic and academic access to digitized copies of captured documents; coordinating the translation of captured documents of interest; conducting sponsored research and analysis on captured documents; developing and delivering training programs and providing research assistance; informing researchers of the collections in its custody; and publishing research in books, reports, journal articles, conference papers, newsletters, or other media. Location: National Defense University, Washington, DC Background: Established in 2009 at the direction of the Office of the Under Secretary of Defense for Policy as a subelement of the Institute for National Strategic Studies. The center was defunded for fiscal year 2014, but in September 2013, the Office of the Under Secretary of Defense for Policy provided funding to keep the center open. Section 1071 of the National Defense Authorization Act for Fiscal Year 2014 provided statutory authorization for the center. About the Center Location: National Defense University, Washington, DC Background: Established in 1984 by the Secretary of Defense as the Research Directorate of National Defense University’s Institute for National Strategic Studies, the center was originally charged to provide independent advice to the Secretary of Defense, the Chairman of the Joint Chiefs of Staff, and the combatant commands for the formulation of national security policy and strategy. 
The center was renamed in 2010 during the university’s research reorganization. The Center for Strategic Research performs research and educational activities in support of joint professional military education and explores strategic and regional topics to offer advice and strategic support to the Office of the Under Secretary of Defense for Policy, the Joint Staff, and other senior DOD officials. The center also conducts outreach to share its research with policymakers through studies, reports, briefings, and memorandums. Strategic studies encompass national security and military strategy, to include defense policy and organization, deterrence, arms control and counterproliferation, peace operations and small-scale contingencies, transnational security problems, command and control, and future warfare. Regional studies encompass national security strategy, defense policy, defense cooperation, and military strategy issues as they relate to significant countries or geographic areas of the world, such as Asia and the Middle East. The center’s publication product line includes books, Occasional Papers, Strategic Perspectives, Strategic Forum (policy papers), conference papers, and journal articles. Mission: To provide educational support to joint professional military education and advice to the Secretary of Defense, the Chairman of the Joint Chiefs of Staff, and the combatant commands through studies, reports, briefings, and memorandums; to conduct directed research and analysis in the areas of strategic studies and regional studies; and to engage in independent and leading-edge research and analysis in related areas. Location: National Defense University, Washington, DC Background: Established in 2000 as a part of National Defense University’s Institute for National Strategic Studies, pursuant to the National Defense Authorization Act for Fiscal Year 2000. 
The center’s research focuses on documenting China’s expanding international interests, understanding China’s development and employment of new economic, military, and diplomatic capabilities, and analyzing Chinese debates about how these capabilities should be employed to advance national goals. It also explores the implications of these developments for U.S.-China relations and for the U.S. role in Asia. The center also conducts outreach to share its research with policymakers and informs the public debate through books, articles, memorandums, briefings, and conferences. For example, the center cosponsors an annual conference on the People’s Liberation Army with the Council for Advanced Policy Studies (a Taiwanese think tank) and RAND, a nonprofit institution that conducts research and analysis. The center’s publication product line includes books, Occasional Papers, Strategic Perspectives, Strategic Forum (policy papers), conference papers, and journal articles. Mission: To serve as a national focal point and resource center for multidisciplinary research and analytic exchanges on the national goals and strategic posture of the People’s Republic of China and the ability of that nation to develop, field, and deploy an effective military instrument in support of its national strategic objectives. Location: National Defense University, Washington, DC Background: Established in 1994 as the Center for Counterproliferation Research, pursuant to memoranda of understanding among the Assistant Secretary of Defense for Nuclear Security and Counterproliferation, the Director of Strategic Plans and Policy, the Joint Staff, and the President, National Defense University. The center was renamed the Center for the Study of Weapons of Mass Destruction in 2004. The Center for the Study of Weapons of Mass Destruction performs research on the full spectrum of issues related to weapons of mass destruction, engages in educational activities, and collaborates with partners across the government. 
The center conducts directed and self-initiated research of the following types: (1) operational and policy support; (2) traditional academic research; and (3) research undertaken to support the joint professional military education program at the National Defense University. Research topics related to studies in weapons of mass destruction include deterrence; counterproliferation operations; and policy and doctrinal development regarding weapons of mass destruction. The center also conducts outreach to share its research through papers and by planning and participating in various venues, including conferences and dialogues with participants from U.S. and foreign partner entities. For example, the center organizes the annual meeting of the Weapons of Mass Destruction Education Consortium. Mission: To prepare the joint warfighter and select others to address the challenges posed by weapons of mass destruction through education and professional development, scholarship, and outreach and collaboration activities across the full spectrum of issues related to weapons of mass destruction, and to become one of the preeminent institutions in the United States for weapons of mass destruction expertise. Nature of Research and Publications The Center for Technology and National Security Policy conducts research on a reimbursable basis by means of memorandums of understanding with sponsoring organizations on science and technology (chemical/biological defense, human hardiness research, counter–improvised explosive devices, policing, and counterinsurgency); civilian-military integration (transformative innovation for development and emergency support, social media in strategic communication); emerging challenges (anticipatory governance concerning cyber security; climate change; vulnerability to severe space weather); and advanced education initiatives. 
The center’s publication product line includes books, Defense Technology Papers, Defense Horizons (policy papers), conference papers, and journal articles. The Center for Transatlantic Security Studies provided senior Department of Defense and other U.S. government leaders with North Atlantic Treaty Organization (NATO) and transatlantic policy advice, research, and outreach, notably in the run-up to the 2012 NATO Summit and beyond. Specifically, the center conducted research on capabilities studies, the transatlantic bargain and dialogue, NATO–Russia relations, and NATO’s countering of hybrid threats. It also published Transatlantic Currents, CTSS Flash notes, and Transatlantic Perspectives. Background: The Center for Transatlantic Security Studies was formed in 2010 to support the Under Secretary of Defense for Policy in NATO/European-related policy development. The center incorporated the NATO Orientation Program, which had provided training to NATO-assigned officers since at least 1990, as mandated by the Chairman of the Joint Chiefs of Staff. As of October 2012, both the center and the NATO Orientation Program were dissolved. Mission: To be the focal point for national and international collaboration on issues related to transatlantic security, defense policy, and military strategy through research, education, and outreach, and to develop and conduct education and orientation programs for U.S. and allied military officers, government civilians, and international partners on issues relating to NATO and the transatlantic security community. 
Nature of Research and Publications The China Maritime Studies Institute has four primary areas of activity: (1) broad, multidisciplinary research on China’s maritime activity as it relates to its strategic orientation; (2) annual conferences and speaker series; (3) publications, ranging from short assessments and think pieces to monographs and books; and (4) support for U.S. Navy and joint commands. The institute conducts research in areas related to China’s maritime development, including energy, global commerce, law of the sea, maritime technologies, merchant marine, naval development, naval diplomacy, and shipbuilding. Background: The China Maritime Studies Institute was established as a subcomponent of the Strategic Research Department on October 1, 2006, in accordance with a Program Objective Memorandum by the Chief of Naval Operations. Although it is a subcomponent of the Strategic Research Department, it receives dedicated funding on an annual basis for its research activities, including to fund its own researchers. As a result, we have categorized the China Maritime Studies Institute as a separate joint professional military education research institution for the purposes of this report. However, because the institute is a subcomponent of the Strategic Research Department, Naval War College officials stated that it shares budgetary, personnel, and administrative functions with that department. Location: Naval War College, Newport, RI Background: The International Law Department was founded in 1984 and opened its doors in 1986. The International Law Department serves as the Naval War College’s focal point for the study of international and maritime law and oceans policy as they affect U.S. military policy, strategy, and operations. 
As part of its research efforts, the department compiles, edits, and publishes the International Law Studies Series, which provides a forum for prominent legal scholars to publish articles that contribute to the broader understanding of international law. Recently, in response to discussions with the Joint Staff, the department initiated an Information Paper Series. These are short papers that break down legal issues for further consideration by senior military leaders. In addition to the Information Paper Series and International Law Studies, individual staff members of the department engage in independent research and writing. In addition to legal research and scholarly writing, Naval War College officials state that staff members actively support the Naval War College’s core intermediate- and senior-level Navy PME courses as professors, lecturers, and moderators, while hosting several operational-law electives throughout the academic year. Mission: To conduct and disseminate advanced international law research and analysis. The Strategic Research Department’s research projects, including those of the China Maritime Studies Institute, fall into three broad categories: (1) projects assigned by the Navy or another U.S. national security organization; (2) sustained projects that do not depend on year-to-year tasking but rather constitute multiyear, multideliverable, multiclient investments serving long-term U.S. 
national security and Navy interests (these projects focus on strategic regions such as Eurasia, Africa, the greater Middle East, and the Asia-Pacific region, as well as on functional topics such as maritime strategy, cyber conflict, and sea-based ballistic missile defense); and (3) self-sponsored projects conducted in consultation with the leadership of the Center for Naval Warfare Studies (these projects have no specific tasking or set of clients; rather, they address emerging issues that officials believe will garner substantial national or naval attention in the foreseeable future). Background: Formally established in 1987 by the Dean of the Center for Naval Warfare Studies, in consultation with the Chief of Naval Operations. However, the Strategic Research Department dates back to the origins of the Center for Naval Warfare Studies in 1981. Mission: To produce innovative strategic research and analysis for the U.S. Navy, Department of Defense, and the broader national security community. Nature of Research and Publications Each year, the Chief of Naval Operations selects a broad governing theme for the Strategic Studies Group’s research. The 2013 topic is Undersea Dominance out to 2030. The Strategic Studies Group is responsible for keeping the Chief of Naval Operations informed of progress throughout the year and produces a summary briefing and written report of actionable concepts with recommendations that can be executed by the Chief of Naval Operations in the near term. The products, while encompassing long-term views, are designed to help inform the Chief of Naval Operations on near- and mid-term program decisions. This appendix contains the results of our research project title analysis. 
We reviewed 2,217 research project titles from 20 joint professional military education research institutions and 14 other DOD-funded research organizations. These titles are associated with projects conducted in 2012 and 2013. Based on a project’s title, we coded it into one or more areas of concentration. The table below presents the number of research project titles coded into these 23 areas of concentration. The results do not add to 2,217 because some research project titles could be coded into more than one area of concentration and two areas of concentration are not included in the final results. For example, a research project title on North Korea’s nuclear future would be coded into both the “Asia” and “nuclear and weapons of mass destruction” topic areas. In addition to the contact named above, Matthew Ullengren, Assistant Director; Erin Behrmann; Richard Burkard; Gabrielle A. Carrington; Alberto Leff; Marcus Lloyd Oliver; Michael Silver; and Cheryl Weissman made key contributions to this report.
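The multi-label tallying described above, in which a single research project title may be credited to more than one area of concentration so that per-area counts exceed the number of titles, can be sketched as follows. This is a hypothetical illustration of the counting logic only; the function name, data, and area labels are the author's inventions, not GAO's actual coding tool or data.

```python
# Hypothetical sketch of multi-label tallying: each title may be coded into
# more than one area of concentration, so the per-area totals can exceed
# the number of titles reviewed.
from collections import Counter

def tally_areas(coded_titles):
    """coded_titles: list of (title, [areas]) pairs; returns per-area counts."""
    counts = Counter()
    for _title, areas in coded_titles:
        counts.update(set(areas))  # credit each area at most once per title
    return counts

# Illustrative data (not GAO's): the first title is coded into two areas,
# mirroring the North Korea example in the text.
sample = [
    ("North Korea's nuclear future",
     ["Asia", "nuclear and weapons of mass destruction"]),
    ("Maritime strategy in the Pacific", ["Asia"]),
]
tallied = tally_areas(sample)
# Two titles yield three area credits, so the counts do not add to the
# number of titles -- the same reason the report's results do not add to 2,217.
```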
DOD's colleges and universities that provide JPME, including their research institutions, are intended to develop military personnel throughout their careers by broadening them intellectually and fostering collaboration across the military services. JPME research institutions generally provide studies and analysis research that can support academic programs or inform DOD policymakers. The National Defense Authorization Act for FY 2013 mandated that GAO review JPME research institutions. GAO's report (1) describes how JPME research institutions have changed in number, funding, and size; (2) evaluates the extent to which DOD assesses JPME research institution performance; and (3) evaluates the extent to which DOD coordinates the research requests of these and other DOD-funded research organizations. GAO identified and examined the 20 JPME research institutions that conduct research as their primary mission and have dedicated personnel. GAO reviewed DOD documents and interviewed officials on changes at the 20 institutions and how they are overseen, as well as the processes to coordinate their research activities and those of 14 other DOD-funded research organizations that GAO determined conduct research activities. Joint Professional Military Education (JPME) research institutions, particularly at the National Defense University, experienced growth in number, funding, and size in terms of staffing levels from fiscal year (FY) 2007 through FY 2011, but the number of institutions as well as funding and staffing levels declined over the past 2 years. For example, total funding for JPME research institutions increased from $30.8 million in FY 2007 to $47.7 million in FY 2011, but subsequently decreased to $40.6 million in FY 2013. GAO identified several factors that contributed to these institutions' growth, including increases in funding provided by outside organizations for research and the creation of new research institutions. 
Department of Defense (DOD) officials reported that DOD-wide budget reductions, including the effects of sequestration, contributed to decreases in the number, size, and funding for JPME research institutions. The extent to which DOD can assess the performance of JPME research institutions is limited by the lack of a comprehensive framework to systematically assess their performance in meeting professional military education and other departmental goals and objectives. JPME colleges and universities have not consistently established measurable goals or objectives linked with performance metrics for their associated research institutions. Best practices state that achieving results in government requires a framework with measurable goals and objectives and metrics to assess progress. Further, oversight mechanisms for the colleges and universities, such as accreditation processes, focus on the quality of JPME academic programs and not on the research institutions' performance. In addition, there is no DOD-wide guidance that addresses the intended role of the research institutions in supporting JPME or other departmental goals, or that assigns responsibilities for conducting reviews of them. As a result, DOD does not have a basis to assess the institutions' stated missions and actual performance against planned or expected results, as called for by best practices. Without measurable goals and objectives linked with performance metrics, and clear guidance on their intended roles and assignment of oversight responsibilities, DOD cannot ensure JPME research institutions are effectively accomplishing their missions. DOD has not established mechanisms to coordinate requests for research conducted by JPME research institutions and other DOD-funded research organizations because there is no requirement to do so. 
Although many of these organizations have missions to conduct research in similar topic areas, DOD uses a variety of processes to request studies and analysis research. Specifically, offices within the Office of the Secretary of Defense and the military departments each have their own separate internal processes to manage research requests and do not participate in one another's processes. Best practices on managing for results state that organizations involved in similar missions should coordinate and share information to avoid unnecessary duplication of work. At a time of constrained budgets, fragmentation in DOD's approach to managing its research requests across the department exposes DOD to the risk of potential overlap of studies and analysis research. GAO recommends that DOD take actions to define the role of JPME research institutions, assign responsibilities for assessing performance, and establish a mechanism to coordinate studies and analysis research requests. DOD concurred with the recommendations.
In 1994, U.S. consumers spent over $600 billion on food—about $334 billion for consumption at home and $268 billion for consumption outside the home. To regulate the safety of this food, the federal government spends over $1 billion annually, and state governments and industry spend unknown additional amounts. However, foodborne illnesses still occur and are a continuing health and economic concern. The Centers for Disease Control and Prevention (CDC), in the Department of Health and Human Services, estimates that over 6 million illnesses and about 9,000 deaths resulting from foodborne pathogens occur each year. These illnesses and deaths are very costly. For example, the U.S. Department of Agriculture’s (USDA) Food Safety and Inspection Service (FSIS) estimated that nearly 5 million illnesses and about 4,000 deaths were caused by meat and poultry products in 1993, at a cost estimated to be from $4.5 billion to $7.5 billion. Twelve federal agencies implement as many as 35 food safety and related laws. The responsibilities of these agencies are outlined in appendix I. The Food and Drug Administration (FDA), which has primary responsibility for the safety of all foods except meat and poultry, carries out its responsibility through physical inspections of food-processing plants. FSIS, which has primary responsibility for meat and poultry safety, carries out its responsibility largely through organoleptic inspections of meat and poultry—that is, using sight, smell, and touch—to determine the wholesomeness of products at slaughter plants. These carcass-by-carcass inspections date back to the turn of the century. These continuous inspections at slaughter plants, along with FSIS’ daily inspections of processing plants, account for about one-half of the federal government’s expenditures for food safety. In 1992, we reported that this historic approach to food safety is not well suited to preventing the largest current threat to the food safety system—microbiological contamination. We suggested moving to Hazard Analysis and Critical Control Point (HACCP) systems. 
In December 1995, FDA issued a final regulation, effective in December 1997, that requires fish- and seafood-processing plants to establish HACCP systems. FSIS’ proposed HACCP regulation for meat and poultry was published in February 1995. The final regulation is expected in early 1996, and FSIS plans for it to become effective in 1997 and be phased in over a period of years. The National Marine Fisheries Service (NMFS) issued its final regulation for its ongoing, voluntary HACCP-based inspection program for fish and seafood products in July 1992. As of February 1996, 88 plants were being inspected under this program. Plants are charged a fee for the inspections. The overall structure of and approach taken by the federal food safety system is much the same as it was in 1989. FDA and FSIS are still primarily responsible for regulating food safety. Both agencies continue to physically inspect food-processing plants and products to detect food safety hazards. FDA’s inspection frequency continues to be constrained by resources—in 1989, the agency inspected each plant, on average, once every 3 to 5 years. Currently, FDA plans to inspect food-processing plants once every 8 years, on average. FSIS continues to rely primarily on daily organoleptic inspections to detect contamination in meat and poultry. FSIS’ organoleptic methods are not designed to detect microbiological contamination—the most serious threat to human health from meat and poultry. Both agencies continue to conduct some chemical analyses of products to detect chemical contamination. While the overall structure of and approach taken by the federal food safety system have not changed, FDA and FSIS have both experienced some internal reorganizations in recent years. FDA, for example, reorganized its Center for Food Safety and Applied Nutrition along commodity lines—so there is now the Office of Seafood, for example—rather than by scientific discipline, such as microbiology. 
Similarly, during its reorganization, USDA transferred all of its food safety activities to FSIS. For example, USDA transferred to FSIS (1) responsibility for inspecting egg products, from the Agricultural Marketing Service, and (2) responsibility for identifying research needs and coordinating efforts among government, industry, and academia on food safety in animal production, from the Animal and Plant Health Inspection Service. In addition to FDA and FSIS, 10 other agencies have limited food safety responsibilities and have had little or no change in their duties since 1989. Table 1 sets forth the 12 agencies and their responsibilities. While the agencies’ structures and approaches to food safety have remained essentially the same over the last 5 years, new congressional mandates and the growth of the food sector have resulted in increased budgets and greater workloads. For example, in 1990, the Congress enacted food-labeling legislation that requires food companies to provide nutrition information so that consumers can make informed choices. FDA and FSIS were both involved in developing and overseeing these new requirements. In addition, from 1989 through 1994, the food sector grew by about $89 billion (about $50 billion in constant dollars), and there has been a large increase in the number of animals slaughtered. From fiscal year 1989 through fiscal year 1994, the 12 agencies’ budgets increased from $851 million to nearly $1.2 billion, an increase of about $170 million when adjusted for inflation. For FDA and FSIS—the two principal food safety agencies—funds for food safety increased by about 37 percent and about 14 percent, respectively, in constant 1989 dollars. The remaining 10 agencies either lost funding or had small increases. While responsibilities and budgets increased over this period, staffing remained constant at about 17,000 employees. Table 2 gives information on funding and staffing levels for the 12 agencies for fiscal years 1989 and 1994. 
In the face of increased responsibilities and workloads, FDA and FSIS have, respectively, reduced the number of food safety inspections and shortened the length of inspections in each plant. While FDA has had an increase in its number of inspectors, it performed fewer food safety inspection activities than it did in 1989. Although the number of food-processing plants for which FDA has inspection responsibility remains about the same, at 53,000, other activities for which it has responsibility, such as ones involving blood banks and plants that manufacture medical devices, have higher priority than inspecting food-processing plants. To meet its increased responsibilities, FDA reduced the frequency of food safety inspections in its operating plan from about once every 3 to 5 years, on average, in 1989 to about once every 8 years in 1994. As a result, the number of food plants FDA inspected dropped from 6,368 in 1989 to 4,799 in 1994. FSIS’ workload also increased because of the growth in the number of animals being slaughtered. Because its staff has not increased sufficiently to carry out carcass-by-carcass and bird-by-bird inspections under its traditional practices, FSIS has taken a number of steps, including having supervisors conduct slaughter inspections, reducing the amount of time spent on inspecting processing plants, and increasing the number of processing plants that inspectors cover. Table 3 shows the increase in the number of animals slaughtered. Three federal food safety agencies are embracing HACCP programs, which will fundamentally alter the federal food safety system and industry operations for ensuring meat, poultry, and seafood safety. FDA and FSIS have proposed mandatory HACCP systems for meat, poultry, and seafood. NMFS has adopted a voluntary HACCP-based inspection program for seafood. 
In contrast to the current system, these initiatives emphasize the detection and prevention of microbiological contamination by the industry and call for the industry’s increased accountability for food safety. Federal agencies’ inspection roles will also change—in addition to detecting safety hazards, the agencies will oversee the plants’ HACCP systems. Under these HACCP initiatives, industry is responsible for identifying the points where any microbiological, chemical, and physical safety hazard may occur in food production—known as the critical control points—and establishing procedures at those control points to detect and/or prevent such hazards. In addition, plants are required to document their activities, including establishing a record of actions taken to address any safety hazards. FDA’s and FSIS’ current inspection systems concentrate on detecting physical contamination and abnormalities and plants’ compliance with good manufacturing practices and sanitation procedures. While each agency performs some testing for microbiological and chemical contamination, these activities are currently a small part of the overall inspection activities. In contrast, HACCP systems call for plants to employ quality control procedures designed to identify opportunities for preventing all safety hazards, including microbiological contamination—the most serious food safety threat. FDA and FSIS plan to continue their inspection activities. In addition, the agencies will oversee the plants’ HACCP systems to ensure that each plant implements and operates an effective system. The scientific community has specified that in order for HACCP systems to be effective, there must be two components: (1) Each plant in the industry must implement an effective HACCP plan, and (2) federal agencies must inspect each plant’s HACCP-based quality control system to ensure that it is working as designed. 
Furthermore, to ensure the systems’ integrity, the National Academy of Sciences has recommended that the level of federal inspections be based on each plant’s compliance history and on the risk of the product, as determined by the safety hazards at each step of production. However, because of FDA’s resource constraints and FSIS’ regulatory restrictions, the agencies’ ability to inspect plants on the basis of the risk they pose is limited. Specifically, FDA plans to inspect seafood plants once every 2 years, on average, regardless of their compliance history, and plants producing the highest-risk seafood once per year. While individual inspectors may visit noncompliant plants more frequently, other plants will then go uninspected because inspection resources are limited. Because FSIS is required by law to have continuous inspections of slaughtering plants and daily inspections of processing plants, the agency must continue its daily and carcass-by-carcass inspections. To take into account food safety risks in processing plants, FSIS plans to consider risk when scheduling the daily tasks that inspectors will perform. In addition, in slaughtering plants, FSIS plans to initiate pilot projects to explore other ways to perform its mandated carcass-by-carcass inspections with fewer resources. Unlike these other agencies, NMFS bases the frequency of inspections, for seafood plants participating in the voluntary HACCP-based inspection program, on the risk that they present. NMFS determines the riskiness of the plants as indicated by past inspections and the inherent risk associated with the product. As plants achieve and maintain compliance with NMFS’ standards, NMFS reduces the frequency of its inspections. The higher the plant’s NMFS rating for safety, the fewer inspections the plant receives and the lower the cost to the plant, since plants pay for the inspection. 
Plants with the best safety rating are inspected every 6 months, while plants with the lowest safety rating are inspected every 2 weeks. Plants that are not able to maintain compliance with NMFS' program standards are dropped from the program or placed under daily inspection while deficiencies are being corrected. Of the approximately 300 seafood plants that participate in NMFS' voluntary inspection programs, 88 are under the HACCP program. These plants, like all of the approximately 4,800 seafood plants nationwide, are also subject to FDA's inspections. Appendix II provides a comparison of some aspects of NMFS' and FDA's seafood HACCP initiatives. We provided copies of a draft of this report to each of the 12 agencies for its review and comment. Seven of these agencies generally agreed with the information discussed and provided clarifying comments and technical corrections, which we have incorporated into the report. Four agencies did not have any comments. FDA disagreed with our characterization of HACCP systems as a fundamental change for the agency. FDA officials, including the Strategic Manager for HACCP Policy, viewed the changes planned for the seafood inspection program as a continuation of historical efforts by the agency and cited the agency's low-acid canned food program and its issuance of good manufacturing practices as examples. While we recognize that FDA's approach to food safety has evolved over the years, we continue to believe that the move to HACCP represents a significant shift in FDA's policy. We further believe that our characterization of this shift is consistent with FDA's previous characterizations. In particular, in its HACCP rulemaking proposal, FDA stated that it was responding to the need for a "new paradigm" for seafood inspection, one that provides an ongoing, scientifically established system of intensive, preventative monitoring.
We believe that taken in context, the HACCP-related changes being implemented by FSIS, FDA, and NMFS do represent a fundamental shift in the federal government’s approach to food safety. To obtain information on agencies’ responsibilities, funding, staffing, and workloads, we asked the 12 agencies involved in food safety to provide data similar to the 1989 data presented in our two-volume 1990 report. We did not verify the accuracy of these data. We also visited five seafood-processing plants that NMFS had identified to understand and observe how its user-fee, voluntary HACCP-based inspection program worked. In addition, we examined other reports and studies on meat, poultry, and seafood inspection and used prior GAO studies. We interviewed agency and industry officials and obtained additional data from FDA, FSIS, and NMFS concerning HACCP proposals, plans, and operations. We attended public meetings on FSIS’ HACCP proposal. We conducted our work at agencies’ headquarters in the Washington, D.C., area and in NMFS’ Western Inspection Region. We performed our work from July 1995 through March 1996 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Senate Committee on Agriculture, Nutrition, and Forestry, and other appropriate congressional committees. We will also send copies of this report to the Secretaries of Agriculture, Commerce, Health and Human Services, and the Treasury; the Administrator, Environmental Protection Agency; and the Commissioner, Federal Trade Commission. We will also make copies available to others upon request. Please contact me at (202) 512-5138 if you or your staff have any questions. Major contributors to this report are listed in appendix III. Food and Drug Administration (FDA) is responsible for ensuring that domestic and imported food products (except meat, poultry, and processed egg products) are safe, wholesome, and properly labeled. 
The Federal Food, Drug, and Cosmetic Act, as amended, is the major law relating to FDA's food safety and quality activities. The act also authorizes FDA to maintain surveillance of all animal drugs, feeds, and veterinary devices to ensure that drugs and feeds used in animals are safe, are properly labeled, and produce no human health hazards when used in food-producing animals. Food Safety and Inspection Service (FSIS) is responsible for ensuring that meat, poultry, and processed egg products moving in interstate and foreign commerce are safe, wholesome, and correctly marked, labeled, and packaged. FSIS carries out its meat and poultry inspection responsibilities under the Federal Meat Inspection Act, as amended, and the Poultry Products Inspection Act, as amended. Amendments to these acts require that meat inspected by state inspection programs and imported meat meet inspection standards "at least equal to" those of the federal program. Furthermore, the Department of Agriculture Reorganization Act of 1994 transferred to FSIS food safety inspections previously performed by other organizations within the U.S. Department of Agriculture (USDA). Animal and Plant Health Inspection Service (APHIS) is responsible for ensuring the health and care of animals and plants. APHIS has no statutory authority for public health issues unless the concern to public health is also a concern to animal or plant health. To improve food safety, APHIS identifies research and data needs and coordinates research programs designed to protect the animal industry against pathogens or diseases that pose a risk to humans. Grain Inspection, Packers and Stockyards Administration (GIPSA) is responsible for sharing information with FDA concerning food safety and for ensuring the quality of grains for marketing. For example, GIPSA inspects corn, sorghum, and rice for aflatoxin, which causes human illness. GIPSA carries out its responsibilities under the U.S.
Grain Standards Act, as amended, and the Agricultural Marketing Act of 1946, as amended. Agricultural Marketing Service (AMS) is primarily responsible for establishing the standards of quality and condition and for grading the quality of dairy, egg, fruit, meat, poultry, seafood, and vegetable products. As part of this grading process, AMS considers safety factors, such as the cleanliness of the product. AMS carries out its wide array of programs to facilitate marketing under more than 30 statutes—for example, the Agricultural Marketing Agreement Act of 1937, as amended; the Agricultural Marketing Act of 1946, as amended; the Egg Products Inspection Act, as amended; the Export Apple and Pear Act, as amended; and the Export Grape and Plum Act, as amended. Agricultural Research Service (ARS) is responsible for conducting a wide range of research relating to USDA's mission, including food safety research. ARS carries out its programs under the Department of Agriculture Organic Act of 1862; the Research and Marketing Act of 1946, as amended; and the National Agricultural Research, Extension, and Teaching Policy Act of 1977, as amended. National Marine Fisheries Service (NMFS), within the Department of Commerce, conducts its voluntary seafood safety and quality inspection programs under the Agricultural Marketing Act of 1946, as amended, and the Fish and Wildlife Act of 1956, as amended. In addition to the inspection and certification services provided for fishery products for human consumption, NMFS also provides inspection and certification services for animal feeds and pet foods containing a fishery base. Environmental Protection Agency (EPA) is responsible for regulating all pesticide products sold or distributed in the country and setting maximum allowed residue levels—tolerances—for pesticides on food commodities and animal feed.
EPA’s activities are conducted under the Federal Insecticide, Fungicide, and Rodenticide Act, as amended, and the Federal Food, Drug, and Cosmetic Act, as amended. Centers for Disease Control and Prevention (CDC) is charged with protecting the nation’s public health by providing leadership and direction in preventing and controlling diseases and responding to public health emergencies. CDC engages in public health activities related to food safety under the general authority of the Public Health Service Act, as amended. Federal Trade Commission (FTC) enforces the Federal Trade Commission Act, which prohibits unfair or deceptive acts or practices. FTC’s food safety objective is to prevent consumer deception through the misrepresentation of food. U.S. Customs Service (Customs) is responsible for collecting revenues and enforcing various customs and related laws. Customs assists FDA and FSIS in carrying out their regulatory role in food safety. Bureau of Alcohol, Tobacco and Firearms (ATF) is responsible for administering and enforcing laws covering the production (including safety), use, and distribution of alcoholic beverages under the Federal Alcohol Administration Act and the Internal Revenue Code. Critical control point: Any step in a process that, if not properly controlled, may result in an unacceptable safety, wholesomeness, or economic fraud risk. Critical control point: A point in a food process at which control can be applied and a food safety hazard can be prevented, eliminated, or reduced to acceptable levels. Process: One or more actions or operations to harvest, produce, store, handle, distribute, or sell a product or group of similar products. Processing: With respect to fish or fishery products, the handling, storing, preparing into different market forms, packing, labeling, or holding of a product. Firms that wish to participate in the program may apply orally or in writing. 
However, the applicant must submit a written HACCP plan, which must be reviewed and approved prior to validation. Under FDA's rule, every processor shall conduct a hazard analysis to determine whether food safety hazards are reasonably likely to occur and to identify preventive measures. If this analysis reveals one or more such hazards, the processor shall implement a written HACCP plan for each processing location and for each kind of fish and fishery product. Failure to have and implement a HACCP plan that complies with the requirements shall render the products adulterated. A HACCP plan under NMFS' program includes: (1) an organization chart and narrative describing the duties of personnel; (2) a description of fishery products; (3) process flow charts; (4) a critical control point work sheet, including critical points, hazards, preventive measures, critical limits, monitoring procedures, corrective actions, and records; (5) a record-keeping system; (6) verification procedures; (7) sanitation standard operating procedures; (8) a consumer complaint file; and (9) recall procedures. A HACCP plan under FDA's rule includes: (1) a list of the food safety hazards that are reasonably likely to occur and thus must be controlled for each fish and fishery product; (2) a list of the critical control points for each identified hazard; (3) a list of the critical limits that must be met at each of the critical control points; (4) the procedures used to monitor each of the critical control points to ensure compliance with critical limits; (5) any corrective action plans that have been developed to respond to deviations from critical control point limits; (6) a list of the verification procedures and the frequency of verification; and (7) a record-keeping system to document monitoring of critical control points. Under NMFS' program, regional officials will, on a fee basis, review and approve a HACCP plan. When ready for validation, the plan is sent to the National HACCP Coordinator for final review and approval. One or more Consumer Safety Officers and inspectors will perform an on-site validation of the plan.
The validation team will conduct the test after the firm has operated for at least 10 production days. In contrast, FDA's HACCP rule does not mention any requirement for prior FDA review and approval of a firm's HACCP plan; instead, the rule provides for an overall verification that the HACCP plan is being effectively implemented. Under NMFS' program, each facility must employ an NMFS-certified person knowledgeable of the HACCP program's principles to be present during all processing times. Under FDA's rule, functions such as developing a HACCP plan, reassessing and modifying the HACCP plan, and performing the record review shall be performed by an individual who has successfully completed a standardized course of instruction recognized by FDA in the application of HACCP principles in the processing of fish and fishery products at a program of instruction approved by FDA. This trained individual need not be an employee of the processor. Under NMFS' program, different audit schedules exist for participating vessels, processors, and retail and food service firms. The audit schedules are on a sliding frequency scale: as performance improves, the frequency of audits decreases. Audits are unannounced. For processors, the frequency ranges from daily audits in plants that are temporarily out of compliance to audits every 6 months for a high level of proven compliance. The entry level in the HACCP program calls for audits every 2 weeks. FDA, by comparison, plans to review seafood-processing plants about once every 2 years on average. FDA inspections have occurred nearly once per year, on average, for the highest-risk fish and seafood firms and about once every 3 to 4 years, on average, for low-risk firms. Under NMFS' program, all of the plant's records must be maintained by the firm for a period of 6 months beyond the expected shelf life of the product and must be accessible at all times to NMFS' inspection personnel.
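The sliding-scale audit frequencies described above reduce to a simple lookup from a plant's compliance standing to an inspection interval. The sketch below is our own illustration of that rule, not NMFS or FDA terminology; the category names and function are hypothetical, and intervals are approximate days.

```python
# Illustrative lookup for NMFS' sliding-scale audit schedule for processors.
# Category names are our own shorthand for the tiers described in the text.
NMFS_AUDIT_INTERVAL_DAYS = {
    "out_of_compliance": 1,    # daily audits until deficiencies are corrected
    "entry_level": 14,         # program entry: audits every 2 weeks
    "proven_compliance": 182,  # sustained high compliance: audits every 6 months
}

# FDA, by comparison, plans reviews about once every 2 years on average.
FDA_PLANNED_INTERVAL_DAYS = 730

def next_audit_in_days(nmfs_rating: str) -> int:
    """Return the audit interval for a plant's (hypothetical) rating."""
    return NMFS_AUDIT_INTERVAL_DAYS[nmfs_rating]

print(next_audit_in_days("entry_level"))  # 14
```

Because plants pay for NMFS' inspections, moving up this scale also lowers a plant's inspection costs, which is the program's incentive for sustained compliance.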
Records required by FDA's regulations shall be retained at the processing facility or importer's business for at least 1 year after preparation for refrigerated products and 2 years for frozen, shelf-stable, or processed products. All records shall be available for official review and copying by FDA inspectors. NMFS charges fees to recover the costs of administering the HACCP program; fees are collected for preplan consultation, plan review and validation, inspections, and laboratory analysis of samples, with travel and per diem charges added. FDA, in contrast, will not fund its additional HACCP compliance work with user fees.
Major contributors to this report:
Edward M. Zadjura, Assistant Director
John M. Nicholson, Jr., Evaluator-in-Charge
Dennis Richards
Karla Springer
Carol Herrnstadt Shulman
Pursuant to a congressional request, GAO provided information on the federal food safety system, focusing on recent federal initiatives to improve meat, poultry, and seafood safety. GAO found that: (1) the Food and Drug Administration (FDA) and the Food Safety and Inspection Service (FSIS) are primarily responsible for regulating food safety; (2) both agencies inspect meat, poultry, and seafood plants, but are constrained by resource limitations; (3) FDA plans to inspect each food processing plant once every 8 years, or once every 5 years when it can use state inspection resources; (4) Congress has increased the mandates of both agencies since 1989, including requiring FDA and FSIS to help develop and oversee new food labeling requirements; (5) while the agencies' budgets have increased, their staffing has remained constant; (6) FDA, FSIS, and the National Marine Fisheries Service, which maintains a voluntary seafood inspection program, are implementing hazard analysis and critical control point (HACCP) programs, which emphasize the detection and prevention of microbial contamination and increase the role of industry in ensuring food safety; and (7) HACCP initiatives represent a fundamental shift in the government's approach to ensuring food safety.
Under the Mining Law of 1872 (30 U.S.C. 22 et seq.), United States citizens and businesses may freely prospect for hard rock minerals—such as gold, silver, lead, and copper—on most federal lands not specifically closed to mining. Although all mining claims must be filed with the Bureau of Land Management (BLM), each agency is responsible for the surface management of mining activities that take place on lands it manages. When mining operators or other responsible parties have previously failed to reclaim areas where mining operations have taken place on federal lands and are currently economically unable to do so, the burden of cleaning up these properties may fall upon the taxpayers. Regulations promulgated by BLM and the Forest Service in 1980 and 1974, respectively, require that once mining activities are completed, the mine operators must reclaim all areas disturbed by their operations as soon as possible. Furthermore, according to the Department of the Interior and the Forest Service, even before these regulations were promulgated, the operators were responsible for cleaning up their sites under state laws requiring the reclamation of such sites and under laws prohibiting the creation of nuisances. Mining operations that were ongoing when BLM’s and the Forest Service’s regulations were promulgated were allowed to continue, but they had to be brought into compliance with each agency’s surface management regulations. According to Department of the Interior and Forest Service officials, the Comprehensive Environmental Response, Compensation, and Liability Act (42 U.S.C. 9601 et seq.) imposes liability on mining operators for cleaning up abandoned mining operations that release hazardous substances on federal lands. National Park Service and Fish and Wildlife Service (FWS) lands have generally been withdrawn from mineral exploration. However, there are abandoned hard rock mine sites on these lands. 
Some are sites that preexisted the establishment or expansion of a park or wildlife refuge, and some are sites whose operators had valid existing rights when the lands were withdrawn from mining but have not reclaimed the sites. In addition to the land-managing agencies, two other agencies within the Department of the Interior—the Bureau of Mines and U.S. Geological Survey (USGS)—have addressed the issue of abandoned hard rock mines. The Bureau of Mines is concerned with mineral production and environmental remediation technologies. USGS assesses mineral resources and mining-related environmental problems. Attempts to determine how many hard rock mines lie abandoned nationwide have not resulted in a definitive inventory of these mines on federal lands. The four major land-managing agencies are in various stages of inventorying the abandoned mines on the lands they manage. Other organizations, such as the Bureau of Mines and the Mineral Policy Center, have also attempted to estimate the number of sites. However, because these sites are defined and counted differently, the individual results cannot be meaningfully combined or compared. BLM, which began an inventory in 1994, has made no overall estimate of the number of abandoned mine sites. BLM's Nevada and Utah state offices are piloting the agency's inventory approach, and several other state offices, including Colorado and Montana, have also begun field inventories, with the following results: The Nevada state office estimates 400,000 mine openings, structures, and other individual components of mining operations statewide, regardless of who owns the land. As the inventory progresses, it will differentiate between federal and other lands. The Utah state office is working with the state of Utah to inventory sites. On the basis of information from the state of Utah's Abandoned Mine Reclamation Program and some fieldwork, the estimated number of sites in Utah is 17,000 to 20,000 on public and private lands.
The Colorado state office, which expects to complete its portion of the inventory in 1996, is identifying a smaller number of sites on federal lands than it expected. While officials initially expected to find as many as 15,000 sites on federal lands in Colorado, field staff have found that few of the mines are actually located on BLM-managed lands. The Montana state office, working in cooperation with the Montana Bureau of Mines and Geology, has identified about 1,000 sites on BLM-managed lands in that state. The National Park Service, in an effort begun in 1984, has counted the number of abandoned hard rock mines in almost all of its units—99 percent, according to officials—except for some in Alaska and the 3.1 million acres over which it acquired jurisdiction as a result of the California Desert Protection Act of 1994 (P.L. 103-433). The agency has tallied 2,500 sites, but the field personnel responsible for the inventory defined sites in different ways. Although the National Park Service defines a “site” as a “particular operation . . . or area where mining occurred, which may . . . multiple ‘openings,’ i.e., shafts, adits, inclines, pits, prospects, etc.,” the agency’s units defined sites in different ways, according to officials. For example, one unit defined a site as a grouping of mining-related features; others designated individual features, such as a single mine opening, as one site. According to FWS, the agency’s wildlife refuges contain approximately 240 abandoned hard rock mine sites. FWS obtained this information on the number of sites by reviewing its mining files and requesting confirmation from its field offices. FWS does not consider abandoned hard rock mines a major problem on its refuges. According to the Forest Service, there are about 25,000 sites within National Forest boundaries. The Forest Service identified these sites using aerial photography and fieldwork, and the data were compiled by the U.S. 
Department of Agriculture’s Office of the Inspector General through a questionnaire. The Forest Service is attempting to more precisely screen the sites in individual forests and expects to complete this effort in 1997. The Bureau of Mines and USGS have estimated the total number of abandoned hard rock mines on federal lands. However, these estimates cannot be meaningfully compared with any of the other estimates because they vary in scope and in the types of data used for the estimates. The Bureau of Mines estimates that there are 15,300 sites on the lands managed by the agencies within the Department of the Interior and 12,500 sites on the lands managed by the Forest Service. These estimates are based on information in the Minerals Availability System/Minerals Industry Location System, a computerized database containing information about the location of and past activities at over 200,000 mineral deposits. However, these data were collected for purposes other than inventorying abandoned mines, and although they identify areas where mining occurred, they do not account for all mine sites and features. As a result, according to a National Park Service report and BLM officials, these data require further confirmation to ensure their accuracy. Although USGS has not independently inventoried abandoned mine sites, it compiled data from the land-managing agencies in response to a congressional request for information about sites containing hazardous materials on the lands managed by the Department of the Interior. Using the assumption that all abandoned hard rock mines are potentially contaminated, USGS estimated that, as of July 1994, there were approximately 88,000 sites on the lands managed by agencies within the Department of the Interior. USGS obtained these data from the agencies, with the exception of BLM. 
For the lands managed by that agency, USGS made estimates from data included in a 1991 report by the Western Governors’ Association entitled Inactive and Abandoned Noncoal Mines—A Scoping Study. The 1991 report of the Western Governors’ Association reported data obtained from 33 states on abandoned and inactive hard rock mines. However, the report cautioned that “The findings presented are not comparable among states because of variability in the definitions . . . used by states, and variability in the type and quality of data available to states. Neither the number of sites, nor the cost of remediation, reported by individual states can be totalled to present a consistent national total.” The Western Governors’ Association, in an effort funded by the Bureau of Mines, is working with state and federal agencies and private organizations to recommend consistent terminology and guidelines that would aid in future inventories. In a June 1993 report, the Mineral Policy Center estimated that there were about 560,000 mine sites on public and private lands. This estimate was also based upon data reported by the Western Governors’ Association, supplemented with interviews and documents from state officials and discussions with private contractors and consultants. The problems posed by abandoned hard rock mines can generally be classified as physical safety hazards or environmental degradation. Physical safety hazards, which can lead to human injury or death, may include concealed shafts or pits, unsafe structures, and explosives. Conditions causing environmental degradation may include drainage of toxic or acidic water, which could result in soil and groundwater contamination or biological impacts. However, because not all of the agencies have completed their inventories, they have not conducted the necessary fieldwork to identify how many mine sites with problems of each type are on the lands they manage. 
Furthermore, the factors the agencies use to classify their inventories are not consistent from agency to agency. According to BLM’s guidance on the inventory, as sites are identified they should be placed in categories according to the presence or potential for safety or environmental hazards, as well as reclamation needs. BLM also has a basic ranking system, but the agency has not yet compared the rankings across state or field offices. BLM’s inventory in Nevada found extensive safety hazards and confirmed that most of the chemically hazardous sites are already known. The current focus of BLM’s Montana state office is on approximately 100 sites that are affecting water quality. The National Park Service classifies sites according to the type and degree of hazard they present. Each site that will require reclamation is ranked on the basis of its (1) degree of hazard, (2) degree of impact on the environment, and (3) accessibility. The weight applied to these criteria is flexible and varies according to the relevant program’s emphasis. According to the National Park Service’s Associate Director for Natural Resources, Stewardship, and Science, the agency has a basic knowledge of the hazards at every identified abandoned mine site. The 2,500 identified sites include nearly 7,700 hazardous openings, and the National Park Service estimates that 5 to 10 percent of all the sites pose an environmental threat, such as the impairment of water quality. FWS program officials say that there are no known hazardous sites with abandoned mines on wildlife refuges. FWS has not categorized its sites any further. The Forest Service is classifying its sites according to the existing and potential environmental degradation, identifying sites according to whether they may degrade water quality or other natural resources or contain hazardous materials. 
According to a March 1993 report by the Forest Service, over 1,500 western mining sites with significant problems of acid drainage have been identified on the lands in the National Forest System. A hazardous material specialist with the Forest Service said that approximately 10 percent of the abandoned mine sites on the lands managed by that agency have a high potential to be hazardous waste sites. The Bureau of Mines and USGS have both focused on environmental effects in classifying the sites. However, their sources of data are different, and the data were compiled for different purposes. Both agencies are working with an interdepartmental task force, in which the four land-managing agencies are also involved, that has proposed addressing the effects of abandoned hard rock mines throughout watersheds, rather than site by site. The Bureau of Mines used data based on the mines’ past production. On the basis of a study of sites in one national forest, the Bureau of Mines has suggested that approximately 2 percent of abandoned hard rock sites might need detailed assessments; a smaller number would need environmental remediation. USGS collected data from individual agencies, which, as noted earlier, may have different methods and strategies for classifying sites. The Western Governors’ Association and the Mineral Policy Center also attempted to categorize abandoned hard rock mine sites according to their hazards. However, as with the inventory estimates, they reported the data differently. The states provided data for the report by the Western Governors’ Association on the types of hazards associated with abandoned hard rock mines, but they did not all report in the same way. For example, Montana reported the numbers of sites, disturbed acres, mine openings, acres of mine dumps, mill sites, smelters, miles of polluted water, and hazardous structures. In contrast, Nevada reported the number of sites, disturbed acres, and mine openings, without the additional detail. 
In its June 1993 report, the Mineral Policy Center classified all abandoned hard rock mine sites into six types, ranging from “benign” to “Superfund.” This classification was based on information in the report of the Western Governors’ Association and on follow-up with the states and the Environmental Protection Agency. Specifically, the Mineral Policy Center classified the sites as follows: 194,500 were benign, needing little if any remediation; 231,900 needed revegetation or landscaping; 116,300 presented safety hazards needing prompt but not necessarily extensive action; 14,400 needed extensive work to prevent surface water contamination; 500 needed complex work to prevent groundwater contamination; and 50 were Superfund sites, posing a severe threat to the public and needing complex cleanup. No nationwide cost estimate for reclaiming abandoned hard rock mines on federal lands is available. Preparing accurate estimates of the reclamation costs requires detailed assessments, or characterizations, of the sites, involving physical inspection and in-depth evaluation of the problems at each site. These studies are costly because the estimates can involve complex hydrology and chemistry of soil and water. Historic preservation and protection of endangered species can also affect reclamation costs. The agencies have completed a few such detailed site analyses. An estimate of the total cost to reclaim BLM lands is not available because the agency’s inventory is not yet complete. However, according to BLM geologists, (1) costs will vary among the states depending upon the type of reclamation required and (2) the costs to clean up environmental damage are much higher than the costs to alleviate physical safety hazards. For example, the costs will be different in Colorado and Montana, where BLM officials are concerned about how the sites are affecting water quality, than in a more arid state such as Nevada. 
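The six category counts in the Mineral Policy Center's classification reconcile with its overall estimate, cited earlier, of about 560,000 sites on public and private lands; a quick arithmetic check (the short category labels are our own shorthand for the report's descriptions):

```python
# Category counts from the Mineral Policy Center's June 1993 classification.
categories = {
    "benign": 194_500,
    "revegetation or landscaping needed": 231_900,
    "safety hazards": 116_300,
    "surface water contamination": 14_400,
    "groundwater contamination": 500,
    "Superfund": 50,
}

total = sum(categories.values())
print(total)  # 557650, within rounding of the ~560,000 sites estimated overall
```

The check also shows how heavily the estimate is weighted toward the low-severity end: the two mildest categories account for over three-quarters of all sites, while the costly water-contamination and Superfund categories together make up fewer than 15,000.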
In Nevada, where water quality is less likely to be affected, BLM officials are focusing more on public safety because of the proximity of abandoned mine sites to population centers. The National Park Service estimates that the cost to reclaim the abandoned mine sites on the lands it currently manages will total about $165 million. These costs include about $40 million for short-term, or urgent, needs. However, these estimates do not include all the National Park Service's lands in Alaska or the 3.1 million acres over which it recently acquired jurisdiction in the California desert. The estimates are based on the National Park Service's experience in reclaiming abandoned mine sites and mitigating their effects. Although FWS has not estimated reclamation costs, agency officials said that the small number of abandoned mines at most of the refuges is not considered a significant problem, and the mines are not known to be hazardous. The Forest Service estimates the total cost to reclaim the abandoned mine sites on the federal and private lands within National Forest boundaries to be about $4.7 billion. This estimate includes $2.5 billion to clean up approximately 2,500 sites with hazardous waste and restore the natural resources at these sites, and an additional $2.2 billion to restore water quality and address safety problems at the remaining 22,500 sites. The Forest Service still needs to complete preliminary site investigations to rank the sites for more detailed analysis, officials said. These detailed site assessments will give the Forest Service the information it needs to prepare more accurate cost estimates. The Bureau of Mines estimated the "worst-case" cost of reclaiming abandoned mine sites on federal lands at between $4 billion and $35.3 billion. However, this estimate was based upon the assumption that as many as 10,450 sites would require reclamation, while Bureau of Mines officials expect the actual number of sites that would be reclaimed to be far smaller.
USGS has not estimated reclamation costs. In a September 1991 report, the Department of the Interior’s Office of Inspector General estimated that it would cost about $11 billion to reclaim the “known universe” of all abandoned noncoal mine sites (not just those on federally managed lands). This estimate was based upon the Bureau of Mines’ estimate of the extent of damage rather than on the number and type of abandoned hard rock mine sites. The report did not include an estimate of the number of sites, nor did it classify the sites by the type of hazard they present. In most cases, the states reporting to the Western Governors’ Association estimated the cost of reclaiming sites. However, not all the states reported such estimates, and those that did so reported statewide estimates without regard to whether the lands were publicly or privately owned. The Mineral Policy Center has projected the total cost of cleaning up all abandoned hard rock mines (not just those on federal lands) to be from $33 billion to $72 billion. This estimate was based on data contained in the Western Governors’ Association’s report and on follow-up discussions with the participating states and with the Environmental Protection Agency. We requested comments on a draft of this report from the Secretary of the Interior and the Chief of the Forest Service or their designees. We met with and obtained comments from officials from the Department of the Interior’s Office of the Solicitor, BLM, National Park Service, FWS, USGS, and Office of Policy Analysis and with officials from the U.S. Department of Agriculture’s Forest Service and Office of General Counsel. These officials generally agreed with the factual information presented in this report. Officials from several of the agencies provided technical clarifications, which we have incorporated as appropriate. Officials from the Department of the Interior asked that we recognize their concern that a comprehensive inventory could be mandated. 
According to these officials, such an inventory would be costly and take efforts away from remediation. In this regard, officials from Interior’s agencies noted that the interagency approach of targeting remediation throughout a watershed towards those water bodies impaired by drainage from the abandoned mines would be more cost-effective and worthwhile than a comprehensive inventory of individual mine sites on federal lands. Interior and Forest Service officials noted that environmental problems on federal lands often result from abandoned hard rock mines on private lands located within those federally managed lands. Because the purpose of our report was to provide information on the number of abandoned mines on federal lands, the hazards these mines pose, and the cost to reclaim them, we did not evaluate the agencies’ specific approaches to inventorying or remediating these mine sites, nor did we address other issues affecting federal lands. In conducting our review, we examined relevant reports and other documents prepared by the four principal land-managing agencies we reviewed within the departments of the Interior and Agriculture. We also interviewed program managers from these organizations in Washington, D.C., and in regional, state, and local offices, as appropriate. In addition, we reviewed reports by Interior’s Office of Inspector General, the Western Governors’ Association, and the Mineral Policy Center. A full description of our scope and methodology is included in appendix II. We conducted our review from May 1995 through January 1996 in accordance with generally accepted government auditing standards. As requested, unless you publicly announce its contents earlier, we plan no further distribution of this report until 7 days after the date of this letter. At that time, we will send copies to appropriate congressional committees and federal agencies and to other interested parties. We will also make copies available to others on request. 
Please call me at (202) 512-3841 if you or your staff have any questions about this report. Major contributors to this report are listed in appendix III. [Summary table omitted: it listed the factors the agencies use to classify sites (degree of hazard, degree of environmental impact, accessibility), the National Park Service’s cost estimate of $165 million (about $40 million short-term), and the Mineral Policy Center’s six categories ranging from “benign” to “Superfund.”] The Ranking Minority Member, House Committee on Resources, asked us to report on the (1) approximate number of abandoned hard rock mines on federally managed land, (2) types of hazards these mines pose, and (3) approximate cost to reclaim these mines. To determine the approximate number of such mines on federally managed lands, we obtained the available inventory information from program managers in the Department of the Interior and the U.S. Department of Agriculture. We focused on the Department of the Interior’s Bureau of Land Management (BLM), National Park Service, and Fish and Wildlife Service (FWS) and on the U.S. Department of Agriculture’s Forest Service because they manage 623 million acres, or about 95 percent of the federal lands in the United States. To ascertain the types of hazards these abandoned mines pose, we reviewed the agencies’ documents and interviewed program managers in the two departments. To obtain estimates of the costs to reclaim these mines, we interviewed program managers and obtained any estimates that had already been prepared by the agencies in both departments. We also interviewed officials from the U.S. Department of Agriculture’s Office of the Inspector General. We reviewed relevant documents and interviewed program managers in the departments of the Interior and Agriculture. At the Department of the Interior, we met with officials from the three key land-managing agencies: BLM, the National Park Service, and FWS. We also met with program officials from the Bureau of Mines and the U.S. Geological Survey (USGS), and the Office of Inspector General. At the U.S.
Department of Agriculture, we met with program officials from the Forest Service and Office of the Inspector General. We also interviewed representatives of the Western Governors’ Association and the Mineral Policy Center, and reviewed their reports. We did not evaluate the agencies’ or other organizations’ inventory or cost-estimation methodologies. In addition, we reviewed three audit reports issued by the Department of the Interior’s Inspector General. At the time of our review, the U.S. Department of Agriculture’s Inspector General was validating the Forest Service’s inventory of abandoned hard rock mines. The Inspector General’s report had not been issued at the time of this report. Sue E. Naiberk, Assistant Director David E. Flores, Evaluator-in-Charge Jennifer L. Duncan, Senior Evaluator
Pursuant to a congressional request, GAO provided information on abandoned hard rock mines on federal lands, focusing on the: (1) approximate number of such mines; (2) types of hazards the mines pose; and (3) approximate cost to reclaim the mines. GAO found that: (1) the four major federal land-managing agencies are each taking inventory of the abandoned mines on the lands they manage, but because the agencies do not use consistent methodologies to develop their estimates, there is no definitive inventory available; (2) the Forest Service has estimated the number of abandoned mines on federal lands to be up to 25,000 sites; (3) nonfederal entities are also working to standardize terminology and guidelines to aid in future inventories; (4) abandoned hard rock mines can pose physical safety hazards, cause environmental degradation, and contaminate water; (5) the agencies use different factors to classify their sites for risk, and only two of the four agencies rank the severity of hazards; (6) nonfederal organizations have determined that 194,500 sites were generally safe, while 231,900 needed landscaping, 116,300 presented minor safety hazards, 14,900 could cause water contamination, and 50 threatened public safety and required complex cleanup; (7) the agencies have not completed the fieldwork needed to identify the number and types of problems on their sites; and (8) the Bureau of Mines believes that worst-case scenario costs could range between $4 billion and $35.3 billion and nonfederal organizations estimate that costs could exceed $70 billion, but no comprehensive cost estimate for reclaiming abandoned hard rock mines on federal lands exists.
FFATA, as amended by the Government Funding Transparency Act of 2008, gave OMB responsibility for ensuring the existence and operation of a website that captures specific information on federal awards (e.g., contracts, loans, and grants). The website is to promote transparency in government spending by providing the public with the ability to track where and how federal funds are spent. Agencies were required to report specific information on awards made in fiscal year 2007 and later, along with other relevant information specified by OMB, and awards were to be added to the site within 30 days after the award was made. Further, agency financial award reporting is limited by several exemptions provided by FFATA and OMB guidance, including exemptions for reporting of transactions made to individuals, those under $25,000, and any containing classified information. Table 1 lists and describes the required data elements. The information displayed on USASpending.gov is derived from several sources: Contract data are imported from the Federal Procurement Data System-Next Generation (FPDS-NG), which collects information on contract actions. GSA, with guidance from OMB’s Office of Federal Procurement Policy, established and administers this system. Since 1980, FPDS-NG and its predecessor have been the primary contracting databases used government-wide. Federal agencies are responsible for ensuring that the information reported in this database is complete and accurate. Additionally, FPDS-NG pre-populates core vendor data using the System for Award Management (SAM). SAM is the primary database for information on potential government business partners in which those wishing to do business with the federal government must register. Data on financial assistance awards (e.g., grants) are received from reports submitted by agencies in a file format called Federal Assistance Award Data System PLUS (FAADS+).
To report information on financial assistance awards to USASpending.gov, OMB guidance requires agencies to submit FAADS+ files directly through the USASpending.gov web submission tool, the Data Submission and Validation Tool, maintained and operated by GSA. The Data Submission and Validation Tool applies data validation checks to 32 data fields. For example, the recipient name must be non-blank and must contain the value “MULTIPLE RECIPIENTS” for awards aggregated at the county level. The tool will reject individual records that contain an error or entire submissions if more than 10 percent of the records in the submission contain errors. The FFATA Subaward Reporting System (FSRS) provides data on first-tier subawards. Prime awardees report subaward and/or subawardee executive compensation information through FSRS. The subawardee provides to the prime awardee all information required for such reporting. This includes subawardee entity information, subawardee unique identifier, and relevant executive compensation data, if applicable. Additionally, FSRS pre-populates data using FPDS-NG, SAM, and USASpending.gov, where applicable. OMB, Memorandum for Senior Accountable Officials: Open Government Directive – Federal Spending Transparency and Subaward and Compensation Data Reporting (Washington, D.C.: Aug. 27, 2010). In addition, USASpending.gov uses other data sources to supplement and validate agency-supplied data. These sources include: CFDA—CFDA is a government-wide compendium of federal programs, projects, services, and activities that provide assistance or benefits to the American public, and is the authoritative source for CFDA program numbers. This database is used to validate CFDA numbers provided by agencies. Dun and Bradstreet—This commercial entity maintains a repository of unique identifiers and is used to validate unique identifiers provided by agencies and to acquire the parent entity unique identifier based on linkage information at the time of the award.
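The two-tier rejection rule described above for the Data Submission and Validation Tool, reject an individual record that fails a check, and reject the entire file if more than 10 percent of its records fail, can be sketched as follows. This is an illustrative sketch only: the field names and the two checks shown (non-blank recipient name, the “MULTIPLE RECIPIENTS” convention for county-level aggregates) come from the examples in the text, not from the tool's actual 32-field rule set.

```python
# Sketch of the Data Submission and Validation Tool's rejection logic,
# based on the two rules described in the text. Field names and checks
# are illustrative, not the tool's actual 32-field rule set.

REJECTION_THRESHOLD = 0.10  # reject the whole file if >10% of records fail

def record_errors(record):
    """Return a list of validation errors for one FAADS+ record."""
    errors = []
    name = record.get("recipient_name", "").strip()
    if not name:
        errors.append("recipient name is blank")
    # County-level aggregate records must use the literal value below.
    if record.get("is_county_aggregate") and name != "MULTIPLE RECIPIENTS":
        errors.append("aggregate record must name 'MULTIPLE RECIPIENTS'")
    return errors

def validate_submission(records):
    """Return (accepted_records, rejected_records, file_accepted)."""
    accepted, rejected = [], []
    for rec in records:
        (rejected if record_errors(rec) else accepted).append(rec)
    file_accepted = len(rejected) <= REJECTION_THRESHOLD * len(records)
    return accepted, rejected, file_accepted

records = [
    {"recipient_name": "ACME CORP", "is_county_aggregate": False},
    {"recipient_name": "", "is_county_aggregate": False},
    {"recipient_name": "MULTIPLE RECIPIENTS", "is_county_aggregate": True},
]
ok, bad, file_ok = validate_submission(records)
```

In this example one of three records fails (33 percent), so the individual record is rejected and, because the failure rate exceeds the 10 percent threshold, the whole submission would be rejected as well.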
SAM—Information in SAM is used to pre-populate specific contractual information contained in FPDS-NG based on the unique identifier submitted by an agency. Among other things, SAM populates the entity name and address (street, city, state, congressional district, zip code, and country). If a subawardee is registered in SAM, executive compensation and other subawardee information is pre-populated in FSRS prior to the prime awardee’s reporting. OMB issued a series of guidance documents to facilitate accurate reporting of information to the website. For example, OMB guidance requires agencies to increase efforts to improve data quality by outlining high-level reporting requirements, focusing primarily on validation of unique identifiers for recipients, collection of program source data, and formatting of assistance data prior to submission. OMB guidance further requires agencies to designate a high-level senior official to be accountable for information on USASpending.gov and requires agencies to submit a quarterly report on their progress toward improving data quality. In June 2013, OMB issued a memorandum requiring each federal agency to assign financial assistance award identification numbers unique within the federal agency and to identify and implement a process to compare and validate website funding information with data in the agency’s financial system. Based on this validation process, each agency is to report to OMB the accuracy rate of its website data on a quarterly basis. OMB guidance also outlines the reporting requirements on first-tier subawards and executive compensation of awardees, focusing on reporting requirements related to federal grants and cooperative agreements. Detailed reporting requirements related to federal contracts are located in the Federal Acquisition Regulation. 
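OMB's June 2013 requirement described above, that each agency compare and validate website funding information against its financial system and report a quarterly accuracy rate, amounts to a record-by-record reconciliation. A minimal sketch, using hypothetical award identifiers and amounts (the memorandum does not prescribe a specific matching algorithm):

```python
# Sketch of the award-amount reconciliation behind the quarterly
# accuracy rate OMB required in June 2013. Award IDs and dollar
# amounts are hypothetical.

def accuracy_rate(website_awards, financial_system):
    """Share of website records whose amount matches the agency's
    financial system for the same award ID."""
    matched = sum(
        1 for award_id, amount in website_awards.items()
        if financial_system.get(award_id) == amount
    )
    return matched / len(website_awards)

website = {"FAIN-001": 50_000, "FAIN-002": 75_000, "FAIN-003": 12_500}
books   = {"FAIN-001": 50_000, "FAIN-002": 70_000, "FAIN-003": 12_500}

rate = accuracy_rate(website, books)  # 2 of 3 records match
```

A real reconciliation would also have to handle awards present in one system but not the other; the sketch simply counts an ID missing from the financial system as a mismatch.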
Among the specific requirements described in OMB’s guidance, prime awardees are to report first-tier subawards and executive compensation associated with new federal contracts and grants as of October 1, 2010, and register in SAM. For contracts, agencies shall include these requirements in all new contracts. For assistance awards, agencies shall include these requirements in each program announcement, regulation, or other issuance containing instructions for the applicant. The agency is not required to review data for which it would not normally have supporting information, such as the executive compensation information. To provide maximum transparency to the public in the use of all federal funds, OMB directs agencies to report awards that would otherwise be exempt from reporting—specifically, those under $25,000 and awards to individuals—as an aggregated amount, where possible. An aggregated record is created by taking a group of similar records and tallying the dollars based on a specific set of data fields in order to create one summary, or aggregated, record. Aggregated records are to be reported as follows: Transactions under $25,000—Agencies should submit information under $25,000 at a transaction level. If necessary, agencies may report these as county-level aggregate amounts. Payments to individuals—Agencies should not report payments to individuals at a transaction level, to protect their privacy. All payments to individuals should be reported as an aggregated amount. The transaction information submitted in FAADS+ files usually contains data in each field, except where the submitting agency is unable to provide those data. As a general principle, OMB guidance calls for agencies to provide as much information as possible on their aggregate records. However, OMB guidance allows the exclusion of certain data elements, since data for numerous transactions are summarized into one record.
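The aggregation described above, grouping similar records on a set of key fields and tallying the dollars into one summary record, can be sketched as follows. The key fields shown (state, county, CFDA number) are an assumption for illustration; OMB's guidance, not this sketch, defines the actual grouping fields.

```python
# Sketch of county-level aggregation for otherwise-exempt transactions,
# per the description in the text. The key fields are illustrative.
from collections import defaultdict

def aggregate(transactions, key_fields=("state", "county", "cfda_number")):
    """Collapse individual transactions into one summary record per
    combination of key fields, summing the dollar amounts."""
    totals = defaultdict(float)
    for t in transactions:
        totals[tuple(t[f] for f in key_fields)] += t["amount"]
    return [
        dict(zip(key_fields, key),
             amount=total,
             recipient_name="MULTIPLE RECIPIENTS")  # aggregate convention
        for key, total in totals.items()
    ]

txns = [
    {"state": "MD", "county": "Montgomery", "cfda_number": "10.001", "amount": 9_000},
    {"state": "MD", "county": "Montgomery", "cfda_number": "10.001", "amount": 4_500},
    {"state": "VA", "county": "Fairfax",    "cfda_number": "10.001", "amount": 2_000},
]
summary = aggregate(txns)  # two aggregated records
```

Note how fields that vary across the grouped transactions (such as individual recipient names) cannot survive aggregation, which is why OMB guidance allows certain data elements to be excluded from aggregate records.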
Prior to 2014, GSA was responsible for operating and maintaining the USASpending.gov website in accordance with OMB guidance. In February 2014, OMB announced the transfer of those responsibilities to Treasury’s Fiscal Service. Treasury also received funding in the Consolidated Appropriations Act, 2014, for website maintenance, improvements to the site’s functionality and usability, and data standardization efforts. As of May 2014, Treasury had not yet taken on operational responsibility for the website. According to the Assistant Commissioner for Government-wide Accounting at the Fiscal Service, Treasury has begun developing a plan for transferring operational responsibility from GSA to Treasury. In prior audits, we have examined several areas that affect the reliability of USASpending.gov data. Since 2003, we have issued several reports and offered testimony on data reliability issues associated with FPDS-NG and its predecessor, FPDS. Our reviews of contract award data in these systems have revealed inaccurate and incomplete reporting. To help improve data reliability in FPDS-NG, we recommended that OMB work with agencies to implement contract writing systems that connect directly to FPDS-NG and provide confirmation of agencies’ review and verification of the accuracy and completeness of their data in FPDS-NG. We also recommended that OMB develop a plan to improve the system’s ease of use and access to data for government-wide reporting needs. In response to our recommendations for improving the accuracy and timeliness of contract award data, OMB issued a memorandum in August 2004 directing agencies to ensure that their contract writing systems could electronically transfer information directly to FPDS-NG. In March 2007, OMB issued a memorandum requiring agencies to regularly certify the accuracy and completeness of their information to GSA.
In November 2007, May 2008, and June 2009, OMB issued additional guidance to agencies that addressed improvements in data quality. In a March 2010 report on the USASpending.gov website (GAO, Electronic Government: Implementation of the Federal Funding Accountability and Transparency Act of 2006, GAO-10-365 (Washington, D.C.: Mar. 12, 2010)), we recommended, among other things, that the Director of OMB revise guidance to federal agencies on reporting federal awards to clarify the requirement that award titles describe the award’s purpose and the requirements for validating and documenting agency award data submitted by federal agencies, and include information on the city where work is performed in OMB’s public reporting of the completeness of agency data submissions. Although OMB generally agreed with our findings and recommendations, it has not yet implemented our recommendations to clarify guidance or include performance information. In a September 2013 report (GAO, Federal Data Transparency: Opportunities Remain to Incorporate Lessons Learned as Availability of Spending Data Increases, GAO-13-758 (Washington, D.C.: Sept. 12, 2013)), we noted that OMB, in coordination with the Government Accountability and Transparency Board, has begun taking steps to obtain stakeholder input as improvements to collecting spending data are developed. Although agencies generally reported information for contracts to USASpending.gov, they did not properly report information on assistance awards, totaling nearly $619 billion. With few exceptions, FFATA and supplemental OMB guidance require agencies to report spending information on federal awards to USASpending.gov. We found that agencies generally reported information on contracts but did not report timely assistance information, leading to underreporting of nearly $619 billion for awards made in fiscal year 2012. Many of those awards were subsequently reported by agencies.
One agency, the Millennium Challenge Corporation, reported an ongoing inability to report its awards because its recipients are foreign governments or non-governmental organizations. However, OMB’s guidance describes how to report foreign assistance, and other agencies report such awards. In June 2013, OMB issued a memorandum directing agencies to establish procedures to ensure that financial data reported on USASpending.gov is consistent with agency financial records. If properly implemented, these procedures could better ensure that agencies fully report future assistance awards. According to FFATA and OMB guidance, information on contracts made with funds appropriated in fiscal year 2007 and later is to be reported by agencies and made available on USASpending.gov. Of the 37 agencies with budget authority of at least $400 million each in fiscal year 2012, the website includes information on at least one contract awarded by 33 of the agencies. The USASpending.gov website states that expenditures made with non-appropriated funds are not to be reported. Of the four agencies that did not report contractual information, officials from three stated that their contracts are awarded using funds that are available outside of annual appropriations and therefore considered to be non-appropriated and exempt from reporting. However, agency funding can be appropriated by means other than through an annual appropriations act. For example, a permanent appropriation makes funds available on the basis of previously enacted legislation, such as statutory authorization for self-financing through offsetting collections. Each of these three agencies receives a form of appropriation other than through an annual appropriations act, and this is used to fund their contract awards.
Because the USASpending.gov website does not define what constitutes non-appropriated funds, it is unclear whether agencies making awards using funds received on the basis of an appropriation other than an annual appropriation are required to report. The fourth agency—the Central Intelligence Agency—stated that it did not report contracts due to concerns about the ability of someone to use them to infer agency requirements. Specifically, a liaison official from the Central Intelligence Agency stated that it does not report information on classified contracts to USASpending.gov. This practice is consistent with FFATA, which does not require the disclosure of classified information. The official added that the agency also does not report unclassified contract information because of the risk that an individual could use it, along with other publicly available information, to develop a picture of Central Intelligence Agency requirements (e.g., key information about the agency). Therefore, the Central Intelligence Agency restricts the release of both classified and unclassified information, including information on contracts, in order to protect intelligence sources and methods. However, OMB’s guidance does not clearly exempt agencies from reporting unclassified contract information that they believe must not be disclosed. Without clear OMB guidance to define the type of appropriated funds exempt from reporting or how to report information on unclassified awards that raise concerns related to intelligence operations, it is unclear whether the justifications from each of the four agencies for not reporting its contracts are appropriate. As with contracts, information on assistance awards made using funds appropriated in fiscal year 2007 and later is to be reported by agencies and made available on USASpending.gov.
While agencies are to report the amount of federal funding awarded for most types of assistance programs, OMB guidance requires that agencies report a face value and long-term subsidy costs for loan programs. OMB also calls for agencies to report awards under $25,000 and awards to individuals as an aggregated amount, where possible. Among fiscal year 2012 awards reported on USASpending.gov, most contained information that was not fully consistent with agency records or was unverifiable due to gaps in agency records. Award data displayed on the website consists, in part, of 21 data elements required by FFATA. According to OMB guidance, the goal of the website was to have 100 percent of award data be accurate by the end of the fourth quarter of fiscal year 2011, meaning the information is the same as (consistent with) the information contained in agency records or other authoritative sources. However, the USASpending.gov data for fiscal year 2012 rarely approached OMB’s goals for data quality. Specifically, of our sample of 385 awards, 4 percent contained information that was fully consistent with agency records for all 21 data elements. Projecting to the entire set of fiscal year 2012 awards, we estimate that only 2 percent to 7 percent of awards contain information that is fully consistent with agency records for all 21 data elements, and thus is also consistent with OMB’s goal. Further, across all awards, only one of the data elements we tested met our criterion for significant consistency with agency records, while eight did not. We considered a data element to be significantly consistent if we estimated that, for at least 90 percent of fiscal year 2012 awards, the information displayed on USASpending.gov was consistent with the information in the awarding agency’s records.
Conversely, we considered an element significantly inconsistent if we estimated that, in more than 10 percent of awards, the USASpending.gov information for that element and the underlying agency records were not in agreement. We could not determine whether the remaining data elements were significantly consistent or inconsistent, in large part because of incomplete or inadequate agency records. We define a variable as “significantly” inconsistent for USASpending.gov if the lower bound of the 95 percent confidence interval for the estimated percent of records with inconsistent data is greater than 10 percent. We define a variable as “significantly” consistent for USASpending.gov if the lower bound of the 95 percent confidence interval for the estimated percent of records with consistent data is greater than 90 percent. The one significantly consistent element was almost always consistent with the underlying information in agency records. While only one data element contained information that was significantly consistent with agency records, eight were significantly inconsistent for fiscal year 2012 awards, with estimated rates of inconsistency of at least 10 percent (see table 4). Award title descriptive of the purpose of each funding action: The title contained inconsistent information for 24 percent through 33 percent of awards. Of these, most were inconsistent with FFATA’s requirement that an award title be descriptive of the purpose of the award. Instead, many titles consisted of shorthand descriptions, acronyms, or other language that did not convey intent for the funding. For example, according to an official in the Defense Procurement and Acquisition Policy group, personnel often use terminology that is only understood by other agency officials, without considering whether it can be understood by the general public.
For example, the award title for one award at the Department of Defense was reported as “Cca.” The associated delivery order indicated that the purpose of the award was to analyze the repair cost of several items including “Cca,” which refers to a “circuit card assembly,” and comprises less than a quarter of the total cost of the award. Recipient street and zip code: The recipient street element contained inconsistent information for 13 percent through 21 percent of awards; and the recipient zip code was inconsistent for 10 percent through 18 percent. For example, for an Air Force award to BAE Systems Technology Solutions & Services Inc., the vendor’s address was reported on USASpending.gov as 1601 Research Blvd in zip code 20850-3173, while the contract documentation showed the vendor to be located at 520 Gaither Road in zip code 20850-6198. In another example, a Department of Education award issued to one component of the SLM Corporation was reported on USASpending.gov with an address for a different component of the corporation. Recipient congressional district: The recipient congressional district contained inconsistent information for 12 percent through 19 percent of awards. This included cases in which the website displayed an incorrect congressional district, omitted the district, or erroneously displayed a district for a foreign awardee. Principal place of performance city: The city of performance contained inconsistent information for 22 percent through 31 percent of awards. This included cases in which agencies reported the recipient’s business address as the place of performance, even when the work funded by an award was performed in another location. In other cases, agencies reported a county instead of the specific city for some awards. For example, several awards from the Federal Highway Administration reported the county name on USASpending.gov instead of the city as the primary locator. 
Principal place of performance congressional district: The congressional district of performance contained inconsistent information for 20 percent through 28 percent of awards. Many of these inconsistencies were because the website displayed “zz” for this element. Principal place of performance country: The information on the country of performance contained inconsistent information for 29 percent through 38 percent of awards. Nearly all of these cases were assistance awards aggregated at the county level, for which the field was reported by the awarding agency but not displayed on the website. Unique identifier for the parent entity: The Parent DUNS number, used to uniquely identify parent entities (if applicable) doing business with the government, contained inconsistent information for 26 percent through 35 percent of awards. DUNS numbers are provided to the government through a contract with Dun and Bradstreet, a commercial entity. Nearly all of these cases were assistance awards, for which the field is not displayed on the website. Report formatting for the website and the existing contract with Dun and Bradstreet along with an unclear reporting requirement and incomplete oversight of agency reporting processes contributed to many of the inconsistencies. Specifically: Display format on USASpending.gov: Inconsistencies in the recipient congressional district and place of performance congressional district and country fields are partly attributable to the website either not displaying the information or improperly displaying it. Officials from the Integrated Award Environment program management office at GSA stated that the principal place of performance country was not displayed on the website because the report format was missing that field. Further, officials stated that the recipient congressional district displayed incorrect information in some cases due to a logic error on the website. 
As of April 2014, GSA made technical changes to the website which addressed these issues. Access to vendor-provided data: GSA officials stated that the parent unique identifier was not displayed on the website for assistance awards because its contract with the provider of that information, Dun and Bradstreet, only covers publishing the data for contract awards. According to these officials, GSA plans to add this data in an upcoming service pack in 2014. Lack of clear guidance: Multiple agency officials attributed nondescriptive award titles to a lack of guidance to define what constitutes a fully descriptive award title. We identified this as an issue in our March 2010 report, in which we recommended that OMB provide clarifying guidance on this reporting requirement. As of April 2014, OMB officials stated that the office still concurs with this recommendation. However, OMB has not yet provided clarifying guidance. Unless each award has a descriptive title clearly identifying the purpose of the award, the public may not be able to determine why the award was made and taxpayer dollars spent. Incomplete oversight of agency reporting processes: Agency reporting processes for which there is incomplete oversight to identify and correct mistakes contributed to several of the significantly inconsistent data elements. For example, multiple agency officials attributed inconsistent recipient congressional district information to a system lookup that automatically generates this information based on a zip code. They stated that inconsistencies were caused when the zip code was incomplete or incorrect, or when the lookup table was out-of-date. Further, for the recipient street address and zip code, inconsistencies occurred, in part, because agencies did not update FPDS-NG with the correct information. 
For a new contract, FPDS-NG relies on the recipient’s unique identifier to automatically populate the recipient’s address from SAM, but for a modification to an existing contract, the system uses the address from the base award unless the agency submits an updated address. Meanwhile, the system for entering information on assistance awards does not have the capability to automatically generate address information, so OMB directed agencies to verify recipients’ addresses against Dun and Bradstreet’s information, and enter a confidence code, provided by Dun and Bradstreet, to confirm the information had been verified. However, we estimate that agencies submitted a confidence code that meets OMB’s threshold for only about a third of transactional assistance awards, and the validation rules applied by the website do not validate these elements except to ensure that they are non-blank and, in the case of the zip code, contain five digits. Lastly, inconsistencies in the principal place of performance information were caused, in part, by agency processes that resulted in the wrong data being reported. For example, officials from the Federal Highway Administration stated that the agency reported the county instead of the city because the Federal Aid Highway Program mandates the use of county names. In another example, the Chief of the Payment Management Office at the Department of Agriculture stated that its award tracking system is programmed to use the business address if a specific principal place of performance address is not provided. However, these practices are inconsistent with FFATA’s requirement that agencies report the city, state, congressional district, and country of performance for each award. Based on relevant guidance, if work occurs across multiple cities, the agency should report the city in which the majority of the work took place. 
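The minimal website validation described above, under which address elements are checked only for being non-blank and, for the zip code, containing five digits, can be sketched as follows; the function name is illustrative and does not represent the website's actual rule set:

```python
def passes_website_validation(street: str, zip_code: str) -> bool:
    """Minimal checks mirroring the validation rules described above:
    fields must be non-blank, and the zip code must be five digits."""
    if not street.strip() or not zip_code.strip():
        return False
    return len(zip_code) == 5 and zip_code.isdigit()

# Note: a well-formed but incorrect address still passes, because the
# rules check format only, not consistency with an authoritative source
# such as SAM.
```

Because these checks test only format, an agency that reports a stale or wrong address that happens to be non-blank with a five-digit zip code produces data the website accepts but that is inconsistent with agency records.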
Because there was no oversight to verify that the agencies were following this guidance, each continued to report information that was inconsistent with the guidance. Weaknesses in OMB’s guidance on data validation likely contributed to these problems remaining unaddressed. We identified this as an issue in our March 2010 report, in which we stated that OMB’s guidance specified that agency data submissions are to be validated by an appropriate official but did not identify how or by whom. In that report, we recommended that OMB provide guidance to agencies to clarify the requirement for validating agency award data. As of April 2014, OMB officials stated that the office still concurs with the recommendation and that it had initiated a validation process in its June 2013 memorandum that directed agencies to use data from their financial systems or other authoritative sources to validate financial data reported on USASpending.gov. However, this validation process focuses only on the award amount and does not address the remaining 20 data elements. Further, OMB decided not to implement one oversight mechanism it had previously considered using: a dashboard displaying the quality of agency data submissions on the website. In addition, the recently enacted Digital Accountability and Transparency Act of 2014 established requirements for agency inspectors general and GAO to periodically review agency spending data. However, because these requirements do not come into effect until November 2016, it will be some time before they affect the quality of agency spending data. Thus, until OMB implements a process to ensure that agencies report consistent information for each required data element, it risks continued inconsistencies that significantly limit the accuracy of data displayed on USASpending.gov. For the 12 remaining data elements, incomplete or inadequate agency records prevented us from determining whether they were significantly consistent or inconsistent. 
OMB directed agencies to ensure that USASpending.gov reporting contains information that is consistent with agency records or other authoritative sources. OMB did this twice. Most directly, its April 6, 2010, memorandum requires agencies to ensure reporting is accurate, defining accuracy as “the percentage of transactions that are complete and do not have inconsistencies with systems of record or other authoritative sources.” Earlier, OMB issued more general guidance directing agencies to have a process that enables the agency to substantiate (by documentation or other means) the quality of information it disseminates. However, for 12 of the data elements, we could not determine the extent of consistency or inconsistency because agency records provided to us did not always include definitive information adequate to verify the information reported on USASpending.gov (see table 5). Four data elements in particular exhibited a significant amount of unverifiable information, meaning that at least 10 percent of awards contained unverifiable information for these data elements. Specifically, we found that: CFDA number: FFATA requires agencies to identify and display on USASpending.gov the assistance funding agency and CFDA program number. However, records provided to us did not include definitive information adequate to verify the information reported on USASpending.gov for the CFDA program number for 12 percent through 22 percent of assistance awards. For example, our representative sample contained four assistance awards identified in USASpending.gov under CFDA program 64.012, Veterans Prescription Service. However, records provided by the agency did not identify the CFDA program number or distinguish funding from a number of other medical benefits programs. 
Program source agency and account codes: Treasury’s Account Symbol was selected to be used as the official program source for use on the website and is made up of two related codes: agency code and account code. OMB guidance requires agencies to ensure that award documents contain the predominant Treasury Account Symbol. However, records provided to us did not include definitive information adequate to verify the information reported on USASpending.gov for the data element agency code for 14 percent through 21 percent of awards and 19 percent through 27 percent for the account code. For example, officials from the Department of Veterans Affairs Data Quality Services stated that program source information could not be provided because there was no documented link between the award and the program source information stored in the financial system. For another award, the Office of Personnel Management provided a system screenshot that documented charge account information. However, the office did not provide supporting documentation to show how the charge account information tracked to the program source reported on USASpending.gov. Principal place of performance state: Guidance on USASpending.gov states that the place of performance should be reported as the place where the majority of work takes place, while other specific guidance is available for contract reporting. However, records provided to us did not include definitive information adequate to verify the information reported on USASpending.gov for the place of performance state for 23 percent through 31 percent of awards. For example, five contractual transactions for pharmaceuticals, conducted by the Defense Logistics Agency, were with the same entity but agency records did not specify the vendor location responsible for distribution of these pharmaceuticals. 
In another example, the source system for an award at the Farm Service Agency did not contain place of performance information and therefore provided empty spaces to the system that generates reports to USASpending.gov. Several factors contributed to agencies’ lack of records to verify award information: Lack of clear guidance: The absence of guidance on how to substantiate information reported to USASpending.gov contributed to a lack of records to verify several data elements. In particular, agencies generally use electronic systems to manage awards, but the CFDA program number is not routinely included in the electronic records. Instead, according to officials at several agencies, the CFDA program number is reported at the time information is reported to the website and officials do not document this decision in the system. Moreover, in one case, officials stated that the agency relies on an automated process whereby the address of the recipient of a contract is reported to USASpending.gov as the place of performance, which may or may not be correct. Because this is an automated process, the agency does not document place of performance information in the contract file. By not capturing this information in some authoritative source, agencies make the information displayed on the website unverifiable. Agency compliance with existing guidance: Multiple agency officials attributed a lack of program source documentation to this information being stored in a financial system and therefore not always accessible. However, OMB guidance requires agencies to ensure that award documents, and not the financial system, contain this information. Without more specific guidance on how agencies are to substantiate the information required to be reported to USASpending.gov and a process to ensure adherence to that and existing guidance, agencies will be constrained in their ability to comply with OMB guidance requiring agency validation of award information. 
In addition to the required data elements we tested, there are two additional types of data that should be displayed on USASpending.gov, executive compensation and subaward information. As discussed earlier, it is the responsibility of a prime awardee to report subaward and executive compensation information. We were unable to test the consistency of these data elements because agencies frequently do not maintain records to verify the information reported by the awardees. For example, the Federal Highway Administration issued 12 awards included in our sample, 3 of which displayed subaward information on the website. A senior financial policy analyst for the Office of Financial and Management Programs at the Federal Highway Administration stated that the entering of subaward information is the responsibility of a state’s Department of Transportation and not a part of the federal Department of Transportation process. She added that there is not a process whereby the Department of Transportation confirms the validity of subaward information entered in FSRS. As such, officials could not provide agency records with subaward information for any of the 12 awards included in our sample. Officials with the Federal Highway Administration added that the same issue pertained to the verification of executive compensation information displayed on USASpending.gov. Without agency records on subawards and executive compensation, we could not test whether the information reported by prime awardees is accurate. Fulfilling FFATA’s purpose of increasing transparency and accountability of federal expenditures requires that USASpending.gov contain complete and accurate information on applicable federal awards. However, our examination of awards identified significant underreporting of awards and few that contained information that was fully consistent with the information in agency records. 
While OMB placed additional responsibilities on agencies to ensure their reported information was accurate, our testing of the 2012 awards shows that this approach has had limited effect on the overall quality of the data. In addition, many of the specific issues we first identified in 2010, such as unclear award titles and inaccurate information on place of performance, continue to limit the reliability of USASpending.gov data. If properly implemented, OMB’s 2013 guidance on linking award financial information to information from agency financial systems and reporting the results on a quarterly basis could help address underreporting of awards. However, the ongoing inaccuracies in reported non-financial award information reinforce the need for a more comprehensive oversight process. Finally, gaps in records used to validate the data on the website continue to exist. Until these issues are addressed, any effort to validate USASpending.gov data will be hampered by uncertainties about the accuracy of the data. The transition of operational responsibility for the USASpending.gov website to Treasury presents an opportunity to reexamine the appropriate level and methods of oversight and to develop and implement procedures that more effectively ensure that data reported to USASpending.gov are complete and accurate enough to fulfill the purpose of FFATA. 
To improve the completeness and accuracy of data submissions to the USASpending.gov website, we recommend that the Director of the Office of Management and Budget, in collaboration with Treasury’s Fiscal Service, take the following two actions: (1) clarify guidance on agency responsibilities for reporting awards funded by non-annual appropriations; the applicability of USASpending.gov reporting requirements to non-classified awards associated with intelligence operations; the requirement that award titles describe the award’s purpose (consistent with our prior recommendation); and agency maintenance of authoritative records adequate to verify the accuracy of required data reported for use by USASpending.gov; and (2) develop and implement a government-wide oversight process to regularly assess the consistency of information, other than the award amount, reported by federal agencies to the website. To improve the completeness of foreign recipient data on the USASpending.gov website, we recommend that the Chief Executive Officer of the Millennium Challenge Corporation direct responsible officials within the Corporation’s Department of Administration and Finance to report spending information on all assistance award programs to USASpending.gov for prior and future fiscal years in accordance with statutory requirements and OMB guidance. We provided a draft of this report to GSA, OMB, the Millennium Challenge Corporation, and Treasury for review and comment. GSA provided technical comments only, which we have incorporated as appropriate. OMB generally agreed with our recommendations. The Millennium Challenge Corporation and Treasury neither agreed nor disagreed with our recommendations. Each agency’s comments that we received are discussed in more detail below. 
In oral comments, staff from OMB’s Office of Federal Financial Management stated that the agency generally agreed with our recommendations for OMB to clarify guidance and develop and implement a government-wide oversight process to improve completeness and accuracy of data submissions to the USASpending.gov website. OMB staff stated that these recommendations are consistent with future actions required by the Digital Accountability and Transparency Act of 2014, including establishment of a government-wide financial data standard and periodic review of agency spending data by inspectors general. Staff stated that OMB in conjunction with Treasury will consider interim steps to improve data quality but added they do not want to inhibit agency efforts to work toward implementation of the act. We agree that the newly enacted provisions of the act could eventually help improve the quality of federal spending data. We believe that our recommendations are not in conflict with adherence to the act but rather will help agencies to take steps towards achieving the core tenets of data transparency reflected in the act. Accordingly, it will be important for OMB to fully address our recommendations. In e-mail comments, staff from the Millennium Challenge Corporation’s Financial Management Division neither agreed nor disagreed with our recommendation that the corporation should report spending information on all assistance award programs to USASpending.gov for prior and future fiscal years in accordance with statutory requirements and OMB guidance. However, agency comments reflect general disagreement. Specifically, staff stated that the Millennium Challenge Corporation has made significant attempts to load foreign assistance data on USASpending.gov but has been unable to do so due to technical limitations of the website. 
However, we found that the attempted transmittal submitted by the Millennium Challenge Corporation for our review was rejected because the corporation omitted most of the data fields required by FFATA and guidance for the website, including several for which the required information should be readily available regardless of the recipient, such as the transaction type, funding amount, and project description. Moreover, other agencies, such as the Departments of Defense, Health and Human Services, and State, have successfully submitted data on assistance awards to foreign governments. Agency staff also noted that the Millennium Challenge Corporation continues to work with GSA and OMB to resolve the technical issues preventing transmittal of the data in order to reach full compliance in the future. Specifically, the Millennium Challenge Corporation staff indicated that the corporation had asked OMB for an alternate reporting mechanism. However, until OMB agrees to an alternate approach, the Millennium Challenge Corporation is required by OMB guidance to use the existing mechanism to report its awards in the same manner as other agencies that make awards to foreign governments. Accordingly, we maintain our recommendation. Treasury stated in its comments that the bureau will consider the recommendations as it assumes responsibility for USASpending.gov. Treasury’s comments are reprinted in appendix V. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time we will send copies to the appropriate congressional committees; the Director of the Office of Management and Budget; the Secretary of the Department of the Treasury; the Administrator of General Services; and the Chief Executive Officer of the Millennium Challenge Corporation. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. 
If you or your staff members have any questions about this report, please contact me at (202) 512-4456 or chac@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made contributions to this report are listed in appendix VI. Our objectives were to determine the extent to which (1) federal agencies report required award data and (2) inconsistencies exist between the data on USASpending.gov and records at federal agencies. To address these objectives, we examined data collection and reporting requirements under FFATA; relevant OMB guidance; other relevant requirements variously published in the Federal Acquisition Regulation, Code of Federal Regulations, and the Federal Register; and agency systems documentation. To determine the extent to which federal agencies are reporting required award data, we reviewed contract and assistance award reporting requirements as defined in FFATA, OMB guidance, and other federal guidance. We compiled a list of potential award-making agencies/programs using the Public Budget Database for contracts and the Catalog of Federal Domestic Assistance for assistance awards. We then assessed the list of agencies against reporting requirements under FFATA and relevant guidance to determine which agencies are exempt from reporting to USASpending.gov. To determine the extent to which agencies reported contractual information to USASpending.gov, we selected the agencies listed in the Public Budget Database with budget authorities greater than $400 million for fiscal year 2012. We selected these agencies to ensure the inclusion of large agencies likely to have awarded a significant number of contracts. We searched USASpending.gov to determine which of these agencies reported contractual information for fiscal year 2012. 
For any selected agency reporting no contractual information for fiscal year 2012, we reviewed documentation and interviewed agency officials to determine why contracts were not reported. To determine the extent to which agencies reported assistance award information to USASpending.gov, we identified all programs listed in the Catalog of Federal Domestic Assistance as of February 2013. We searched USASpending.gov to determine which programs reported information on at least one assistance award for fiscal year 2012. In addition, we selected all loan programs reporting transactions to USASpending.gov for fiscal year 2012 but reporting $0 in subsidy costs. For any program reporting no assistance award information for fiscal year 2012 or reporting a subsidy cost of $0, we interviewed agency officials and reviewed documentation to determine why information was not reported. For programs that claimed they do not make financial assistance awards and are therefore exempt from reporting to USASpending.gov, we assessed the purpose of each program against the definition for federal financial assistance for validity. For all programs categorized as either making an award and not reporting, or reporting awards late to USASpending.gov, we requested that the agency provide an estimate for obligations made under this program for fiscal year 2012. To determine the extent to which inconsistencies existed between the data on USASpending.gov and records at federal agencies, we selected a simple representative random sample of 385 fiscal year 2012 records. The probability sample was designed to estimate a rate of reporting errors with a sampling error of no greater than plus or minus 5 percentage points at the 95 percent level of confidence. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. 
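The sample design described above, 385 records with a sampling error of no greater than plus or minus 5 percentage points at the 95 percent level of confidence, is consistent with the standard formula for a simple random sample estimating a proportion, n = z^2 * p * (1 - p) / e^2, under the most conservative assumption p = 0.5 (z = 1.96 for 95 percent confidence, e = 0.05). A small sketch of the calculation:

```python
import math

def sample_size(z: float = 1.96, p: float = 0.5, e: float = 0.05) -> int:
    """Minimum simple random sample size to estimate a proportion
    within +/- e at the confidence level implied by z (1.96 ~ 95%)."""
    return math.ceil(z ** 2 * p * (1 - p) / e ** 2)

# 1.96**2 * 0.25 / 0.05**2 = 384.16, rounded up to 385
```

Using p = 0.5 maximizes p(1 - p), so the resulting sample size is sufficient whatever the true error rate turns out to be.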
Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval (e.g., plus or minus 5 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. For 21 data elements required by FFATA or OMB guidance, we compared the information reported on USASpending.gov to information contained in the originating agency’s underlying records, where available, to evaluate to what extent the data was consistent. To test the controls over the reliability of agency data, we obtained supporting documentation to confirm that the agency provided only official agency records, such as a system of records notice. When such a supporting document was unavailable, we reviewed agency transparency policy documentation, data verification and validation plans or procedures, or system source code information to ensure the reliability of the data. We did not assess the accuracy of the data contained in records provided by agencies. To the extent that we had previously assessed the reliability of a system, we worked within GAO to obtain the necessary supporting documentation. We conducted this performance audit from December 2012 to June 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Table 7 shows the agencies included in our review of assistance programs listed in the Catalog of Federal Domestic Assistance, broken out by the number of programs. 
Table 8 shows the agencies included in our representative sample, broken out by the number of contracts and assistance awards. The table further breaks down assistance awards to show the total number of awards aggregated at the county level. Table 9 lists the estimates for each of the required data elements we tested for fiscal year 2012. In addition to the contact named above, James R. Sweetman Jr. (Assistant Director), Mathew Bader, Carl Barden, Colleen Candrl, Nancy Glover, Wilfred Holloway, James Houtz, Ruben Montes de Oca, Kate Nielsen, David Plocher, Matthew Snyder, and Umesh Thakkar made key contributions to this report.
The Federal Funding Accountability and Transparency Act was enacted in 2006 to increase the transparency over the more than $1 trillion spent by the federal government on awards annually. Among other things, the act requires OMB to establish a website that contains data on federal awards (e.g., contracts and grants) and guidance on agency reporting requirements for the website, USASpending.gov. GAO's objectives were to determine the extent to which (1) agencies report required award data and (2) the data on the website are consistent with agency records. To assess reporting, GAO reviewed laws and guidance, analyzed reported award data, and interviewed agency officials. To assess inconsistency, GAO selected a representative sample of 385 fiscal year 2012 awards and traced them back to agency source records. Although agencies generally reported required contract information, they did not properly report information on assistance awards (e.g., grants or loans), totaling approximately $619 billion in fiscal year 2012. Specifically, 33 of 37 agencies with a budget authority of at least $400 million reported at least one contract. The other four claimed exemptions from reporting, such as the use of non-appropriated funds, but gaps in Office of Management and Budget (OMB) guidance make it unclear whether such exemptions are appropriate. Also, agencies reported required information for at least one assistance award for 1,390 of 2,183 programs listed in a federal catalog. Another 451 programs did not make an award subject to USASpending.gov reporting. However, agencies did not appropriately submit the required information for the remaining 342 programs, although many reported the information after GAO informed them of the omission. Officials with the Millennium Challenge Corporation said that they could not report because its recipients are foreign. However, OMB's guidance describes how to report foreign assistance and other agencies report such awards. 
OMB has taken steps to improve the completeness of assistance awards on the website, including issuing a memorandum in June 2013 directing agencies to ensure that data on USASpending.gov are consistent with agency financial records. If properly implemented, these procedures could better ensure that agencies report future assistance awards. Few awards on the website contained information that was fully consistent with agency records. GAO estimates with 95 percent confidence that between 2 percent and 7 percent of the awards contained information that was fully consistent with agencies' records for all 21 data elements examined. The element that identifies the name of the award recipient was the most consistent, while the elements that describe the award's place of performance were generally the most inconsistent. GAO could not determine whether the remaining data elements were significantly consistent or inconsistent, in large part because of incomplete or inadequate agency records. Four data elements in particular (e.g., program source information and the state of performance) had inadequacies that were significant. This means that for each of the four data elements, at least 10 percent of awards contained unverifiable information. While OMB placed responsibilities on agencies to ensure their reported information is accurate and substantiated by supporting documentation, this approach has had limited effect on the overall quality of the data on the website, reinforcing the need for a more comprehensive oversight process by OMB and more specific guidance from OMB on how agencies are to validate information reported to USASpending.gov. Until these weaknesses are addressed, any effort to use the data will be hampered by uncertainties about accuracy. 
To improve reliability of information on the USASpending.gov website, GAO is making recommendations to the Director of OMB to (1) clarify guidance on reporting award information and maintaining supporting records, and (2) develop and implement oversight processes to ensure that award data are consistent with agency records. GAO also recommends that the Chief Executive Officer of the Millennium Challenge Corporation report its award information, as required. OMB generally agreed with GAO's recommendations. While the Millennium Challenge Corporation neither agreed nor disagreed with the recommendation, GAO believes it is needed, as discussed in this report.
Financial planning typically involves a variety of services, including preparing financial plans for clients based on their financial circumstances and objectives and making recommendations for specific actions clients may take. In many cases, financial planners also help implement these recommendations by, for example, providing insurance products, securities, or other investments. Individuals who provide financial planning services may call themselves a variety of different titles, such as financial planner, financial consultant, financial adviser, trust advisor, or wealth manager. In addition, many financial planners have privately conferred professional designations or certifications, such as Certified Financial Planner®, Chartered Financial Consultant®, or Personal Financial Specialist. The number of financial planners in the United States rose from approximately 94,000 in 2000 to 208,400 in 2008, according to the Bureau of Labor Statistics. The bureau projects the number will rise to 271,200 by 2018 because of the need for advisers to assist the millions of workers expected to retire in the next 10 years. According to the bureau, 29 percent of financial planners are self-employed and the remaining 71 percent are employees of firms, some of them large entities with offices nationwide that provide a variety of financial services. The median annual wage for financial planners was $68,200 in May 2009. According to an analysis of the 2007 Survey of Consumer Finances, the most recent year for which survey results are available, in 2007 about 22 percent of U.S. households used a financial planner for investment and saving decisions and about 12 percent of U.S. households used a financial planner for making credit and borrowing decisions. Those households most likely to use a financial planner were those with higher incomes. 
For example, 37 percent of households in the top income quartile used a financial planner to make investment and saving decisions compared to 10 percent of households in the bottom quartile. Financial planners are primarily regulated by federal and state investment adviser laws, because planners typically provide advice about securities as part of their business. In addition, financial planners that sell securities or insurance products are subject to applicable laws governing broker- dealers and insurance agents. Certain laws and regulations can also apply to the use of the titles, designations, and marketing materials that financial planners use. There is no specific, direct regulation of “financial planners” per se at the federal or state level. However, the activities of financial planners are primarily regulated under federal and state laws and regulations governing investment advisers—that is, individuals or firms that provide investment advice about securities for compensation. According to SEC staff, financial planning normally includes making general or specific recommendations about securities, insurance, savings, and anticipated retirement. SEC has issued guidance that broadly interprets the Investment Advisers Act of 1940 (Advisers Act) to apply to most financial planners, because the advisory services they offer clients typically include providing advice about securities for compensation. Similarly, NASAA representatives told us that states take a similar approach on the application of investment adviser laws to financial planners and, as a result, generally register and oversee financial planners as investment advisers. 
As investment advisers, financial planners are subject to a fiduciary standard of care when they provide advisory services, so that the planner “[is] held to the highest standards of conduct and must act in the best interest of clients.” SEC and state securities departments share responsibility for the oversight of investment advisers in accordance with the Advisers Act. Under that act, SEC generally oversees investment adviser firms that manage $25 million or more in client assets, and the states that require registration oversee those firms that manage less. However, as a result of section 410 of the Dodd-Frank Act, as of July 2011 the states generally will have registration and oversight responsibilities for investment adviser firms that manage less than $100 million in client assets, instead of firms that manage less than $25 million in assets as under current law. This will result in the states gaining responsibility for firms with assets under management between $25 million and $100 million. As shown in figure 1, as of October 2010, of the approximately 16,000 investment adviser firms providing financial planning services, the states were overseeing about 11,100 firms and SEC was overseeing about 4,900 such firms. However, in July 2011 about 2,400 of the investment adviser firms that provided financial planning services (15 percent of the 16,000 firms) may shift from SEC to state oversight. SEC’s supervision of investment adviser firms includes evaluating their compliance with federal securities laws by conducting examinations of firms—including reviewing disclosures made to customers—and investigating and imposing sanctions for violations of securities laws. According to SEC staff, in its examinations, the agency takes specific steps to review the financial planning services of investment advisers.
For example, SEC may review a sample of financial plans that the firm prepared for its customers to check whether the firm’s advice and investment recommendations are consistent with customers’ goals, the contract with the firm, and the firm’s disclosures. However, the frequency with which SEC conducts these examinations varies, largely because of resource constraints faced by the agency. SEC staff told us that the agency examined only about 10 percent of the investment advisers it supervises in 2009. In addition, they noted that generally an investment adviser is examined, on average, every 12 to 15 years, although firms considered to be of higher risk are examined more frequently. In 2007, we noted that harmful practices could go undetected because investment adviser firms rated lower-risk are unlikely to undergo routine examinations within a reasonable period of time, if at all. According to NASAA, state oversight of investment adviser firms generally includes activities similar to those undertaken by SEC, including taking specific steps to review a firm’s financial planning services. According to NASAA, states generally register not just investment adviser firms but also investment adviser representatives—that is, individuals who provide investment advice and work for a state- or federally registered investment adviser firm. In addition to providing advisory services, such as developing a financial plan, financial planners generally help clients implement the plan by making specific recommendations and by selling securities, insurance products, and other investments. SEC data show that, as of October 2010, 19 percent of investment adviser firms that provided financial planning services also provided brokerage services, and 27 percent provided insurance. Financial planners that provide brokerage services, such as buying or selling stocks, bonds, or mutual fund shares, are subject to broker-dealer regulation at the federal and state levels. 
At the federal level, SEC oversees U.S. broker-dealers, and SEC’s oversight is supplemented by self-regulatory organizations (SRO). The primary SRO for broker-dealers is FINRA. State securities offices work in conjunction with SEC and FINRA to regulate securities firms. Salespersons working for broker-dealers are subject to state registration requirements, including examinations. About half of broker-dealers were examined in 2009 by SEC and SROs. Under broker-dealer regulation, financial planners are held to a suitability standard of care when making a recommendation to a client to buy or sell a security, meaning that they must recommend those securities that they reasonably believe are suitable for the customer. Financial planners that sell insurance products, such as life insurance or annuities, must be licensed by the states to sell these products and are subject to state insurance regulation. In contrast to securities entities (other than national banks) that are subject to dual federal and state oversight, the states are generally responsible for regulating the business of insurance. When acting as insurance agents, financial planners are subject to state standard of care requirements, which can vary by product and by state. As of October 2010, 32 states had adopted a previous version of the NAIC Suitability in Annuities Transactions Model Regulation, according to NAIC. In general, this regulation requires insurance agents to appropriately address consumers’ insurance needs and financial objectives at the time of an annuity transaction. Thirty-four states had also adopted the Life Insurance Disclosure Model Regulation in a uniform and substantially similar manner as of July 2010, according to NAIC.
This regulation does not include a suitability requirement, although it does require insurers to provide customers with information that will improve their ability to select the most appropriate life insurance plan for their needs and improve their understanding of the policy’s basic features. Financial planners that sell variable insurance products, such as variable life insurance or variable annuities, are subject to both state insurance regulation and broker-dealer regulation, because these products are regulated as both securities and insurance products. When selling variable insurance, financial planners are subject to FINRA sales practice standards requiring that such sales be subject to suitability standards. In addition, other FINRA rules and guidance, such as those governing standards for communication with the public, apply to the sale of variable insurance products. In addition, as previously discussed, 32 states also generally require insurance agents and companies to appropriately address a consumer’s insurance needs and financial objectives at the time of an annuity transaction. However, in the past, we have reported that the effectiveness of market conduct regulation—that is, examination of the sales practices and behavior of insurers—may be limited by a lack of reciprocity and uniformity, which may lead to uneven consumer protection across states. At the federal level, SEC and FINRA have regulations on advertising and standards of communication that apply to the strategies investment adviser firms and broker-dealers use to market their financial planning services. For example, SEC-registered investment advisers must follow SEC regulations on advertising and other communications, which prohibit false or misleading advertisements, and these regulations apply to investment advisers’ marketing of financial planning services. 
FINRA regulations on standards for communication with the public similarly prohibit false, exaggerated, unwarranted, or misleading statements or claims by broker-dealers, and broker-dealer advertisements are subject to additional approval, filing, and recordkeeping requirements and review procedures. According to many company officials we spoke with, their companies responded to these requirements by putting procedures in place to determine which designations and titles their registered representatives may use in their marketing materials, such as business cards. SEC and state securities regulators also regulate information that investment advisers are required to disclose to their clients. In the Uniform Application for Investment Adviser Registration (Form ADV), regulators have typically required investment adviser firms to provide new and prospective clients with background information, such as the basis of the advisory fees, types of services provided (such as financial planning services), and strategies for addressing conflicts of interest that may arise from their business activities. Recent changes to Form ADV are designed to improve the disclosures that firms provide to clients. For example, firms must now provide clients with information about the advisory personnel on whom they rely for investment advice, including the requirements and applicability of any professional designations or certifications advisers may choose to include in their background information. Most states regulate the use of the title “financial planner,” and state securities and insurance laws can apply to the misuse of this title and other titles. For example, according to NASAA, at least 29 states specifically include financial planners in their definition of investment adviser. 
According to NAIC, in many states, regulators can use unfair trade practice laws to prohibit insurance agents from holding themselves out as financial planners when in fact they are only engaged in the sale of life or annuity insurance products. However, as noted earlier, the effectiveness of the regulation of insurers’ market conduct varies across states. In particular, in 2010 we noted inconsistencies in the state regulation of life settlements, a potentially high-risk transaction in which financial planners may participate. In addition, we were told some states had adopted regulations limiting the use of “senior-specific designations”—that is, designations that imply expertise or special training in advising senior citizens or elderly investors. According to NAIC, as of December 2010, 25 states had adopted in a uniform and substantially similar manner the NAIC Model Regulation on the Use of Senior-Specific Certifications and Professional Designations in the Sale of Life Insurance and Annuities, which limits the use of senior-specific designations by insurance agents. According to NASAA, as of December 2010, 31 states had adopted—and at least 9 other states were planning to adopt—the NASAA Model Rule on the Use of Senior-Specific Certifications and Professional Designations, which prohibits the misleading use of senior-specific designations by investment adviser representatives and other financial professionals. The regulatory system for financial planners covers most activities in which they engage. However, enforcement of regulation may be inconsistent and some questions exist about consumers’ understanding of the roles, standards of care, and titles and designations that a financial planner may have. The ability of regulators to identify potential problems is limited because they do not track complaints, inspections, and enforcement actions specific to financial planning services.
Although there is no single stand-alone regulatory body with oversight of financial planners, the regulatory structure for financial planners covers most activities in which they engage. As discussed earlier, and summarized in figure 2, the primary activities a financial planner performs are subject to existing regulation at the federal or state level, primarily through regulation pertaining to investment advisers, broker-dealers, and insurance agents. As such, SEC, FINRA, and NASAA staff, a majority of state securities regulators, financial industry representatives, consumer groups, and academic and subject matter experts with whom we spoke said that, in general, they believe the regulatory structure for financial planners is comprehensive, although, as discussed below, the attention paid to enforcing existing regulation has varied. As noted earlier, the activities a financial planner normally engages in generally include advice related to securities—and such activities make financial planners subject to regulation under the Advisers Act. One industry association and an academic expert noted that it would be very difficult to provide financial planning services without offering investment advice or considering securities. SEC staff told us that financial planners holding even broad discussions of securities—for example, what proportion of a portfolio should be invested in stocks—would be required to register as investment advisers or investment adviser representatives. In theory, a financial planner could offer only services that do not fall under existing regulatory regimes—for example, advice on household budgeting—but such an example is likely hypothetical and such a business model may be hard to sustain. 
SEC and NASAA staff, a majority of the state securities regulators we spoke with, and many representatives of the financial services industry told us that they were not aware of any individuals serving as financial planners who were not regulated as investment advisers or regulated under another regulatory regime. Some regulators and industry representatives also said that, to the extent that financial planners offered services that did not fall under such regulation, the new Bureau of Consumer Financial Protection potentially could have jurisdiction over such services. However, not everyone agreed that regulation of financial planners was comprehensive. One group, the Financial Planning Coalition, has argued that a regulatory gap exists because no single law governs the delivery of the broad array of financial advice to the public. According to the coalition, the provision of integrated financial advice—which would cover topics such as selecting and managing investments, income taxes, saving for college, home ownership, retirement, insurance, and estate planning— is unregulated. Instead, the coalition says that there is patchwork regulation of financial planning advice, and it views having two sets of laws—one regulating the provision of investment advice and another regulating the sale of products—as problematic. In addition, although the regulatory structure itself for financial planners may generally be comprehensive, attention paid to enforcing existing statute and regulation has varied. For example, as noted earlier, due to resource constraints, the examination of SEC-supervised investment advisers is infrequent. Further, as also noted earlier, market conduct regulation of insurers—which would include the examination of the sales practices and behavior of financial planners selling insurance products— has been inconsistent. 
Some representatives of industry associations told us that they believed that a better alternative to additional regulation of financial planners would be increased enforcement of existing law and regulation, particularly related to fraud and unfair trade practices. Certain professionals—including attorneys, certified public accountants, broker-dealers, and teachers—who provide financial planning advice are exempt from regulation under the Advisers Act if such advice is “solely incidental” to their other business activities. According to an SEC staff interpretation, this exemption would not apply to individuals who held themselves out to the public as providing financial planning services, and would apply only to individuals who provided specific investment advice on anything other than “rare, isolated and non-periodic instances.” Banks and bank employees are also excluded from the Advisers Act and are subject to separate banking regulation. The American Bankers Association told us that the financial planning activities of bank employees such as trust advisors or wealth managers were typically utilized by clients with more than $5 million in investable assets. The association noted that these activities were subject to a fiduciary standard and the applicable supervision of federal and state banking regulators. Most regulators and academic experts and many financial services industry representatives we spoke with told us that there is some overlap in the regulation of individuals who serve as financial planners because such individuals might be subject to oversight by different regulatory bodies for the different services they provide. For example, a financial planner who recommends and sells variable annuities as part of a financial plan is regulated as a registered representative of a broker-dealer as well as an insurance agent under applicable federal and state laws. 
However, some state regulators we spoke with told us that such overlap may be appropriate since the regulatory regimes cover different functional areas. As seen in figure 3, financial planners are subject to different standards of care in their capacities as investment advisers, broker-dealers, and insurance agents.

Fiduciary Standard of Care: As noted earlier, investment advisers are subject to a fiduciary standard of care—that is, they must act in their client’s best interest, ensure that recommended investments are suitable for the client, and disclose to the client any material conflicts of interest. According to SEC and NASAA representatives, the fiduciary standard applies even when investment advisers provide advice or recommendations about products other than securities, such as insurance, in conjunction with advice about securities.

Suitability Standard of Care When Recommending Security Products: FINRA regulation requires broker-dealers to adhere to a suitability standard when rendering investment recommendations—that is, they must recommend only those securities that they reasonably believe are suitable for the customer. Unlike the fiduciary standard, suitability rules do not necessarily require that the client’s best interest be served. According to FINRA staff, up-front general disclosure of a broker-dealer’s business activities and relationships that may cause conflicts of interest is not required. However, according to SEC, broker-dealers are subject to many FINRA rules that require disclosure of conflicts in certain situations, although SEC staff also note that those rules may not cover every possible conflict of interest, and disclosure may occur after conflicted advice has already been given.

Suitability Standard of Care When Recommending Insurance Products: Standards of care for the recommendation and sale of insurance products vary by product and by state.
For example, as seen earlier, NAIC’s model regulations on the suitability standard for annuity transactions, adopted by some states but not others, require consideration of the insurance needs and financial objectives of the customer, while NAIC’s model regulation for life insurance does not include a suitability requirement per se. Conflicts of interest can exist when, for example, a financial services professional earns a commission on a product sold to a client. Under the fiduciary standard applicable to investment advisers, financial planners must mitigate any potential conflicts of interest and disclose any that remain. But under a suitability standard applicable to broker-dealers, conflicts of interest may exist and generally may not need to be disclosed up-front. For example, as confirmed by FINRA, financial planners functioning as broker-dealers may recommend a product that provides them with a higher commission than a similar product with a lower commission, as long as the product is suitable and the broker-dealer complies with other requirements. Because the same individual or firm can offer a variety of services to a client—a practice sometimes referred to as “hat switching”—these services could be subject to different standards of care. As such, representatives of consumer groups and others have expressed concern that consumers may not fully understand which standard of care, if any, applies to a financial professional. As shown above, the standards of care—and the extent to which conflicts of interest must be disclosed—can vary depending on the capacity in which the individual serves. 
A 2007 report by the Financial Planning Association stated that “it would be difficult, if not impossible, for an individual investor to discern when the adviser was acting in a fiduciary capacity or in a non-fiduciary capacity.” A 2008 SEC study conducted by the RAND Corporation, consisting of a national household survey and six focus group discussions with investors, found that consumers generally understood neither the distinction between a suitability standard and a fiduciary standard of care nor the differences between broker-dealers and investment advisers. Similarly, a 2010 national study of investors found that most were confused about which financial professionals are required to operate under a fiduciary standard that requires professionals to put their client’s interest ahead of their own. Representatives of financial services firms that provide financial planning told us they believe that clients are sufficiently informed about the differing roles and accompanying standards of care that a firm representative may have. They noted that when they provide both advisory and transactional services to the same customer, each service—such as planning, brokerage, or insurance sales—is accompanied by a separate contract or agreement with the customer. These agreements disclose that the firm’s representatives have different obligations to the customer depending on their role. In addition, once a financial plan has been provided, some companies told us that they have customers sign an additional agreement stating that the financial planning relationship with the firm has ended. Recent revisions by SEC to Form ADV disclosure requirements were designed to address, among other things, consumer understanding of potential conflicts of interest by investment advisers and their representatives. Effective October 12, 2010, SEC revised Form ADV, Part 2, which financial service firms must provide to new and prospective clients.
The new form, which must be written in plain English, is intended to help consumers better understand the activities and affiliations of their investment adviser. It requires additional disclosures about a firm’s conflicts of interest, compensation, business activities, and disciplinary information that is material to an evaluation of the adviser’s integrity. Similarly, in October 2010 FINRA issued a regulatory notice requesting comments on a concept proposal regarding possible new disclosure requirements that would, among other things, detail for consumers in plain English the conflicts of interest that broker-dealers may have associated with their services. Section 913 of the Dodd-Frank Act requires SEC to study the substantive differences between the applicable standards of care for broker-dealers and investment advisers; the effectiveness of the existing legal or regulatory standards of care for brokers, dealers, and investment advisers; and consumers’ ability to understand the different standards of care. SEC will also consider the potential impact on retail customers of imposing the same fiduciary standard that now applies to investment advisers on broker-dealers when they provide personalized investment advice. Under the act, SEC may promulgate rules to address these issues and is specifically authorized to establish a uniform fiduciary duty for broker-dealers and investment advisers that provide personalized investment advice about securities to customers. As a result, further clarification of these standards may be forthcoming. FINRA officials told us that they support a fiduciary standard of care for broker-dealers when they provide personalized investment advice to retail customers. Consumer confusion on standards of care may also be a source of concern with regard to the sale of some insurance products. A 2010 national survey of investors found that 60 percent mistakenly believed that insurance agents had a fiduciary duty to their clients.
Some insurance products, such as annuities, are complex and can be difficult to understand, and annuity sales practices have drawn complaints from consumers and various regulatory actions from state regulators as well as SEC and FINRA for many years. According to NAIC, many states have requirements that insurance salespersons sell annuities only if the product is suitable for the customer. However, NAIC notes that some states do not have a suitability requirement for annuities. Consumer groups and others have stated that high sales commissions on certain insurance products, including annuities, may provide salespersons with a substantial financial incentive to sell these products, which may or may not be in the consumer’s best interest. As a result of section 989J of the Dodd-Frank Act, one type of annuity—the indexed annuity—is to be regulated by states as an insurance product, rather than regulated by SEC as a security, under certain conditions. SEC’s pending study related to the applicable standards of care for broker-dealers and investment advisers will not look at issues of insurance that fall outside of SEC’s jurisdiction. NAIC has not undertaken a similar study regarding consumer understanding of the standard of care for insurance agents. As we reported in the past, financial markets function best when consumers understand how financial service providers and products work and know how to choose among them. Given the evidence of consumer confusion about differing standards of care and given the increased risks that certain insurance products can pose, there could be benefits to an NAIC review of consumers’ understanding of standards of care for high-risk insurance products. Individuals who provide financial planning services may use a variety of titles when presenting themselves to the public, including financial planner, financial consultant, and financial adviser, among many others.
However, evidence suggests that the different titles financial professionals use can be confusing to consumers. The 2008 RAND study found that even experienced investors were confused about the titles used by broker-dealers and investment advisers, including financial planner and financial adviser. Similarly, in consumer focus groups of investors conducted by SEC in 2005 as part of a rulemaking process, participants were generally unclear about the distinctions among titles, including broker, investment adviser, and financial planner. In addition, a representative of one consumer advocacy group has expressed concern that some financial professionals may use as a marketing tool titles suggesting that they provide financial planning services, when in fact they are only selling products. One industry group, the Financial Planning Coalition, also has noted that some individuals may hold themselves out as financial planners without meeting minimum training or ethical requirements. Federal and state regulators told us they generally focused their oversight and enforcement actions on financial planners’ activities rather than the titles they use. Moreover, NASAA has said that no matter what title financial planners use, most are required to register as investment adviser representatives and must satisfy certain competency requirements, including passing an examination or obtaining a recognized professional designation. Financial planners’ professional designations are typically conferred by a professional or trade organization. These designations may indicate that a planner has passed an examination, met certain educational requirements, or had related professional experience. Some of these designations entail extensive classroom training and examinations and include codes of ethics, with the ability to revoke the designation in the event of violations.
State securities regulators view five specific designations as meeting or exceeding the registration requirements for investment adviser representatives, according to NASAA, and allow these professional designations to satisfy necessary competency requirements for prospective investment adviser representatives. For example, one of these five designations requires a bachelor’s degree from an accredited college or university, 3 years of full-time personal financial planning experience, a certification examination, and 30 hours of continuing education every 2 years. The criteria used by organizations that grant professional designations for financial professionals vary greatly. FINRA has stated that while some designations require formal certification procedures, including examinations and continuing professional education credits, others may merely signify that membership dues have been paid. The Financial Planning Coalition and The American College, a nonprofit educational institution that confers several financial designations, similarly told us that privately conferred designations range from those with rigorous competency, practice, and ethical standards and enforcement to those that can be obtained with minimal effort and no ongoing evaluation. As noted earlier, designations that imply expertise or special training in advising senior citizens or elderly investors have received particular attention from regulators. A joint report of SEC, FINRA, and NASAA described cases in which financial professionals targeted seniors by using senior-specific designations that implied that they had a particular expertise for senior investors, when in fact they did not; as noted earlier, NASAA and NAIC have developed a model rule to address the issue. The report also noted these professionals targeted seniors through the use of so-called free-lunch seminars, where free meals are offered in exchange for attendance at a financial education seminar.
However, the focus of the seminars was actually on the sale of products rather than the provision of financial advice. Given the large number of designations financial planners may use, concerns exist that consumers may have difficulty distinguishing among them. To alleviate customer confusion, FINRA has developed a Web site for consumers that provides the required qualifications and other information about the designations used by securities professionals. The site lists more than 100 professional designations, 5 of which include the term “financial planner,” and 24 of which contain comparable terms such as financial consultant or counselor. The American College told us that it had identified 270 financial services designations. Officials from NASAA, NAIC, and a consumer advocacy organization told us that consumers might have difficulty distinguishing among the various designations. Officials from The American College told us that the number of designations itself was not necessarily a cause for concern, but rather consumers’ broadly held misperception that all designations or credentials are equal. To help address these concerns, FINRA plans to expand its Web site on professional designations to include several dozen additional designations related to insurance. However, FINRA officials noted that consumers’ use of this tool has been limited. For example, in 2009, the site received only 55,765 visits. A recent national study of the financial capability of American adults sponsored by FINRA found that only 15 percent of adults who had used a financial professional in the last 5 years claimed to have checked the background, registration, or license of a financial professional. 
In addition, SEC staff acknowledged that there have been concerns about confusing designations, and SEC’s October 2010 changes to investment adviser disclosure requirements mandate that investment adviser representatives who list professional designations and certifications in their background information also provide the qualifications needed for these designations, so that the consumer can understand the value of the designation for the services being provided. Section 917 of the Dodd-Frank Act includes a requirement that SEC conduct a study identifying the existing level of financial literacy among retail investors, including the most useful and understandable relevant information that they need to make informed financial decisions before engaging a financial intermediary. While the section does not specifically mention the issue of financial planners’ titles and designations, the confusion we found to exist could potentially be addressed or mitigated if SEC incorporated this issue into its overall review of financial literacy among investors. SEC staff told us that at this time its review would not likely address this issue, although it would address such things as the need for conducting background checks on financial professionals. Financial markets function best when consumers have information sufficient to understand and assess financial service providers and products. Including financial planners’ use of titles and designations in SEC’s financial literacy review could provide useful information on the implications of consumers’ confusion on this issue. Available data do not show a large number of consumer complaints and enforcement actions involving financial planners, but the exact extent to which financial planners may be a source of problems is unknown. We were able to find limited information on consumer complaints from various agencies. 
For example, representatives of FTC and the Better Business Bureau said that they had received relatively few complaints related to financial planners. FTC staff told us that a search in its Consumer Sentinel Network database for the phrase “financial planner” found 141 complaints in the 5-year period from 2005 through 2010 but that only a handful of these appeared to actually involve activity connected to the financial planning profession. The staff added that additional searches on other titles possibly used by financial planners, such as financial consultant and personal financial adviser, did not yield significant additional complaints. In addition, a representative of the Better Business Bureau told us that it had received relatively few complaints related to financial planners, although the representative noted that additional complaints might exist in broader categories, such as “financial services.” Consumer complaint data may not be an accurate gauge of the extent of problems. Complaints may represent only a small portion of potential problems and complaints related to “financial planners” may not always be recorded as such. As we have previously reported, consumers also may not always know where they can report complaints. At the same time, some complaints that are made may not always be valid. SEC has limited information on the extent to which the activities of financial planners may be causing consumers harm. The agency does record and track whether federally and state-registered investment adviser firms provide financial planning services, but its data tracking systems for complaints, examination results, and enforcement actions are not programmed to readily determine and track whether the complaint, result, or action was specifically related to a financial planner or financial planning service. 
For example, SEC staff told us the number of complaints about financial planners would be undercounted in the data system that receives and tracks public inquiries, known as the Investor Response Information System, because the code for financial planners would likely be used only if staff could not identify whether the person (or firm) was an investment adviser or broker-dealer. In addition, the data system that SEC uses to record examination results, known as the Super Tracking and Reporting System, does not allow the agency to identify and extract examination results specific to the financial planning services of investment advisers. However, SEC staff told us that a review of its Investor Response Information System identified 51 complaints or inquiries that had been recorded under its code for issues related to "financial planners" between November 2009 and October 2010. SEC staff told us that the complaints most often involved allegations of unsuitable investments or fraud, such as misappropriation of funds. A review of a separate SEC database called Tips, Complaints, and Referrals—an interim system that was implemented in March 2010—found 124 allegations of problems possibly related to financial planners from March 2010 to October 2010. SEC staff told us that they did not have comprehensive data on the extent of enforcement activities related to financial planners per se. In addition, NASAA said that states generally do not track enforcement data specific to financial planners. At our request, SEC and NASAA provided us with examples of enforcement actions related to individuals who held themselves out as financial planners. Using a keyword search, SEC identified 10 such formal enforcement actions between August 2009 and August 2010. According to SEC documents, these cases involved allegations of such activities as defrauding clients through marketing schemes, receiving kickbacks without making proper disclosures, and misappropriation of client funds. 
Although NASAA also did not have comprehensive data on enforcement activities involving financial planners, representatives provided us with examples of 36 actions brought by 30 states from 1986 to 2010. These cases involved allegations of such things as the sale of unsuitable products, fraudulent misrepresentation of qualifications, failure to register as an investment adviser, and misuse of client funds for personal expenses. Because of limitations in how data are gathered and tracked, SEC and state securities regulators are not currently able to readily determine the extent to which financial planning services may be causing consumers harm. NASAA officials told us that, as with SEC, state securities regulators did not typically or routinely track potential problems specific to financial planners. SEC and NASAA representatives told us that they had been meeting periodically in recent months to prepare for the transition from federal to state oversight of certain additional investment adviser firms, as mandated under the Dodd-Frank Act, but they said that oversight of financial planners in particular had not been part of these discussions. SEC staff have noted that additional tracking could consume staff time and other resources. They also said that because there are no laws that directly require registration, recordkeeping, and other responsibilities of “financial planners” per se, tracking such findings relating to those entities would require expenditure of resources on something that SEC does not have direct responsibility to oversee. Yet as we have reported in the past, while we recognize the need to balance the cost of data collection efforts against the usefulness of the data, a regulatory system should have data sufficient to identify risks and problem areas and support decisionmaking. 
Given the significant growth in the financial planning industry, ongoing concerns about potential conflicts of interest, and consumer confusion about standards of care, regulators may benefit from identifying ways to get better information on the extent of problems specifically involving financial planners and financial planning services. Over the past few years, a number of stakeholders—including consumer groups, FINRA, and trade associations representing financial planners, securities firms, and insurance firms—have proposed different approaches to the regulation of financial planners. Following are four of the most prominent approaches, each of which has both advantages and disadvantages. In 2009, the Financial Planning Coalition—composed of the Certified Financial Planner Board of Standards, the Financial Planning Association, and the National Association of Personal Financial Advisors—proposed that Congress establish a professional standards-setting oversight board for financial planners. According to the coalition, its proposed legislation would establish federal regulation of financial planners by allowing SEC to recognize a financial planner oversight board that would set professional standards for and oversee the activities of individual financial planners, although not financial planning firms. For example, the board would have the authority to establish baseline competency standards in the areas of education, examination, and continuing education, and would be required to establish ethical standards designed to prevent fraudulent and manipulative acts and practices. It would also have the authority to require registration or licensing of financial planners and to perform investigative and disciplinary actions. Under the proposal, states would retain antifraud authority over financial planners as well as full oversight for financial planners' investment advisory activity. 
However, states would not be allowed to impose additional licensing or registration requirements for financial planners or set separate standards of conduct. Supporters of a new oversight board have noted that its structure and governance would be analogous to the Public Company Accounting Oversight Board, a private nonprofit organization subject to SEC oversight that in turn oversees the audits of public companies that are subject to securities laws. According to the Financial Planning Coalition, a potential advantage of this approach is that it would treat financial planning as a distinct profession and would regulate across the full spectrum of activities in which financial planners may engage, including activities related to investments, taxes, education, retirement planning, estate planning, insurance, and household budgeting. Proponents argue that a financial planning oversight board would also help ensure high standards and consistent regulation for all financial planners by establishing common standards for competency, professional practices, and ethics. However, many securities regulators and financial services trade associations with whom we spoke said that they believe such a board would overlap with and in many ways duplicate existing state and federal regulations, which already cover virtually all of the products and services that a financial planner provides. Some added that the board would entail unnecessary additional financial costs and administrative burdens for the government and regulated entities. In addition, some opponents of this approach question whether “financial planning” should be thought of as a distinct profession that requires its own regulatory structure, noting that financial planning is not easily defined and can span multiple professions, including accounting, insurance, investment advice, and law. 
One consumer group also noted that the regulation of individuals and professions is typically a state rather than a federal responsibility. Finally, we note that the analogy to the Public Company Accounting Oversight Board may not be apt. That board was created in response to a crisis involving high-profile bankruptcies and investor losses caused in part by inadequacies among public accounting firms. In the case of financial planners, there is limited evidence of an analogous crisis or, as noted earlier, of severe harm to consumers. A number of proposals over the years have considered having FINRA or a newly created SRO supplement SEC oversight of investment advisers. These proposals date back to at least 1963, when an SEC study recommended that all registered investment advisers be required to be a member of an SRO. In 1986, the National Association of Securities Dealers, a predecessor to FINRA, explored the feasibility of examining the investment advisory activities of members who were also registered as investment advisers. The House of Representatives passed a bill in 1993 that would have amended the Advisers Act to authorize the creation of an “inspection only” SRO for investment advisers, although the bill did not become law. In 2003, SEC requested comments on whether one or more SROs should be established for investment advisers, citing, among other reasons, concerns that the agency’s own resources were inadequate to address the growing numbers of advisers. However, SEC did not take further action. Section 914 of the Dodd-Frank Act required SEC to issue a study in January 2011 on the extent to which one or more SROs for investment advisers would improve the frequency of examinations of investment advisers. According to FINRA, the primary advantage of augmenting investment adviser oversight with an SRO is that doing so would allow for more frequent examinations, given the limited resources of states and SEC. 
The Financial Services Institute, an advocacy organization for independent broker-dealers and financial advisers, has stated that an industry-funded SRO with the resources necessary to appropriately supervise and examine all investment advisers would close the gap that exists between the regulation of broker-dealers and investment advisers. FINRA said that it finds this gap troubling given the overlap between the two groups (approximately 88 percent of all registered advisory representatives are also broker-dealer representatives). FINRA adds that any SRO should operate subject to strong SEC oversight and that relieving SEC of some of its responsibilities for investment advisers would free up SEC resources for other regulatory activities. However, NASAA, some state securities regulators, and one academic with whom we spoke opposed adding an SRO component to the regulatory authority of investment advisers. NASAA said it believed that investment adviser regulation is a governmental function that should not be outsourced to a private, third-party organization that lacks the objectivity, independence, expertise, and experience of a government regulator. Further, NASAA said it is concerned with the lack of transparency associated with regulation by SROs because, unlike government regulators, they are not subject to open records laws through which the investing public can obtain information. Two public interest groups, including the Consumer Federation of America, have asserted that one SRO—FINRA—has an "industry mindset" that has not always put consumer protection at the forefront. In addition, the Investment Adviser Association and two other organizations we interviewed have noted that funding an SRO and complying with its rules can impose additional costs on a firm. Proposals have been made to extend coverage of the fiduciary standard of care to all those who provide financial planning services. 
Some consumer groups and others have stated that a fiduciary standard should apply to anyone who provides personalized investment advice about securities to retail customers, including insurance agents who recommend securities. The Financial Planning Coalition has proposed that the fiduciary standard apply to all those who hold themselves out as financial planners. Proponents of extending the fiduciary standard of care, which also include consumer groups and NASAA, generally maintain that consumers should be able to expect that financial professionals they work with will act in their best interests. They say that a fiduciary standard is more protective of consumers’ interests than a suitability standard, which requires only that a product be suitable for a consumer rather than in the consumer’s best interest. In addition, the Financial Planning Coalition notes that extending a fiduciary standard would somewhat reduce consumer confusion about financial planners that are covered by the fiduciary standard in some capacities (such as providing investment advice) but not in others (such as selling a product). However, some participants in the insurance and broker-dealer industries have argued that a fiduciary standard of care is vague and undefined. They say that replacing a suitability standard with a fiduciary standard could actually weaken consumer protections since the suitability of a product is easier to define and enforce. Opponents also have argued that complying with a fiduciary standard would increase compliance costs that in turn would be passed along to consumers or otherwise lead to fewer consumer choices. The American College has proposed clarifying the credentials and standards of financial professionals, including financial planners. In particular, it has proposed creating a working group of existing academic and practice experts to establish voluntary credentialing standards for financial professionals. 
As noted previously, consumers may be unable to distinguish among the various financial planning designations that exist and may not understand the requirements that underpin them. Clarifying the credentials and standards of financial professionals could conceivably take the form of prohibiting the use of certain designations, as has been done for senior-specific designations in some states, or establishing minimum education, testing, or work experience requirements needed to obtain a designation. The American College has stated that greater oversight of such credentials and standards could provide a “seal of approval” that would generally raise the quality and competence of financial professionals, including financial planners, help consumers distinguish among the various credentials, and help screen out less qualified or reputable players. However, the ultimate effectiveness of such an approach is not clear, since the extent to which consumers take designations into account when selecting or working with financial planners is unknown, as is the extent of the harm caused by misleading designations. In addition, implementation and ongoing monitoring of financial planners’ credentials and standards could be challenging. Further, the issue of unclear designations has already been addressed to some extent—for example, as noted earlier, some states regulate the use of certain senior-specific designations and allow five professional designations to satisfy necessary competency requirements for prospective investment adviser representatives. State securities regulators also have the authority to pursue the misleading use of credentials through their existing antifraud authority. In general, a majority of the regulatory agencies, consumer groups, academics, trade associations, and individual financial services companies with which we spoke did not favor substantial structural change in the regulation of financial planners. 
In particular, few supported an additional oversight body, which was generally seen as duplicative of existing regulation. Some stakeholders in the securities and insurance industries noted that given the dynamic financial regulatory environment under way as a result of the Dodd-Frank Act—such as creation of a new Bureau of Consumer Financial Protection—more time should pass before additional regulatory changes related to financial planning services were considered. Several industry associations also noted that opportunities existed for greater enforcement of existing law and regulation, as discussed earlier. Existing statutes and regulations appear to cover the great majority of financial planning services, and individual financial planners nearly always fall under one or more regulatory regimes, depending on their activities. While no single law governs the broad array of activities in which financial planners may engage, given available information, it does not appear that an additional layer of regulation specific to financial planners is warranted at this time. At the same time, as we have previously reported, more robust enforcement of existing laws could strengthen oversight efforts. In addition, there are some actions that can be taken that may help address consumer protection issues associated with the oversight of financial planners. First, as we have reported, financial markets function best when consumers understand how financial providers and products work and know how to choose among them. Yet consumers may be unclear about standards of care that apply to financial professionals, particularly when the same individual or firm offers multiple services that have differing standards of care. As such, consumers may not always know whether and when a financial planner is required to serve their best interest. 
While SEC is currently addressing the issue of whether the fiduciary standard of care should be extended to broker-dealers when they provide personalized investment advice about securities, the agency is not addressing whether this extension should also apply to insurance agents, who generally fall outside of SEC’s jurisdiction. Sales practices involving some high-risk insurance products, such as annuities, have drawn attention from federal and state regulators. A review by NAIC of consumers’ understanding of the standards of care with regard to the sale of insurance products could provide information on the extent of consumer confusion in the area and actions needed to address the issue. Second, we have seen that financial planners can adopt a variety of titles and designations. The different designations can imply different types of qualifications, but consumers may not understand or distinguish among these designations, and thus may be unable to properly assess the qualifications and expertise of financial planners. SEC’s recent changes in this area—requiring investment advisers to disclose additional information on professional designations and certifications they list—should prove beneficial. Another opportunity lies in SEC’s mandated review of financial literacy among investors. Incorporating issues of consumer confusion about financial planners’ titles and designations into that review could assist the agency in assessing whether any further changes are needed in disclosure requirements or other related areas. Finally, SEC has limited information about the nature and extent of problems specifically related to financial planners because it does not track complaints, examination results, and enforcement activities associated with financial planners specifically, and distinct from investment advisers as a whole. However, a regulatory system should have data sufficient to identify risks and problem areas and support decisionmaking. 
SEC staff have noted that additional tracking could require additional resources, but other opportunities may also exist to gather additional information on financial planners. Because financial planning is a growing industry and has raised certain consumer protection issues, regulators could potentially benefit from better information on the extent of problems specifically involving financial planners and financial planning services. We recommend that the National Association of Insurance Commissioners, in concert with state insurance regulators, take steps to assess consumers' understanding of the standards of care with regard to the sale of insurance products, such as annuities, and take actions as appropriate to address problems revealed in this assessment. We also recommend that the Chairman of the Securities and Exchange Commission direct the Office of Investor Education and Advocacy, Office of Compliance Inspections and Examinations, Division of Enforcement, and other offices, as appropriate, to:

- Incorporate into SEC's ongoing review of financial literacy among investors an assessment of the extent to which investors understand the titles and designations used by financial planners and any implications a lack of understanding may have for consumers' investment decisions; and

- Collaborate with state securities regulators in identifying methods to better understand the extent of problems specifically involving financial planners and financial planning services, and take actions to address any problems that are identified.

We provided a draft of this report for review and comment to FINRA, NAIC, NASAA, and SEC. These organizations provided technical comments, which we incorporated, as appropriate. In addition, NAIC provided a written response, which is reprinted in appendix II. 
NAIC said it generally agreed with the contents of the draft report and would give consideration to our recommendation regarding consumers’ understanding of the standards of care with regard to the sale of insurance products. NASAA also provided a written response, which is reprinted in appendix III. In its response, NASAA said it agreed that a specific layer of regulation for financial planners was unnecessary and provided additional information on some aspects of state oversight of investment advisers. NASAA also said that it welcomed the opportunity to continue to collaborate with SEC to identify methods to better understand and address problems specifically involving financial planners, as we recommended. In addition, NASAA expanded upon the reasons for its opposition to proposals that would augment oversight of investment advisers with an SRO. We are sending copies of this report to interested congressional committees, the Chief Executive Officer of FINRA, Chief Executive Officer of NAIC, Executive Director of NASAA, and the Chairman of SEC. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-8678 or cackleya@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs are on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. Our reporting objectives were to address (1) how financial planners are regulated and overseen at the federal and state levels, (2) what is known about the effectiveness of regulation of financial planners and what regulatory gaps or overlap may exist, and (3) alternative approaches for the regulation of financial planners and the advantages and disadvantages of these approaches. 
For background information, we obtained estimates for 2000 and 2008, and projections for 2018, from the Bureau of Labor Statistics on the number of individuals who reported themselves as “personal financial advisers,” a term that the agency said was interchangeable with “financial planner.” The bureau derived these estimates from the Occupational Employment Statistics survey and the Current Population Survey. According to the bureau, the Occupational Employment Statistics’ estimates for financial planners have a relative standard error of 1.9 percent, and the median wage estimate for May 2009 has a relative standard error of 1.5 percent. Because the overall employment estimates used are developed from multiple surveys, it was not feasible for the bureau to provide the relative standard errors for these financial planner employment statistics. To estimate the number of households that used financial planners, we analyzed 2007 data from the Board of Governors of the Federal Reserve’s Survey of Consumer Finances. This survey is conducted every three years to provide detailed information on the finances of U.S. households. Because the survey is a probability sample based on random selections, the sample is only one of a large number of samples that might have been drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval (e.g., plus or minus 2.5 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples that could have been drawn. In this report, for this survey, all percentage estimates have 95 percent confidence intervals that are within plus or minus 2.5 percentage points from the estimate itself. 
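The confidence-interval arithmetic described above can be illustrated with the standard normal-approximation formula for a survey proportion. The sketch below is a minimal illustration that assumes simple random sampling and uses made-up figures; the actual Survey of Consumer Finances employs a more complex sample design with replicate weights, so its published intervals are computed differently.

```python
import math

def proportion_ci(p_hat, n, z=1.96):
    """Approximate 95 percent confidence interval for a survey proportion,
    assuming simple random sampling (illustrative only; complex survey
    designs require design-based variance estimation instead)."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)  # standard error of the proportion
    return p_hat - z * se, p_hat + z * se

# Hypothetical figures: a 22 percent estimate from roughly 4,400 households.
low, high = proportion_ci(0.22, 4400)
print(f"95% CI: {low:.3f} to {high:.3f}")  # → 95% CI: 0.208 to 0.232
```

With these hypothetical figures the half-width of the interval is about 1.2 percentage points, comfortably inside the plus-or-minus 2.5 percentage points cited above.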
To identify how financial planners are regulated and overseen at the federal and state levels, we identified and reviewed, on the federal level, federal laws, regulations, and guidance applicable to financial planners, the activities in which they engage, and their marketing materials, titles, and designations. We also reviewed relevant SEC interpretive releases, such as IA Rel. No. 1092, Applicability of the Investment Advisers Act to Financial Planners, Pension Consultants, and Other Persons Who Provide Investment Advisory Services as a Component of Other Financial Services. We also discussed the laws and regulations relevant to financial planners in meetings with staff of the Securities and Exchange Commission (SEC), Financial Industry Regulatory Authority (FINRA), Department of Labor, and Internal Revenue Service. We also interviewed two legal experts and reviewed a legal compendium on the regulation of financial planners. At the state level, we interviewed representatives from the North American Securities Administrators Association (NASAA) and the National Association of Insurance Commissioners (NAIC) and reviewed model regulations developed by these agencies. In addition, we selected five states—California, Illinois, North Carolina, Pennsylvania, and Texas—for a more detailed review. We chose these states because they had a large number of registered investment advisers and varying approaches to the regulation of financial planners, and represented geographic diversity. For each of these states, we reviewed selected laws and regulations related to financial planners, which included those related to senior-specific designations and insurance transactions, and we interviewed staff at each state’s securities and insurance agencies. To identify what is known about the effectiveness of the regulation of financial planners and what regulatory gaps or overlap may exist, we reviewed relevant federal and state laws, regulations and guidance. 
In addition, we spoke with representatives of the federal and state agencies cited above, as well as FINRA and organizations that represent or train financial planners, including the Financial Planning Coalition, The American College, and the CFA Institute; organizations that represent the financial services industry, including the Financial Services Institute, Financial Services Roundtable, Securities Industry and Financial Markets Association, Investment Adviser Association, American Society of Pension Professionals & Actuaries, National Association of Insurance and Financial Advisors, American Council of Life Insurers, Association for Advanced Life Underwriting, American Institute of Certified Public Accountants, and American Bankers Association; and organizations representing consumer interests, including the Consumer Federation of America and AARP. We also spoke with selected academic experts knowledgeable about these issues. In addition, we reviewed relevant studies and other documentary evidence, including a 2008 RAND Corporation study commissioned by SEC, "Investor and Industry Perspectives on Investment Advisers and Broker-Dealers"; "Results of Investor Focus Group Interviews About Proposed Brokerage Account Disclosures," sponsored by SEC; the results of the FPA Fiduciary Task Force, "Final Report on Financial Planner Standards of Conduct"; "U.S. Investors & The Fiduciary Standard: A National Opinion Survey," sponsored by AARP, the Consumer Federation of America, NASAA, the Investment Adviser Association, the Certified Financial Planner Board of Standards, the Financial Planning Association, and the National Association of Personal Financial Advisors; and the 2009 National Financial Capability Study, commissioned by FINRA. We determined that the reliability of these studies was sufficient for our purposes. 
In addition, we reviewed relevant information on the titles and designations used by financial planners, including FINRA's Web site that provides the required qualifications and other information about the designations used by securities professionals. We also obtained and reviewed available data on complaints and selected enforcement actions related to financial planners from the Federal Trade Commission, Better Business Bureau, and SEC. From the Federal Trade Commission, we collected complaint data from its Consumer Sentinel Network database, using a keyword search of the term "financial planner" for complaints filed from 2005 to 2010. From the Better Business Bureau, we collected the number of complaints about the financial planning industry received in 2009. From SEC, we collected complaints from the agency's Investor Response Information System that had been coded as relating to "financial planners" from November 2009 to October 2010. We also reviewed data from SEC's Tips, Complaints, and Referrals database that resulted from a keyword search for the terms "financial planner," "financial adviser," "financial advisor," "financial consultant," and "financial counselor" from March 2010 to October 2010. In addition, at our request, SEC and NASAA provided us anecdotally with examples of enforcement actions related to individuals who held themselves out as financial planners. SEC identified 10 formal enforcement actions between August 2009 and August 2010, and NASAA provided us selected examples of state enforcement actions involving financial planners from 1986 to 2010 from 30 states. We gathered information on SEC- and state-registered investment advisers from SEC's Investment Adviser Registration Database. FINRA did not provide us with data on complaints, examination results, or enforcement actions specific to financial planners; FINRA officials told us they do not track these data specific to financial planners. 
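The keyword searches described above amount to case-insensitive substring matching over complaint records. The sketch below is purely illustrative: the records, field names, and matching logic are stand-ins and do not reflect the actual schema of Consumer Sentinel, the Tips, Complaints, and Referrals system, or any other agency database.

```python
# Hypothetical search terms mirroring those listed in the text.
SEARCH_TERMS = [
    "financial planner",
    "financial adviser",
    "financial advisor",
    "financial consultant",
    "financial counselor",
]

# Hypothetical complaint records; real systems have their own schemas.
complaints = [
    {"id": 1, "text": "My financial planner sold me an unsuitable annuity."},
    {"id": 2, "text": "Complaint about a mortgage broker."},
    {"id": 3, "text": "A Financial Consultant misrepresented his credentials."},
]

def matches(record, terms):
    """Case-insensitive substring match against any of the search terms."""
    text = record["text"].lower()
    return any(term in text for term in terms)

hits = [c["id"] for c in complaints if matches(c, SEARCH_TERMS)]
print(hits)  # → [1, 3]
```

As the report notes, searches of this kind undercount complaints that were recorded under broader categories or with different wording, which is one reason complaint counts may not accurately gauge the extent of problems.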
To identify alternative approaches for the regulation of financial planners and their advantages and disadvantages, we conducted a search for legislative and regulatory proposals related to financial planners, which have been made by Members of Congress, consumer groups, and representatives of the financial planning, securities, and insurance industries. We identified and reviewed position papers, studies, public comment letters, congressional testimonies, and other documentary sources that address the advantages and disadvantages of these approaches. In addition, we solicited views on these approaches from representatives of the wide range of organizations listed above, including organizations that represent financial planners, financial services companies, and consumers, as well as state and federal government agencies and associations and selected academic experts. We conducted this performance audit from June 2010 through January 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Jason Bromberg (Assistant Director), Sonja J. Bensen, Jessica Bull, Emily Chalmers, Patrick Dynes, Ronald Ito, Sarah Kaczmarek, Marc Molino, Linda Rego, and Andrew Stavisky made key contributions to this report.
Consumers are increasingly turning for help to financial planners: individuals who help clients meet their financial goals by providing assistance with such things as selecting investments and insurance products, and managing tax and estate planning. The Dodd-Frank Wall Street Reform and Consumer Protection Act mandated that GAO study the oversight of financial planners. This report examines (1) how financial planners are regulated and overseen at the federal and state levels, (2) what is known about the effectiveness of this regulation, and (3) the advantages and disadvantages of alternative regulatory approaches. To address these objectives, GAO reviewed federal and state statutes and regulations, analyzed complaint and enforcement activity, and interviewed federal and state government entities and organizations representing financial planners, various other arms of the financial services industry, and consumers. There is no specific, direct regulation of "financial planners" per se at the federal or state level, but various laws and regulations apply to most of the services they provide. Financial planners are primarily regulated as investment advisers by the Securities and Exchange Commission (SEC) and the states, and are subject to laws and regulations governing broker-dealers and insurance agents when they act in those capacities. Federal and state agencies have regulations on marketing and the use of titles and designations that also can apply to financial planners. The regulatory structure applicable to financial planners covers the great majority of their services, but the attention paid to enforcing existing regulation can vary, and certain consumer protection issues remain. First, consumers may be unclear about when a financial planner is required to serve the client's best interest, particularly when the same financial planner provides multiple services associated with different standards of care. 
SEC is studying these issues with regard to securities transactions, but no complementary review is under way by the National Association of Insurance Commissioners (NAIC) related to the sale of high-risk insurance products. Second, financial planners can adopt numerous titles and designations, which vary greatly in the expertise or training that they signify, but consumers may not understand or be able to distinguish among them. SEC has a mandated review of financial literacy among investors under way, and incorporating this issue into that review could assist in assessing whether further changes are needed. Finally, the extent of problems related to financial planners is not fully known because SEC generally does not track data on complaints, examination results, and enforcement activities associated with financial planners specifically, as distinct from investment advisers as a whole. A regulatory system should have data to identify risks and problem areas, and given that financial planning is a growing industry that has raised certain consumer protection issues, regulators could benefit from better information on the extent of problems specifically involving financial planning services. A number of stakeholders have proposed different approaches to the regulation of financial planners, including (1) creation of a federally chartered board overseeing financial planners as a distinct profession; (2) augmenting oversight of investment advisers with a self-regulatory organization; (3) extending the fiduciary standard of care to more financial services professionals; and (4) specifying standards for financial planners and the designations that they use. 
While stakeholders' views vary, a majority of the regulatory agencies and financial services industry representatives GAO spoke with did not favor significant structural change to the overall regulation of financial planners because, they said, existing regulation provides adequate coverage of most financial planning activities. Given available information, an additional layer of regulation specific to financial planners does not appear to be warranted at this time. GAO recommends that (1) NAIC assess consumers' understanding of the standards of care associated with the sale of insurance products, (2) SEC assess investors' understanding of financial planners' titles and designations, and (3) SEC collaborate with the states to identify methods to better understand problems associated specifically with the financial planning activities of investment advisers. NAIC said it would consider GAO's recommendation, and SEC provided no comments.
As of September 1996, there were over 25,000 full-time physicians employed by the federal government. (See appendix IV.) Most of the physicians paid under title 5 were with the Departments of Health and Human Services (HHS) and Defense (DOD). Physicians paid under title 37 were with HHS’ Public Health Service Commissioned Corps or on military duty. Physicians paid under title 38 were with the Department of Veterans Affairs (VA). Although not eligible to receive PCAs under title 5, physicians paid under titles 37 and 38 are eligible to receive other types of special pay for physicians. Also, under a delegation of authority from OPM, some HHS physicians who receive basic pay under title 5 are eligible to receive special pay under title 38. As we agreed with you, our principal objectives were to (1) compare amounts paid to federal physicians under title 5 with amounts paid to physicians under other sections of the U.S. Code and with physicians in the private sector; (2) determine what other types of pay and benefits federally employed physicians receive; and (3) identify ongoing efforts by federal agencies that affect or have the potential to affect physicians’ pay. In addition to the primary comparisons required by our objectives, we also developed additional analyses of physicians’ compensation, which are discussed in appendix I. In our previous report on federal/private sector pay comparisons, we noted that experts in labor market analysis suggested that federal/private compensation comparisons that focus exclusively on pay may be misleading. A more complete analysis of total compensation would be needed to consider factors such as differences in pay plans and job responsibilities, federal restrictions limiting amounts of either basic or special pay, working conditions, job satisfaction, and risks of being laid off. This would apply to comparisons among federal positions as well. 
Because much of this information was not available from studies of physicians’ pay and because of the time constraints for completing this review, we only obtained pay-type information for federal and private sector physicians and did not assess these other factors. In addition, as agreed, we did not evaluate the significance of recruitment or retention problems upon which PCAs are based. Therefore, we did not attempt to make conclusions or recommendations on the sufficiency, size, or continued need for PCAs under title 5. On the basis of our preliminary work, we agreed to obtain physicians’ pay and benefit information for full-time federal physicians paid under titles 5, 37, and 38 of the U.S. Code. In doing our work, we interviewed officials from HHS, DOD, VA, OPM, and the Office of Management and Budget (OMB) to obtain descriptive information on the various types of pay and benefits that physicians received and on recent actions that affect or have the potential to affect physicians’ pay. HHS, VA, DOD, and the Commissioned Corps provided us with payroll information, which we used to make our comparisons. Unless otherwise stated, except for military physicians, federal physicians’ pay data in this report are for calendar year 1996. We did not verify the pay information we obtained. Our scope and methodology are described in greater detail in appendix III. We also purchased and reviewed several studies on physicians’ compensation that were prepared by private consulting firms. These studies contained pay information for physicians in various medical specialties who were employed primarily in group practices, health maintenance organizations (HMO), and hospitals. Except for pay data for physicians in various medical specialties, these studies did not contain information that would allow us to compare the pay of the private sector physicians with the pay received by federal physicians. 
We requested comments on a draft of this report from the Secretaries of HHS, Defense, and VA, and the Directors of OPM and OMB. The agencies’ comments are discussed at the end of this letter. We performed our review from December 1996 to August 1997 in accordance with generally accepted government auditing standards. We used several measures—averages, medians, and percentiles—to portray and compare the pay federal physicians received under titles 5, 37, and 38. Our principal analyses consisted of comparisons of (1) physicians’ pay—a combination of basic and special pay, (2) basic pay, (3) special pay, and (4) federal and private sector physicians’ pay for selected medical specialties. When measured by the average, physicians’ pay for HHS physicians who did not receive title 38 special pay was less than physicians’ pay of HHS and VA physicians who received special payments under title 38. In contrast, average physicians’ pay of HHS physicians exceeded the average pay of military and Commissioned Corps physicians who were paid under title 37. Table 1 shows, for government physicians paid under titles 5, 37, and 38, average amounts of basic and special pay combined. Where available, we also included information on maximum pay, medians, and pay at the 25th and 75th percentiles. According to DOD officials, the majority of military physicians do not make the military a career and usually leave active military duty after fulfilling all required service obligations for education and training. Approximately 70 percent of the physician force is evenly distributed between the O-3 (entry level) and O-4 ranks. This skews the presentation of “average salaries” of military physicians to relatively low amounts. Similar to total physicians’ pay, basic pay—one component of physicians’ pay—was the highest for HHS and VA physicians and lowest for physicians in the military. 
HHS physicians paid under title 5 and VA physicians paid under title 38 had average basic pay of $87,815 and $89,350, respectively. Average basic pay for physicians paid under title 37 was $43,110 for physicians in the military and $54,510 for Commissioned Corps physicians. Average basic pay received by physicians paid under titles 5, 37, and 38 of the U.S. Code is shown in table 2. Where available, maximum basic pay authorized or received by these physicians as well as median pay amounts and amounts paid to physicians at the 25th and 75th percentiles are also shown. For physicians receiving special pay—the second component of physicians’ pay—average PCAs received by physicians under title 5 were lower than special pay received under titles 37 and 38. Large differences in average special pay to physicians—over $20,000—existed between HHS physicians who received PCAs ($15,760) and VA physicians and HHS physicians who received special pay under title 38 ($39,585 and $38,950, respectively). The average PCA of HHS physicians ($15,760) was also lower than the average special pay received by military and Commissioned Corps physicians, $35,190 and $43,260, respectively. Table 3 shows special pay averages for federal physicians. Where available, maximum special pay received by these physicians as well as the medians and amounts paid to physicians at the 25th and 75th percentiles are also shown. Unlike HHS and DOD physicians who received only PCAs, physicians paid under titles 37 and 38 were eligible to receive several types of special pay. However, not all physicians can receive each type of special pay. Table 4 shows, for the five different types of special pay paid under title 37, the number of Commissioned Corps and military physicians who received each type of special pay, the average amount of special pay received, and authorized maximum special pay amounts. 
Table 5 shows the authorized maximum amounts for each of the seven types of special pay paid under title 38 as the actual data were not readily available. Each type of special pay is described in more detail in appendix II. Special pay received by individual physicians paid under titles 37 and 38 could differ significantly because not all physicians received the same types or amounts of special pay. PCAs received by title 5 physicians varied less because PCAs may not exceed $20,000. In 1996, of the 1,193 full-time HHS physicians who had a full year of service, 830 received only PCA pay; 113 received only physicians’ special pay under title 38; and 135 received both types of special pay, but not at the same time during the year. Of the full-time physicians, 115, or less than 10 percent, did not receive PCAs or title 38 special pay. HHS and DOD physicians paid under title 5 were not eligible for specialty pay per se. Instead, these physicians could be eligible for PCAs based on agencies’ determinations that significant recruitment and retention problems existed for categories of physicians. According to PCA regulations (5 C.F.R. 595.103(b)), categories of physicians include those doing direct care, research, physical examinations, and administration of medical or health programs. Federal physicians paid under titles 37 and 38 and some HHS physicians eligible for title 38 special pay could receive special pay based on their certifications as specialists by one of the recognized American Medical or Osteopathic Specialty Examining Boards. Such special payments included board-certified pay, incentive and scarce-specialty pay, and multiyear special pay. 
In selected medical specialties in which large numbers of federal and private physicians practiced (general surgery, internal medicine, psychiatry, and family practice), our comparison of pay information from studies of private sector physicians’ pay and the pay of federal physicians who were paid under titles 37 and 38 showed that private sector physicians were generally paid more. In other specialties (e.g., thoracic surgery, radiology, and anesthesiology), private sector physicians were paid considerably more, based on information from these studies. Figure 1 shows for VA, DOD, Commissioned Corps, and private sector physicians the median or average pay for selected medical specialties (see table 6 for detailed dollar amounts). For other selected medical specialties—thoracic surgery, radiology, and anesthesiology—private sector physicians’ pay greatly exceeded the average pay of VA and military physicians. Comparisons of amounts paid to private sector physicians and VA and military physicians for these selected specialties are shown in figure 2 (see table 7 for detailed dollar amounts). In general, federal physicians’ pay is limited by (1) amounts provided under basic pay schedules, (2) maximum authorized special payment amounts, and (3) legislation stating that total pay cannot exceed specified executive pay levels. Private sector physicians’ pay would not generally be subject to these types of constraints. Other compensation for which physicians may be eligible included (1) nonwage compensation, such as health and retirement benefits; (2) premium pay, such as overtime; (3) incentive pay for hazardous duty; (4) other types of special pay, such as diving pay; (5) tax-free allowances, such as subsistence and housing; and (6) miscellaneous benefits, such as base exchange privileges. 
For the agencies that we reviewed, the cost of nonwage compensation (retirement and health and life insurance benefits) ranged from about 19 percent of basic pay for HHS physicians paid under title 5 to about 40 percent of basic pay for physicians in the military. In the private sector studies that we reviewed, comparable information on nonwage compensation was generally not available for the physicians studied. Some federal physicians paid under titles 5 and 37 also received other types of pay that were unrelated to their classification as a physician but related more to hours of duty or special skills possessed. For example, some HHS physicians paid under title 5 received overtime pay, which averaged $3,100 (see table I.4), and some physicians in the military received aviation career incentive pay averaging about $4,000 annually (see table I.5). In addition to nonwage compensation and these other types of pay, most physicians in the military and the Commissioned Corps received tax-free allowances, the most common of which were for housing and subsistence. Housing allowances averaged over $8,000 and subsistence allowances averaged about $1,800. Because these allowances were not subject to federal income tax, military and Commissioned Corps physicians also had an additional tax advantage. Even though the amounts of these other types of benefits, premium and incentive pays, and allowances were sizeable in some cases, we did not include them in our primary analysis because the cost of some benefits could not be readily quantified and because of the time constraints for completing our analysis. Appendix I contains additional details on these other types of compensation. We identified two recent actions that affect or have the potential to affect physicians’ pay. One involved a delegation of the use of certain title 38 personnel authorities to several agencies whose physicians are paid under title 5. 
The other involved VA’s exploration of the feasibility of recognizing physicians’ performance in establishing a new pay system. In November 1993, following an OPM study that identified problems in recruiting and retaining individuals in health care occupations, OPM delegated, under 5 U.S.C. 5371, the authority contained in certain title 38 personnel provisions to HHS, DOD, VA, and the Department of Justice. The provisions that were delegated related directly to pay rates and systems, premium pay, position classification, and hours of work. The purpose of this delegation was to give these departments additional flexibilities to maintain quality health-care staffs. As of May 1997, HHS was the only agency to have used this expanded authority to provide special pay to its physicians. Information on how HHS used this title 38 pay authority showed that, based on average pay, HHS physicians paid under title 38 earned $17,825 more than HHS physicians receiving PCAs. According to an HHS official, HHS agencies’ use of title 38 pay authorities has enabled HHS to remain reasonably competitive with salary levels in the private sector. Another HHS official said that budgetary constraints have limited the number of physicians who can receive title 38 special pay and have forced HHS to concentrate on positions for which recruitment and retention have historically been difficult. For example, the Food and Drug Administration (FDA) has focused its efforts on providing title 38 special pay to supervisors and team leaders. Another action that has the potential to affect physicians’ pay involved VA. VA indicated that some thought was being given to changing the manner in which its physicians are paid. In 1995 and 1996, a task force appointed by VA’s Under Secretary for Health met to discuss the development of a new pay system for physicians and dentists. 
The task force’s objective was to design a pay system based on a total salary concept that continued to consider local pay markets, along with a new incentive pay component to reward exceptional performance and productivity. VA officials told us that, as of July 1997, the Veterans Health Administration was continuing to examine issues dealing with the types of information available regarding local market pay and how to measure and link clinical performance and total salaries. Instead of physicians being automatically entitled to special pay by virtue of their length of service, geographic location, or medical specialty, the system under discussion would consider the local market, an individual’s performance and experience, and other relevant factors in determining physicians’ pay. VA officials also told us that, because of the complexity of physicians’ pay, modifications to the present system—which would require legislative action—are in the early stages of study. HHS and VA provided written comments on a draft of our report. DOD’s Deputy Director for Manpower and Support, Health Services and Readiness Support, Office of the Assistant Secretary of Defense for Health Affairs, and OPM’s Chief, Compensation Administration Division, Office of Compensation Policy, provided oral comments on our draft report on August 18 and 20, 1997, respectively. DOD, OPM, and VA said they generally agreed with the report’s contents. The agencies’ comments, which were essentially technical or clarifying in nature, have been incorporated where appropriate. OMB was unable to provide official comments on our draft report within the timeframe we requested. However, we discussed and resolved comments of a technical nature with OMB staff familiar with physicians’ pay issues and made changes to the report where appropriate. 
We are sending copies of this report to the Chairmen and Ranking Minority Members of interested congressional committees; the Secretaries of HHS, Defense, and VA; and the Directors of OPM and OMB. Copies will be made available to others on request. Major contributors to this report were Larry Endy, Ed Tasca, Wayne Barrett, and Jessica Botsford. Please contact me at (202) 512-9039 if you have any questions concerning this report. In addition to the primary comparisons discussed in this report, we developed additional analyses of physicians’ compensation. This appendix presents average pay information, when it was available, based on physicians’ years of service and pay grades or position types. We also discuss other aspects of compensation in addition to basic and special pay, including (1) employers’ contributions to the costs of physicians’ retirement and health and life insurance benefits—also referred to as nonwage compensation; (2) other types of premium and incentive pay paid under titles 5 and 37 to some physicians; and (3) tax-free allowances for subsistence, housing, and other expenses paid under title 37 to most physicians in the military and the Commissioned Corps. Consistent with previous comparisons in this report, VA and HHS physicians receiving title 38 special pay were paid more than other federal physicians based on their years of service. For HHS physicians paid under title 5 and Commissioned Corps physicians paid under title 37, differences in pay narrowed as years of service increased. Average pay for HHS title 5 physicians and Commissioned Corps physicians was nearly the same for physicians with more than 20 years of service—$118,650 and $119,145, respectively. For military physicians, information on amounts of special pay was not readily available in the years-of-service groupings we requested. Figure I.1 shows comparisons of average physicians’ pay based on years of service (see table I.1 for dollar amounts). 
For executive-level pay, the average pay for VA physicians exceeded that of HHS title 5 physicians by $17,580, or about 12 percent. The average pay for these HHS physicians exceeded the average for Commissioned Corps physicians in the O-7 and O-8 pay grades (Admiral) by $10,455, or about 8 percent, and military physicians in the O-7, O-8, and O-9 pay grades (General) by $12,645, or about 10 percent. For staff-level pay, the average for VA physicians (first level supervisors, “chief grade,” and below) exceeded that of HHS title 5 physicians who were paid under the General Schedule by about 24 percent, or $30,405. The average pay for military and Commissioned Corps physicians was 39 and 24 percent, respectively, less than the average for VA physicians. The average pay for HHS physicians was about the same as the average for Commissioned Corps physicians and about 20 percent more than the average for physicians in the military. Physicians’ pay for military and Commissioned Corps physicians varied considerably by pay grade. Pay for Commissioned Corps physicians in the O-3 pay grade averaged $39,030 and in the O-6 pay grade averaged $118,845. According to DOD officials, a large percentage of military physicians at the O-3 level are in graduate medical education programs in either internships or residency training. These physicians were not eligible for incentive or multiyear special pays that averaged about $27,000 and were available to military physicians in higher pay grades. Figure I.2 shows physicians’ pay by type of position (see table I.2 for detailed dollar amounts). Federal physicians may also be eligible for other types of compensation, such as nonwage compensation, incentive or premium pay, or allowances. In some cases, only small numbers of federal physicians receive these benefits. In the case of nonwage compensation for both federal and private sector physicians, benefit costs were not always quantified. 
When they were quantified, available estimates of benefit costs generally were not calculated in a consistent fashion that permitted meaningful comparisons among categories of physicians, either within the federal sector or between the federal and private sectors. We therefore did not include information on these other benefits in our principal comparisons. Nonwage compensation includes employers’ retirement benefit contributions and employers’ shares of physicians’ health and life insurance costs. Nonwage compensation paid by federal agencies on behalf of their employees ranged from 19 to 40 percent of basic pay. Studies of private sector physicians that we reviewed did not contain similar information on nonwage compensation. However, a 1994 Department of Labor study of employee benefits showed that, for white-collar professional workers in the private sector, the cost of employer-provided nonwage compensation (i.e., health and life insurance, retirement, social security, and workers’ compensation benefits) was about 23.4 percent of basic pay. Federal physicians paid under titles 5 and 38 are required to pay for a share of their nonwage compensation. Private sector organizations may also require their physicians to contribute toward the costs of these benefits. Information on the costs of nonwage compensation that physicians received under titles 5, 37, and 38 and according to private studies follows. Title 5 Physicians: Based on information from the HHS payroll system, the government’s costs for nonwage compensation averaged about $16,480 or about 19 percent of basic pay for the HHS physicians paid under title 5. This amount included the government’s share of retirement benefit costs under the Civil Service Retirement System (CSRS) or the Federal Employees Retirement System (FERS) as well as the government’s share of health and life insurance costs. A more general measure of indirect cost is contained in OMB Circular A-76. 
This circular states that in 1996 the standard cost factor for federal civilian employees’ retirement benefits was 23.7 percent, for life insurance and health benefits it was 7.05 percent, and for miscellaneous fringe benefits it was 1.7 percent. Title 38 Physicians: According to a VA official, the total cost of nonwage compensation for its employees was about 25 percent of basic and special pay. In 1996, average basic and special pay for VA physicians was about $128,540. Based on VA estimates, nonwage compensation would be about $32,135 if these benefits averaged 25 percent of physicians’ pay. The dollar value of nonwage compensation for VA physicians was higher than the dollar value of these benefits for HHS physicians paid under title 5. Physicians paid under title 38 can include physicians’ special pay as part of basic pay in determining retirement annuities under CSRS and FERS. In contrast, PCAs earned by title 5 physicians and special payments to physicians in the military and the Commissioned Corps are not considered part of basic pay for retirement benefit calculation purposes. Title 37 Physicians: For fiscal year 1996, the DOD actuary estimated the cost of military retirement benefits to be 32.9 percent of basic pay. Costs of other nonwage compensation for military personnel were 1.45 percent for Medicare hospital insurance benefits and 6.2 percent for old age, survivors, and disability insurance benefits on basic pay up to $62,700. Physicians in the military and the Commissioned Corps and their dependents may receive free health care benefits in military facilities, but DOD has not developed, in terms of basic pay, information on the cost of providing these benefits. Physicians in the military and the Commissioned Corps also may receive other benefits and privileges that are neither easily quantified nor readily susceptible to comparison. 
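The dollar estimates above follow directly from the quoted benefit rates. As a quick arithmetic check (a minimal sketch using only figures already cited in this appendix, not new data), the VA estimate and the combined Circular A-76 factor can be reproduced as follows:

```python
# Minimal sketch reproducing the nonwage-compensation estimates quoted above.
# All figures come from the report itself; nothing here is new data.

def nonwage_cost(pay: float, benefit_rate: float) -> float:
    """Estimated employer nonwage compensation as a share of pay."""
    return pay * benefit_rate

# VA: 1996 average basic plus special pay of $128,540 at the ~25 percent
# benefit rate cited by a VA official yields the report's ~$32,135 figure.
va_estimate = nonwage_cost(128_540, 0.25)  # 32135.0

# OMB Circular A-76 standard cost factors for 1996 (retirement 23.7 percent,
# life/health insurance 7.05 percent, miscellaneous fringe 1.7 percent)
# imply a combined factor of 32.45 percent of basic pay.
a76_factor = 0.237 + 0.0705 + 0.017  # about 0.3245
```

The same multiplication underlies the title 5 figure as well: 19 percent of HHS physicians' average basic pay gives the roughly $16,480 cited earlier.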
Examples of several of these benefits include eligibility to purchase goods and services at military base commissaries and exchanges at prices generally lower than those charged by commercial facilities, and access to military service clubs and other DOD-sponsored recreational facilities. Further, these physicians have the option of declaring a state of residence, regardless of where they are actually stationed or the length of time they spend in that state. This can be of significant value for those selecting residency in states with no personal income taxes. While these benefits and privileges can be of considerable value to physicians in the military and the Commissioned Corps, we did not attempt to estimate their comparative values or costs. Private Sector Physicians: Information from studies of private sector physicians showed that the organizations that were surveyed also provided physicians with nonwage compensation. Table I.3 shows information on the types of benefits provided and the percentage of survey respondents providing each benefit. These studies did not provide information on total employer costs of these benefits or on the cost of these benefits as a percentage of basic pay. (Notes to table I.3, which drew on Sullivan, Cotter survey data from January 1997 and 1996, stated that the total percentage of organizations providing retirement benefits and the percentage receiving each type of retirement benefit were not reported.) In addition to the benefits shown in table I.3, other benefits for which some organizations provided payment included vision care, professional organizations’ dues, continuing education expenses, personal time off, and flexible benefit plans. Federal agencies may also pay educational and training expenses for their physicians. The HHCS study indicated that about 60 percent of the 179 participant organizations that responded offered a “fixed” set of employee fringe benefits, 32 percent offered a “flexible” or “cafeteria” set of benefits, and 8 percent offered both fixed and flexible benefits. 
Flexible benefit or cafeteria plans are generally not available to federal employees. Some physicians paid under titles 5 and 37 also received premium, incentive, or other types of pay based on factors such as hours of duty, possession of special skills, performance, or working under extreme conditions. These other types of pay were not based on an individual’s occupation as a physician but were earnings based on the above-mentioned circumstances. For those physicians receiving these types of pay, compensation was increased by amounts that ranged from a few dollars to over $24,000. Based on agency-provided information, tables I.4 and I.5 list for physicians paid under titles 5 and 37 (1) other types of pay; (2) numbers of physicians receiving the pay; (3) average amounts received; and (4) where available, ranges of compensation received. Title 5 Physicians: Table I.4 contains information on other types of pay earned by at least 50 HHS physicians. Physicians paid under title 5 may also be eligible for other payments, such as recruitment bonuses, relocation allowances, and certain cost-of-living allowances (COLAs) that are not identified in table I.4. Title 37 Physicians: Other types of pay that physicians in the military received are listed in table I.5. Some physicians may be eligible to simultaneously receive more than one type of incentive or special pay. Unlike some title 5 physicians, physicians in the military and the Commissioned Corps do not receive compensation for working overtime, at night, on Sundays, or on holidays. Title 38 Physicians: According to a VA official, VA full-time physicians are considered to be on duty for 24 hours a day. As such, they are not eligible for premium pay, such as overtime pay and pay for work on Sundays or at night. VA physicians also do not receive locality-based comparability payments under 5 U.S.C. 
5304; however, they may receive geographic location pay if they work in areas where extraordinary recruitment or retention difficulties exist. They may also be eligible for recruitment bonuses and retention allowances under 38 U.S.C. 7410. Physicians in the military and the Commissioned Corps may be entitled to a variety of different allowances related to such elements as subsistence, housing, family separation, and COLAs. Table I.6 shows, for these physicians, average annual allowances received and the number of physicians receiving them. Allowances are tax free and generally vary depending on marital status, family size, and pay grade. Differences in basic subsistence amounts were due to differences in time periods for which the data was collected. Variable housing allowances supplement basic quarters allowances for service members who reside in high-cost areas in the United States. Because allowances are not subject to federal income tax, physicians in the military and the Commissioned Corps also receive a tax advantage that can be expressed as the additional income that they would have to receive in order to be left with the same net take-home pay, if allowances were taxable. Because many allowances vary by pay grade and number of dependents, the tax advantage varies. For example, according to January 1996 military compensation pay tables, the tax advantage for an officer in the O-4 pay grade ranged from $1,889 to $3,813, depending on family size and years of service. For an officer in the O-6 pay grade, it ranged from $2,168 to $4,777. In addition to basic pay, federal physicians may be entitled to various special payments, depending on the laws under which they are paid. 
Each of the laws—title 5 for most civilian physicians in federal agencies other than VA, title 38 for VA physicians and selected title 5 physicians, and title 37 for physicians in the military and the Commissioned Corps—spells out (1) requirements that are to be met for physicians to receive special payments and (2) dollar ranges for these payments. Special payments may be provided in varying amounts based on different factors, such as a physician’s years of service, medical specialty or category of service, geographic location, or length-of-service agreement. In general, physicians who received special payments under titles 37 and 38 received more types and higher amounts of these payments than physicians paid under title 5. The following sections discuss these special payment provisions and the roles of federal physicians paid under these laws. Federal physicians paid under title 5 may be eligible for PCAs if agencies document significant recruitment and retention problems and if physicians enter into a service agreement with their employing agencies. These agreements require physicians to complete periods of service of 1 or 2 years. The maximum allowance is $14,000 per year for physicians with less than 24 months of federal service and $20,000 for physicians with more than 24 months of service. While these are the maximum amounts authorized by law, some agencies have established schedules that limit PCAs based on the characterization of the positions in which physicians serve. For example, for physicians with more than 24 months of service, the maximum allowance is $10,000 for occupational health physicians and for physicians performing disability evaluations, according to HHS’ personnel manual. PCAs are not considered basic pay for purposes of calculating premium pay (e.g., overtime, night, and holiday pay), payments for accumulated and accrued annual leave and severance pay, compensation for work injuries, or retirement and life insurance benefits. 
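The statutory PCA ceilings described above reduce to a short calculation. The sketch below is illustrative only: the function name is an assumption, and so is treating exactly 24 months as the higher tier, since the report distinguishes only less than and more than 24 months.

```python
# Illustrative sketch of the PCA ceilings described above (5 U.S.C. 5948).
# The function name and the 24-month boundary treatment are assumptions;
# agency schedules (e.g., HHS' personnel manual) may set lower limits.
def pca_ceiling(months_of_federal_service: int) -> int:
    """Maximum annual PCA: $14,000 with less than 24 months of federal
    service, $20,000 otherwise (boundary treatment assumed)."""
    return 14_000 if months_of_federal_service < 24 else 20_000

print(pca_ceiling(12))  # 14000
print(pca_ceiling(36))  # 20000
```

Actual allowances paid could be anywhere up to these ceilings, subject to the agency schedules noted above.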
Table II.1 shows the agencies employing the most physicians under title 5 and the percentage of physicians receiving PCAs in fiscal year 1996. [Table II.1 is not reproduced here; it included a row for 14 other agencies.] The number of physicians eligible for PCAs in the 14 other agencies ranged from 1 to 35; 8 of these agencies employed fewer than 10 physicians. Five of the 14 agencies did not provide PCAs to any of their physicians. Officials with the National Aeronautics and Space Administration, with 27 eligible physicians, and the Tennessee Valley Authority, with 4 eligible physicians, told us that PCAs were not necessary to retain or recruit physicians in their agencies. Title 38 authorizes several different special payments for physicians. While most of the physicians receiving special payments under title 38 are employed by VA, OPM has authorized agencies (HHS, DOD, and the Department of Justice) that pay physicians under title 5 to pay selected physicians’ special payments under title 38. Physicians receiving these payments are not eligible to receive PCAs. Under title 38, physicians may be eligible for one or more of the following types of special payments. Full-time status: Physicians who have full-time status are entitled to a special payment of $9,000 annually. Length of service: Physicians are entitled to length of service awards that range from $4,000 for 2 to 4 years of service to $25,000 for 12 or more years of service. VA has established a schedule for length of service pay, which is shown in table II.2. Scarce medical specialty: Physicians serving in medical specialties for which there are extraordinary recruitment and retention difficulties may receive payments of up to $40,000. Physicians serving in executive positions in VA’s headquarters office are prohibited from receiving scarce specialty pay. 
Responsibility pay: Physicians serving in executive positions either in field offices or in VA’s headquarters office may be eligible for amounts ranging from $4,500 for a service chief to $45,000 for the Under Secretary of Health, based on the specific position in which they serve. Board certification: Physicians are entitled to a special payment of $2,000 if they are board certified. If they are certified in a subspecialty or secondary board, they are entitled to an additional $500. Geographic location: Physicians serving in specific geographic locations where extraordinary recruitment or retention difficulties exist are eligible for geographic location pay of up to $17,000 annually. Exceptional qualifications: VA’s Under Secretary of Health may approve, on a case-by-case basis, special payments at an annual rate of not more than $15,000 for physicians with exceptional qualifications within a specialty. HHS, DOD, and the Department of Justice, which pay physicians under title 5, were authorized, under a delegation of title 38 pay authority from OPM, to use the same categories of physicians’ special payments described in the previous section in paying their physicians. OPM delegated title 38 pay authority to these agencies and VA in November 1993 to provide them with added flexibilities needed to maintain a quality health care staff. HHS agencies began using this delegated authority to make special payments to physicians under title 38 in August and September 1995. DOD has formalized a plan for the title 38 special pay authority, but as of April 1997, it had not used the authority to pay its physicians. Justice has not formalized a plan for using this authority. On June 27, 1997, OPM extended title 38 pay authority to HHS, DOD, and Justice through June 30, 2002. HHS guidelines for implementing OPM’s delegation of authority provide for almost identical special payment amounts using criteria similar to those used by VA. 
HHS limits special payments for length of service to $18,000 compared with $25,000 for VA. Also, by law (5 U.S.C. 5371(c)(1)), members of the Senior Executive Service are not eligible for special payments under title 38. Under title 37, physicians in the military and Commissioned Corps physicians are eligible for types of physicians’ special payments similar to those available to title 38 physicians. Special payments and the amounts authorized are discussed below. Variable special pay: Physicians in the military and Commissioned Corps physicians are entitled to variable special pay. Variable special pay is paid monthly and ranges from $1,200 annually for interns to $12,000 annually for officers with 6 but less than 8 years of service. After 8 years, this pay declines based on the theory that future retirement benefits and other types of special payments will serve as greater incentives for physicians to stay on active duty. Board-certified special pay: Physicians who are board-certified in their respective specialties are entitled to amounts ranging from $2,500 to $6,000 annually, based on their years of service. Physicians with less than 10 years of service receive $2,500 annually; physicians with 18 or more years of service receive $6,000 annually. Board-certified special pay is paid monthly. Additional special pay: Physicians who sign an agreement to serve at least 1 additional year from the effective date of their service agreements are entitled to $15,000 annually, which is paid at the beginning of the 1-year period. Physicians who are undergoing internships or initial residency programs do not qualify for additional special payments. Multiyear special pay: Physicians who are fully qualified in designated medical specialties are eligible to enter into written agreements to provide 2, 3, or 4 more years of service. The duration of the agreement determines the amount payable. 
Annual amounts ranging from $2,000 to $14,000 are payable upon acceptance of the agreement and on the anniversary of the agreement. To receive multiyear special pay, physicians must have completed any service commitment incurred for medical education and training or completed 8 years of service. In either case, these physicians must be below the pay grade of O-7 (General or Admiral). Every year, the Assistant Secretary of Defense for Health Affairs convenes a Triservice Flag Officer Review Board to determine the annual amount provided for each specialty, based primarily on the staffing level in each specialty community. In fiscal year 1996, physicians who signed 4-year agreements could receive $14,000 for each year of the agreement in medical specialties, such as family practice, orthopedic surgery, emergency medicine, internal medicine, and urology. Other medical specialties received less. Also, multiyear special pay was not available for all specialists; for example, in fiscal year 1996, anesthesiologists and physicians in the pediatric and internal medicine subspecialties were not eligible to sign agreements for multiyear special pay under title 37. Incentive special pay: Physicians who sign an agreement to remain on active duty for at least 1 year and who are fully qualified in medical specialties designated as critical and practice in that specialty a substantial portion of the time or who meet other criteria related to their assignment may be authorized to receive up to $36,000 in incentive special pay. Physicians must be in pay grade O-6 and below to receive this pay, which is a lump-sum payment at the beginning of the 12-month period. The Flag Officer Review Board annually determines the authorized amount of incentive special pay for each specialty. Federal physicians serve in a variety of categories and medical specialties, depending on the mission of the employing federal agency and the needs of the population served. 
The following sections describe, by pay authority, the roles that these physicians fill in their employing agencies. Federal regulations related to PCAs (5 C.F.R. 595) require heads of agencies to establish, as a minimum, the following separate categories of physicians for purposes of determining if there are significant recruitment and retention problems. Category 1: Positions primarily involved in the practice of medicine or direct service to patients in hospitals, clinics, public health programs, diagnostic centers, and similar settings. Category 2: Positions primarily involved in the conduct of medical research and experimental work or the identification of causes or sources of diseases or disease outbreaks. Category 3: Positions primarily involved in the evaluation of physical fitness or the provision of initial treatment of on-the-job illness or injury. Category 4: Positions not described above, including positions involving disability evaluation and rating, training, or the administration of patient care or medical research and experimental programs. PCAs may be paid only to physicians serving in positions in categories determined by the agency to have significant recruitment and retention problems. Table II.3 shows, for the agencies with the most physicians being paid under title 5, the number of physicians in each category and the number receiving PCAs. [Table II.3, Number of Physicians by Category and Number Receiving PCAs (Fiscal Year 1996), is not reproduced here; its columns covered category 1 (direct care providers), category 2 (researchers), category 3 (fitness examiners), and category 4 (disability examiners).] In addition to HHS physicians who were paid under title 5, some physicians in HHS were eligible for special payments under title 38. In fiscal year 1996, HHS provided physicians’ special payments under title 38 to 294 physicians. Most of these physicians were employed in the Food and Drug Administration (FDA), Indian Health Service (IHS), and NIH. 
Even though physicians in the federal government are paid under a number of different pay plans, there is some commonality in the types of positions they fill in the agencies for which they work. The following section illustrates the roles of federal physicians. Titles 5, 37, and 38 physicians in HHS: HHS physicians may be paid as civilians under title 5 and as Commissioned Corps personnel under title 37. In addition, some title 5 physicians received special payments under title 38 as a result of the previously mentioned delegation of authority. Examples of physicians’ roles in the HHS agencies that employ the most physicians are listed as follows: FDA: Approximately 95 percent of FDA’s physicians are involved in researching and evaluating the clinical research data related to technology assessment, investigational studies, or marketing of medical/patient care services or products. According to OMB’s annual report on PCAs, FDA competes with pharmaceutical companies for physicians qualified to support the regulation of food, prescription and over-the-counter drugs, and medical devices. NIH: NIH physicians are involved in intramural medical research, extramural and collaborative research, or the administration of these programs. NIH competes with the academic community and with private sector pharmaceutical firms for physicians with outstanding research skills. IHS: IHS provides a comprehensive health services delivery system, including hospital and ambulatory medical care and prevention and rehabilitation services, for American Indians and Alaska Natives. Much of the population served by IHS is scattered over long distances and in remote areas. IHS physicians are paid as civil servants under title 5 or as Commissioned Corps officers under title 37. IHS has 914 physicians and administers 37 hospitals and numerous health centers. 
Centers for Disease Control (CDC): Physicians at CDC provide leadership and direction in areas such as the prevention of infectious and chronic diseases, environmental health, occupational safety, international health, epidemiologic and laboratory research, data analysis and information management, and health promotion. Title 38 physicians at VA: According to VA payroll system data, VA had over 7,300 full-time physicians who had been employed for at least 1 year, as of December 1996. VA physicians serve in the largest federal medical-care delivery system in the United States, providing care to over 2.9 million patients in 1996. These physicians have training in numerous specialties and provide inpatient and outpatient hospital care, subacute, rehabilitative, and psychiatric care, and residential and nursing-home care. In addition to providing patient care, numerous VA physicians are involved in administering its facilities; conducting basic, clinical, epidemiological, and behavioral research; and training medical residents and students. Title 37 Commissioned Corps physicians: Commissioned Corps payroll data indicated that, as of December 1996, the Corps had over 1,450 full-time physicians who had been on duty for at least 1 year. Approximately 1,210, or 80 percent, of these physicians were with the following HHS agencies: NIH (473), IHS (337), CDC (331), and FDA (69). Other Corps physicians either were with the remaining HHS agencies or were detailed to such other federal agencies as the Bureau of Prisons and the Coast Guard. In a 1996 report on the Commissioned Corps, we noted that Commissioned Corps officers and federal civilian employees often had similar duties and some—physicians, nurses, and pharmacists—had identical duties. Title 37 military physicians: As of September 30, 1996, the Army, Navy, and Air Force had about 13,000 military physicians. 
About 12 percent of these physicians had graduated from the Uniformed Services University of the Health Sciences, a 4-year, tuition-free medical school established by DOD in response to the Department’s need to attract and retain physicians. About 80 percent of the 13,000 physicians received financial assistance for their medical education in civilian medical schools under DOD’s Health Professions Scholarship Program. The remaining physicians were brought into the military through direct accession. A wide range of medical specialties are needed to support operational forces during times of war and other military operations and to maintain and sustain the well-being of the fighting forces in preparation for war. Military physicians also provide health care services to nonactive duty beneficiaries and to the dependents of active and nonactive duty personnel. Furthermore, military physicians contribute to research efforts conducted in areas such as Acquired Immune Deficiency Syndrome, breast cancer, and blood research. As of September 1996, of the 13,000 DOD physicians, over 1,000 were serving internships, and about 2,900 were in specialty training programs. Differences exist between military physicians paid under title 37 and DOD civilian physicians paid under title 5. According to DOD officials, the DOD civil service system is structured to hire physicians primarily at the GS-13/14/15 levels, where experience requirements and pay are significantly higher than they are for military physicians at the O-3 and O-4 levels (approximately 70 percent of the military physicians). The civil service does not have a significant attrition rate when compared with the attrition rate of junior military physicians. The result of the different employment situations is that most civilian physicians are employed at grade levels comparable with military pay grades O-5 and O-6. 
Because the authority to enter agreements to pay PCAs is due to expire on September 30, 1997, Representative Constance A. Morella requested a report on federal and private sector physicians’ pay and benefits. Following discussions with her office, we agreed that our principal objectives would be to (1) compare amounts paid to federal physicians under title 5 and under other sections of the U.S. Code with each other and with amounts paid to private sector physicians, (2) determine what other types of pay and benefits federally employed physicians receive, and (3) identify ongoing agency efforts that have the potential for affecting federal physicians’ pay. To compare physicians’ pay, we obtained and analyzed information on physicians’ basic pay and on any special payments that were available only to physicians. For purposes of this report, basic pay means a rate of pay established under titles 5, 22, 37, or 38, including a special salary rate under 5 U.S.C. 5305 (or similar authority) and a locality-adjusted rate of pay under 5 U.S.C. 5304. On the basis of our preliminary work, we agreed to obtain physicians’ pay and benefit information for full-time federal physicians paid under title 5, most of whom were with HHS or DOD; physicians with the Public Health Service’s Commissioned Corps or on military duty and paid under title 37; physicians with VA and paid under title 38; and title 5 physicians with HHS who received special payments under title 38. We also reviewed federal statutes and regulations on special pay that is available only to physicians, for information on the various types of special pay that physicians could receive. HHS, VA, and the Commissioned Corps provided us with actual annualized pay data for full-time federal physicians with over 1 year of service for the 12-month period ending December 1996. Payroll information for military physicians was obtained from Defense Finance and Accounting Service pay centers for the Army and Air Force. 
Army and Air Force information was for the 12-month periods ending January and March 1997, respectively. Comparable data could not be obtained from the Navy pay and personnel system. According to DOD officials, because all three military departments use the same pay rates, Army and Air Force data provide a fairly accurate representation of pay for military physicians. Unless reported separately, amounts and averages for DOD military physicians were based on combined information from the Army’s and Air Force’s payroll systems. According to data from the Defense Manpower Data Center, as of September 1996, of the 13,051 physicians in the military, 8,955 were in the Army and Air Force. From the information provided by the Army and Air Force, we could not determine median pay amounts or identify amounts paid at the 25th and 75th percentile for military physicians. We did not independently verify the accuracy of any pay data we obtained. DOD’s Directorate of Compensation prepares extensive pay-related information in the form of pay tables for all military personnel. These tables show for each pay grade, longevity step, and family size, information such as basic pay, quarters and subsistence allowances, and Social Security and federal income tax withholdings. Also, DOD health affairs staff have developed information showing estimated amounts of special pay that military physicians in various pay grades are likely to earn on an annual basis. This information, together with staffing data, can be used to estimate amounts paid to DOD’s military physicians. We sought actual rather than estimated payroll data from all agencies for our primary analyses because these data would more accurately reflect the pay of physicians. We also sought actual data to avoid the potential difficulty of comparing data among agencies that used different estimating methodologies or of comparing estimated and actual data provided by these agencies. 
However, actual pay data in the formats and timeframes we specified were not always readily available. For information on DOD physicians paid under title 5, DOD program officials for the Army and Navy provided us with estimated annualized pay data based on information for an April 1997 pay period, which was multiplied by 26—the number of pay periods in 1 year. The Air Force obtained annual physicians’ pay data from its personnel data system. We used the data provided to avoid the additional time that would have been involved in collecting actual pay data for these physicians from multiple pay centers. For pay and benefits data for private sector physicians, we identified physician compensation studies listed in Modern Healthcare. From the studies listed and a discussion with the author of the above-mentioned article, we judgmentally selected and purchased four studies that contained information on physicians employed primarily in group practices, HMOs, and hospitals. We selected these studies because they contained information on (1) physicians practicing in settings that were similar to those in which federal physicians practiced, rather than information on physicians operating as sole practitioners and (2) amounts paid to physicians as salary and direct compensation, rather than as net income. Because these studies did not contain information on the median pay of all physicians surveyed, we were unable to make across-the-board comparisons between federal physicians’ pay and private sector physicians’ pay. However, these studies contained information on median pay for physicians in selected medical specialties. Where possible, we compared the pay of VA, Commissioned Corps, and military physicians with the pay of private sector physicians in the following medical specialties—general surgery, internal medicine, psychiatry, and family practice. These were medical specialties where large numbers of federal and private sector physicians practiced. 
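The annualization approach described above for the Army and Navy title 5 data, one biweekly pay-period amount multiplied by 26 pay periods, amounts to a simple calculation; the biweekly figure used below is hypothetical.

```python
# Annualizing a biweekly pay-period amount, as described above for the
# Army and Navy title 5 physician data. The biweekly amount is hypothetical.
PAY_PERIODS_PER_YEAR = 26

def annualize(biweekly_pay: float) -> float:
    return biweekly_pay * PAY_PERIODS_PER_YEAR

print(annualize(3500.00))  # 91000.0
```

Because this projects a single pay period across the year, it can over- or understate actual annual pay for any physician whose pay varied across periods, which is one reason actual payroll data were preferred.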
The organizations that prepared these pay surveys have been conducting physician pay studies for 4 to 21 years. We did not independently verify the data shown in these studies. Information on the scope of these studies is shown in table III.1. [Table III.1 is not reproduced here; it showed, for each study, the number of medical specialties covered and the survey population, such as 324 hospitals, group practice facilities, and HMOs for one study and 192 hospitals, group practices, and HMOs for another.] The HHCS study contained information on government and nongovernment physicians. We used the information on nongovernment physicians. The preparers of two of these studies cautioned users of their reports that the data provided by responding medical practices might not be representative of all physicians or all medical groups because the data were not based on a random sample of medical groups. Two of these preparers also recommended the use of medians in evaluating physicians’ pay, because the median is not subject to the distortion that may occur in the mean (average) when extremely high or low values are included in the data set. We therefore used medians when presenting private sector data. However, we used averages in our analyses of federal data because median pay was not available for all federal physicians’ groups we compared. Also, in most cases, the averages differed only slightly from the medians. To determine other types of pay and benefits received by federal physicians, we reviewed (1) federal statutes and regulations on pay and benefits that physicians could receive in addition to their basic and special pay and (2) actual pay information provided by the agencies reviewed. We also asked these agencies for information on the cost of other compensation that the government paid for, either in whole or in part, but which was not included in physicians’ pay. For example, employer costs include amounts paid for Social Security and other federal retirement benefits and for the government’s share of costs for employees’ health and life insurance benefits. 
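The survey preparers’ rationale for medians, noted above, can be illustrated with hypothetical salary figures: a single extreme value shifts the mean substantially but moves the median only slightly.

```python
# Hypothetical salaries illustrating why medians resist distortion
# from extreme values, as the survey preparers cautioned.
from statistics import mean, median

salaries = [95_000, 100_000, 105_000, 110_000, 115_000]
print(mean(salaries), median(salaries))  # both 105000

salaries.append(600_000)  # one extreme value
print(mean(salaries), median(salaries))  # mean jumps to 187500; median only to 107500.0
```

This is why the report uses medians for the private sector studies while falling back on averages for federal data, where medians were not available for every group.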
After receiving the data from these agencies, and depending on how the agencies formatted the data, we made additional calculations or reformatted the data to make it as consistent as possible. Regarding the objective to identify ongoing agency efforts that could potentially affect physicians’ pay, we asked agency officials if they were involved in activities that had the potential for affecting physicians’ pay. In addition to the limitations indicated above:

- We did not make a determination on whether PCAs should be increased or whether there should be a minimum comparability allowance because we did not collect and compare information on such factors as (1) physicians’ duties and responsibilities, (2) amounts of supervision physicians either received or provided, and (3) actual retention and recruitment concerns experienced.
- The military service pay centers for the Army and Air Force provided us with total amounts of basic pay, special pay for physicians, incentive pay, and allowances. From this information, we could calculate averages but could not determine median or percentile pay.
- Because of the small numbers of physicians involved and the specialized reasons for which they were hired, we did not compare pay and benefit data for physicians employed by the Uniformed Services University of the Health Sciences under 10 U.S.C. 2113(f)(1) or in the Senior Biomedical Research Services under 42 U.S.C. 237.

We requested comments on a draft of this report from the Secretaries of HHS, Defense, and VA and the Directors of OPM and OMB. Written comments from HHS and VA and oral comments from DOD and OPM were incorporated in the report, where appropriate. Similarly, we incorporated comments from OMB staff familiar with physicians’ pay issues. We did our work in Washington, D.C., from December 1996 to August 1997 in accordance with generally accepted government auditing standards. The first copy of each GAO report and testimony is free. Additional copies are $2 each. 
Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 37050
Washington, DC 20013

or visit:

Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000, by using fax number (202) 512-6061, or by TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO provided information on pay and benefits of physicians employed by the federal government and in the private sector, to be used in considering reauthorization of the Federal Physicians Comparability Allowance Act (5 U.S.C. 5948). GAO did not attempt to make conclusions or recommendations on the sufficiency, size, or continued need for physician comparability allowances (PCA). GAO noted that: (1) the average annual pay for Department of Health and Human Services (HHS) physicians paid under title 5 was: (a) 17 percent less than the average for HHS physicians who received special pay under title 38; (b) 21 percent less than the average for the Department of Veterans Affairs (VA) physicians paid under title 38; (c) 4 percent greater than the average for Commissioned Corps physicians; and (d) 23 percent greater than the average for physicians in the military; (2) the average pay for title 5 HHS physicians was $101,660; (3) for HHS physicians who received PCAs, average pay was $104,730 compared with $79,485 for those who did not receive a PCA or special pay under title 38; (4) the average pay for Department of Defense (DOD) physicians paid under title 5 was $86,760; (5) for DOD physicians receiving a PCA, average pay was $89,710; (6) for VA and HHS physicians who received special pay under title 38, average pay was $128,540 and $122,555, respectively; (7) average pay for physicians in the military (Army and Air Force) and the Commissioned Corps, both of whom were paid under title 37, was $78,250 and $97,770, respectively; (8) in general, physicians paid under titles 37 and 38 were eligible for and received more types and higher amounts of special pay than HHS and DOD physicians receiving PCAs under title 5; (9) the average PCA for HHS physicians was $15,760 and the average PCA for DOD physicians paid under title 5 was $12,505; (10) average special pay amounts for VA and HHS physicians receiving title 38 special pay were $39,585 and 
$38,950, respectively; (11) average special pay amounts for physicians in the military and the Commissioned Corps were $35,190 and $43,260, respectively; (12) for selected medical specialties, GAO comparisons of pay information from studies of private sector physicians' pay with the pay of federal physicians who were paid under titles 37 and 38 showed that private sector physicians were generally paid more; (13) in addition to basic pay and physicians' special pay, federal and private sector physicians were eligible for employer-provided nonwage compensation; (14) regarding ongoing efforts that affect or have the potential to affect physicians' pay, GAO identified two recent initiatives; (15) since November 1993, the Office of Personnel Management (OPM) has delegated authority to HHS, DOD, and the Department of Justice allowing them to provide title 38 special pay to their physicians; (16) as of May 1997, HHS was the only agency to have used this authority; and (17) VA is currently exploring the feasibility and appropriateness of linking physicians' pay with performance.
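The percentage comparisons in item (1) can be cross-checked arithmetically against the average pay figures quoted in items (2), (6), and (7). The following is a minimal sketch of that cross-check, using only the figures reported above; the variable names are ours, not GAO's:

```python
# Average annual pay figures as reported in the summary (dollars).
hhs_title5 = 101_660         # HHS physicians paid under title 5
hhs_title38 = 122_555        # HHS physicians with title 38 special pay
va_title38 = 128_540         # VA physicians paid under title 38
commissioned_corps = 97_770  # Commissioned Corps physicians (title 37)

def pct_diff(base, other):
    """Fractional amount by which `base` exceeds (or falls short of) `other`."""
    return base / other - 1

# Title 5 HHS pay relative to each comparison group, rounded to whole percent.
print(round(pct_diff(hhs_title5, hhs_title38) * 100))        # negative: paid less
print(round(pct_diff(hhs_title5, va_title38) * 100))         # negative: paid less
print(round(pct_diff(hhs_title5, commissioned_corps) * 100)) # positive: paid more
```

Rounding to the nearest whole percent reproduces the 17 percent, 21 percent, and 4 percent figures in items (1)(a) through (1)(c).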
Currently, there are over 1 billion undernourished people worldwide, according to FAO. This number is greater than at any time since the 1996 World Food Summit, when world leaders first pledged to halve the number of the world’s hungry, and has been steadily increasing since the mid-1990s, even before the food and fuel crisis of 2006 through 2008 and the current economic downturn. Based on FAO’s most recent data, sub-Saharan Africa and South Asia had the most severe and widespread food insecurity as of 2004-2006. Outside these two regions, Haiti, the least developed country in the Western Hemisphere and one of the poorest countries in the world, had extremely high levels of hunger and food insecurity, which have been further exacerbated by the January 2010 earthquake. In absolute numbers, more hungry people lived in South Asia than in any other region, whereas the most concentrated hunger was found in sub-Saharan Africa, which had 16 of the world’s 17 countries where the prevalence of hunger was 35 percent or higher. The 17th country was Haiti, where 58 percent of the population lived in chronic hunger. According to FAO’s data for 2004-2006, since 1990, the proportion of undernourished people has declined from 34 to 30 percent in sub-Saharan Africa, from 25 to 23 percent in South Asia, and from 63 to 58 percent in Haiti. However, during this period, the actual number of undernourished people has increased: from 169 million to 212 million in sub-Saharan Africa, from 286 million to 337 million in South Asia, and from 4.5 million to 5.4 million in Haiti—a number that is likely to grow further due to the earthquake. In 1996, the United States and about 180 world leaders pledged to halve hunger by 2015. In 2000 they reaffirmed this commitment with the establishment of the UN Millennium Development Goals and, more recently, at the World Summit on Food Security held in Rome in November 2009. As shown in figure 2, both the international donor community and the U.S.
government have undertaken a number of key initiatives over the years in their efforts to address global food insecurity. The global food price crisis in 2007 and 2008 spurred new initiatives to address the growing prevalence of hunger. In their efforts to advance global food security, U.S. agencies work with numerous development partners. These include host governments, multilateral organizations, and bilateral donors, as well as other entities such as NGOs, philanthropic foundations, private sector organizations, and academic and research organizations. Their roles and types of activities include the following: Host governments. At the country level, host governments generally lead the development of a strategy for the agricultural sector and the coordination of donor assistance. They typically issue a poverty reduction strategy paper that outlines their country development plans and a national action plan to alleviate poverty, both elements considered indicators of national ownership of the development approach. Donors are committed under the Paris Declaration to align their assistance with national development strategies of the host country. Host governments may also participate in efforts at the regional level. For example, in 2003, members of the African Union endorsed the implementation of the Comprehensive Africa Agriculture Development Program (CAADP), a framework intended to guide agricultural development efforts in African countries, and agreed to allocate at least 10 percent of government spending to agriculture by 2008. Multilateral organizations. Several multilateral organizations and international financial institutions implement a variety of programs in the areas of agricultural development and food security. IFAD and other international financial institutions play a large role in providing funding support for agriculture.
Together, the World Bank, IFAD, and the African Development Bank accounted for about 73 percent of multilateral official development assistance to agriculture from 1974 to 2006 in sub-Saharan Africa. In addition, the New York-based UN Development Program is responsible for supporting the implementation of the UN Millennium Development Goals. In September 2009, the Group of 20 (G20) countries asked the World Bank to establish a multidonor trust fund to support the L’Aquila initiative to boost support for agriculture and food security. In January 2010, the World Bank board approved the establishment of the Global Agriculture and Food Security Program Trust Fund, which the World Bank will administer. According to Treasury officials, the fund will be operational by the middle of 2010. Bilateral donors. Major bilateral donors include Australia, Canada, France, Germany, Japan, the Netherlands, the United Kingdom, and the United States, among others. At the G8 Summit in L’Aquila, Italy, in July 2009, and the subsequent G20 Summit in Pittsburgh, Pennsylvania, in September 2009, major donor countries and the European Commission pledged to significantly increase aid to agriculture and food security. According to the Organization for Economic Cooperation and Development, since the mid-1980s, aid to agriculture has fallen by half, but recent trends indicate a slowdown in the decline, and even the prospect of an upward trend. From 2002 to 2007, bilateral aid to agriculture increased at an average annual rate of 5 percent in real terms. Organization for Economic Cooperation and Development data show that in 2006-2007, Development Assistance Committee countries’ bilateral aid commitments to agriculture amounted to $3.8 billion, a little more than half of the L’Aquila commitment on an annual basis. Other entities.
Other entities such as NGOs, philanthropic foundations, private sector organizations, and academic and research organizations—often working in partnership—also play a significant role in supporting food security and agricultural development in developing countries. For example, the Alliance for a Green Revolution in Africa, which was established in 2006 with initial funding from the Bill and Melinda Gates Foundation and the Rockefeller Foundation, has entered into a partnership with the New Partnership for Africa’s Development to help link African government commitments to agricultural development with programs in seeds, soil health, market access, and policy. U.S. land-grant colleges and universities—institutions of higher education that receive federal support for integrated programs of agricultural teaching, research, and extension—sponsor fellowships for students from developing countries. Additionally, some of these colleges and universities may have partnerships with research organizations, such as the Consultative Group on International Agricultural Research, including the International Food Policy Research Institute, the International Institute for Tropical Agriculture, and the International Livestock Research Institute. While the U.S. government supports a broad array of programs and activities for global food security, it lacks comprehensive funding data on these programs and activities. We found that it is difficult to readily determine the full extent of such programs and activities and to estimate precisely the total amount of funding that the U.S. government as a whole allocates to global food security. In response to our data collection instrument, 7 of the 10 agencies reported providing monetary assistance for global food security based on the working definition we developed for this purpose with agency input.
USAID, MCC, Treasury, USDA, State, USTDA, and DOD directed at least $5 billion in fiscal year 2008 to programs and activities that we define as addressing global food insecurity, with food aid accounting for about half of this funding. However, the actual total level of funding is likely greater. The agencies were unable to provide us with comprehensive funding data due to (1) a lack of a commonly accepted governmentwide operational definition of what constitutes global food security programs and activities as well as reporting requirements to routinely capture data on all relevant funds, and (2) weaknesses in some agencies’ management systems for tracking and reporting food security funding data comprehensively and consistently. Among agencies that support global food security programs and activities, USAID and USDA reported providing the broadest array of such programs and activities, while USAID and MCC reported providing the largest amount of funding in fiscal year 2008. To examine the types and funding levels of these programs and activities as comprehensively as possible, we sent a data collection instrument to the 10 agencies that participated in the 2008 Food Security Sub-PCC: DOD, MCC, OMB, the Peace Corps, State, Treasury, USAID, USDA, USTDA, and USTR. In this instrument, we asked the agencies to indicate what types of food security activities they performed in fiscal year 2008 and the funding levels associated with them. We had to develop a working definition of food security because there is no commonly accepted governmentwide operational definition that specifies the programs and activities that are food security-related. We developed our working definition based on a framework of food security-related activities that we established in a prior GAO report and a series of interactions with the relevant agencies over a period of several months. 
Our interactions with the agencies focused on refining the definition to ensure that it would be commonly understood and applicable to their programs and activities to the extent possible. The working definition that we developed included the following elements: food aid, nutrition, agricultural development, rural development, safety nets, policy reform, information and monitoring, and future challenges to food security. We asked the agencies to indicate which of these activities they performed and to provide funding data—when these data were available and reliable—on the appropriations, obligations, expenditures, and other allocations associated with these activities in fiscal year 2008. We pretested the instrument with officials at DOD, MCC, State, USAID, and USDA, and distributed it electronically in June and July 2009. All 10 agencies responded to our instrument and 7 of them (DOD, MCC, State, Treasury, USAID, USDA, and USTDA) reported funding data. In addition, the instrument gave the agencies the option to indicate whether they were involved in other types of food security assistance and if so, to describe them. Figure 3 summarizes the agencies’ responses on the types of global food security programs and activities and table 1 summarizes the funding levels. (The agencies are listed in order from highest to lowest amount of funding provided.) Our analysis of the agencies’ responses to the data collection instrument shows that USAID, MCC, Treasury (through its participation in multilateral development institutions), USDA, and State are the agencies providing the highest levels of funding to address food insecurity in developing countries. These agencies’ food security assistance, as reported in response to our instrument, can be summarized as follows: USAID. In addition to providing the bulk of U.S. foreign assistance targeting global food insecurity, USAID supports more types of programs and activities in this area than any other agency. 
The two types of USAID assistance with the highest funding are the delivery of food aid and the promotion of food security by stimulating rural economies through broad-based agricultural growth. According to USAID’s most recent International Food Assistance Report, the agency provided almost $2 billion for emergency food aid in fiscal year 2008. In addition, in response to our instrument, USAID reported about $500 million in funding for agricultural development and other global food security-related programs and activities in that year. USAID’s funding for agriculture would increase significantly under the administration’s fiscal year 2010 budget request to double U.S. assistance for global food security and agricultural development from the fiscal year 2009 request level. Millennium Challenge Corporation. MCC was established in 2004 and provides eligible developing countries with grants designed to support country-led solutions for reducing poverty through sustainable economic growth. MCC offers two kinds of monetary assistance: (1) compacts, which are large, multiyear grants to countries that meet MCC’s eligibility criteria in the areas of good governance, economic freedom, education, health, and natural resource management; and (2) threshold programs, which are smaller grants awarded to countries that come close to meeting these criteria and are committed to improving their policy performance. According to MCC, as of March 2009, it had obligated nearly $3.2 billion to strengthen the agricultural and rural economies in poor countries to promote reliable access to sufficient, safe, and affordable food. For fiscal year 2008, MCC reported funding obligations of about $912 million for multiyear compacts. Treasury. Treasury is the lead agency responsible for U.S. participation in the multilateral development banks.
It provides funding for agricultural development through the leveraging of its contributions to the African Development Bank, Asian Development Bank, Inter-American Development Bank and Fund for Special Operations, European Bank for Reconstruction and Development, International Fund for Agricultural Development (IFAD), and World Bank. A representative from Treasury’s Office of International Affairs serves in a leadership role as a member of IFAD’s Board of Directors. Treasury reported that in fiscal year 2008 the total financing for public and private sector investments in agricultural development, including rural development and policy reform, from the multilateral development banks was $4.9 billion. We estimate that the U.S. share of this financing is $817 million, including $358 million in highly concessional loans and grants to the world’s poorest countries and $459 million in loans to middle-income and creditworthy low-income developing countries. USDA. USDA provides nonemergency food aid, as well as technical and nutritional assistance focusing on agricultural development and vulnerable groups. USDA reported $540 million in food security-related funding in fiscal year 2008, including $530.5 million dedicated to food aid programs— namely, Food for Progress and the McGovern-Dole International Food for Education and Child Nutrition Program—and the emergency food commodity reserve known as the Bill Emerson Humanitarian Trust. The remaining amount is used for various technical assistance programs, such as the Cochran and Borlaug fellowships supporting international exchanges to facilitate agricultural development. State. State’s primary role with regard to food security is to coordinate international communication, negotiations, and U.S. government policy formulation. The President has asked the Secretary of State to lead the Global Hunger and Food Security Initiative. 
A number of State’s bureaus and offices perform duties specific to their expertise that help promote global food security. For example, State’s Bureau of Economic, Energy, and Business Affairs, with assistance from the Office of Policy Planning and others, is involved in the effort to develop a whole-of-government strategy to promote global food security. The Bureau’s Office of Multilateral Trade and Agriculture Affairs assists with food security policy coordination, works toward a successful conclusion of the Doha Round of trade talks in the World Trade Organization, and promotes the removal of export restrictions on agricultural products and the reduction in trade barriers to agricultural biotechnology. The Bureau of International Organization Affairs coordinates U.S. policy towards and participation in FAO and the World Food Program. The Bureau of Population, Refugees, and Migration coordinates with the World Food Program and USAID regarding food assistance and food security for refugees and other populations of concern. The Bureau of Oceans, Environment, and Science works bilaterally and multilaterally to advance U.S. foreign policy objectives in such areas as the sustainable use of natural resources, protection of biodiversity and wildlife, adaptation to climate change, harnessing of science and technology, and improvements to human health. State’s Office of the Director of U.S. Foreign Assistance (State/F) coordinates State and USAID budgets, while the Office of Conflict Prevention acts as the secretariat for the funding of reconstruction and stabilization projects through the use of DOD Section 1207 funds. State reported providing about $168 million for food security programs and activities in fiscal year 2008. The other five agencies that responded to our data collection instrument are involved in supporting global food security initiatives in different ways. USTDA and DOD provide some food security-related monetary assistance.
For fiscal year 2008, USTDA reported providing more than $9 million for agriculture, rural development, and other types of food security assistance, and DOD’s Defense Security Cooperation Agency (DSCA) reported more than $8 million in funding for global food security-related activities that were part of disaster relief and humanitarian assistance efforts. The Peace Corps estimates that many of its volunteers serving in developing countries address the issues of hunger, malnutrition, and food insecurity, but did not report any funding data. While USTR does not support any food security programming, it is engaged in interagency consultations and has recently created an interagency subcommittee at the Trade Policy Staff Committee to coordinate trade policy elements of the administration’s global food security initiative. The 10th agency, OMB, participates in the interagency process as part of its mission to help formulate the administration’s budget and to advise the White House and other components of the Executive Office of the President on the resources available to support the development of new food security initiatives. (For a more extensive description of the 10 agencies’ food security-related programs and activities, see app. III.) Comprehensive data on the total amount of funding dedicated to global food security programs and activities by the whole of the U.S. government are not readily available. In response to our data collection instrument, the agencies providing monetary assistance for global food security reported directing at least $5 billion in fiscal year 2008 to programs and activities that we define as addressing global food insecurity, with food aid accounting for about half of this funding. However, the actual total level of funding is likely greater. We were only able to obtain these funding data and ascertain their reliability through repeated inquiries and discussions with the agencies over a 6-month period. 
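The "at least $5 billion, with food aid about half" figure can be roughly reconciled with the per-agency amounts quoted in this section. The following back-of-the-envelope tabulation is ours, not GAO's; it simply sums the rounded figures as reported above, in millions of dollars:

```python
# Fiscal year 2008 food security funding as quoted in this section, in
# millions of dollars (rounded; our tabulation, not an official GAO total).
reported = {
    "USAID emergency food aid": 2_000,    # "almost $2 billion"
    "USAID other programs": 500,          # "about $500 million"
    "MCC compact obligations": 912,
    "Treasury (U.S. share, MDBs)": 817,
    "USDA": 540,                          # includes $530.5M for food aid
    "State": 168,
    "USTDA": 9,                           # "more than $9 million"
    "DOD (DSCA)": 8,                      # "more than $8 million"
}

total = sum(reported.values())
# Food aid component: USAID emergency food aid plus USDA's food aid programs.
food_aid = 2_000 + 530.5

print(f"total: ${total} million")                 # just under $5 billion
print(f"food aid share: {food_aid / total:.0%}")  # roughly half
```

The sum comes to just under $5 billion, with the food aid component at roughly half, consistent with the totals stated in the text; as the report notes, the actual total is likely greater because several funding streams could not be captured.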
The estimate does not account for all U.S. government funds targeting global hunger and food insecurity. The agencies did not provide us with comprehensive funding data because they lack (1) a commonly accepted governmentwide operational definition of global food security programs and activities as well as reporting requirements to routinely capture data on all relevant funds, and (2) data management systems to track and report food security funding comprehensively and consistently. For example, the estimate does not include funding for some of USAID’s food security-related activities, some U.S. contributions to international food security organizations, or funding for relevant programs of agencies that did not participate in the Food Security Sub-PCC, and were, therefore, outside the scope of our audit, such as nutritional assistance implemented as part of the President’s Emergency Plan for AIDS Relief. In addition, the agencies used different measures, such as planned appropriations, obligations, expenditures, and, in Treasury’s case, U.S. contributions to multilateral development banks, which made it difficult to arrive at a precise estimate. The agencies reported incomplete funding data due to a lack of a commonly accepted governmentwide operational definition of what constitutes global food security programs and activities as well as a lack of reporting requirements to routinely capture data on all relevant funds. An operational definition accepted by all U.S. agencies would enable them to apply it at the program level for planning and budgeting purposes. Because food security is an issue that cuts across multiple sectors, it can be difficult to define precisely what constitutes a food security-related program or activity, or to distinguish a food security activity from other development activities. Principal planning documents, even at the agencies with the highest levels of funding, have not recognized food security as a distinct program area. 
For example, as State noted in a written response to our data collection instrument, State’s and USAID’s Strategic Plan for Fiscal Years 2007 to 2012, the most recent guidance that sets these agencies’ priorities, does not use the term “food security.” We also found that the Foreign Assistance Coordination and Tracking System (FACTS) database, which State and USAID use to collect and report data on the U.S. foreign assistance that they implement, provides limited guidance for identifying food security programs and activities. The organization of the FACTS database reflects the four levels of the standardized program structure of U.S. foreign assistance: objectives, program areas, elements, and subelements. USAID could identify subelements whose definitions included food security activities. After extensive discussions with USAID, we selected 13 subelements as primarily containing food security programs and activities and added up funding levels associated with these subelements to estimate USAID’s global food security assistance in fiscal year 2008. However, if subelements contained both food security and non-food security activities, USAID could not always isolate the former from the latter. We identified about $850 million in funding for 12 such subelements. For example, the subelement for livelihood support, infrastructure rehabilitation, and services, with $123 million in funding in fiscal year 2008, combines food aid activities, such as food for work, with other activities, such as education and income generation, but FACTS is currently not designed to readily identify what portion of the $123 million is related to global food security. The lack of a commonly accepted governmentwide operational definition may also lead the agencies to either define food security very broadly or to not recognize food security-related activities as such. 
For example, in response to our instrument, USDA reported some of the activities supported by USDA’s Forest Service—such as the migratory bird and monarch butterfly habitat management—but did not explain how they were related to global food security. Conversely, DOD did not initially report any global food security-related programs and activities because food security is not recognized as part of DOD’s officially defined mission. However, in subsequent inquiries we established that some of DOD’s humanitarian assistance projects, such as those implemented by DSCA, have food security components. DOD officials acknowledged that the Combatant Commanders’ Initiative Fund and the Commanders’ Emergency Response Program likely support food security-related projects but did not provide us with relevant data. DOD’s involvement could be significant—for example, the Center for Global Development estimates that in 2007 DOD implemented 16.5 percent of U.S. development assistance—and DSCA’s $8.4 million for global food security-related projects likely represents only a portion of DOD’s total spending on food security-related activities. Additionally, some agencies that support food security activities lack reporting requirements to routinely capture data on all relevant funds. For example, although the Peace Corps has adopted a Food Security Strategic Plan and estimates that about 40 percent of its volunteers contribute in some capacity to food security work through projects in agriculture, health, and environment, the agency did not report any funding information. In an interview, senior Peace Corps officials noted that, given the circumstances under which Peace Corps volunteers work and live, it is impossible to isolate what portion of volunteers’ time is spent on food security. Furthermore, according to these officials, the Peace Corps does not track what percentage of the organization’s budget is spent on supporting volunteers’ food security-related work. 
We found that some agencies’ data management systems are inadequate for tracking and reporting food security funding comprehensively and consistently. Most notably, USAID and State/F—which both use FACTS—failed to include a very large amount of food aid funding data in the FACTS database. In its initial response to our instrument, USAID, using FACTS, reported that in fiscal year 2008 the agency’s planned appropriations for global food security included about $860 million for Food for Peace Title II emergency food aid. However, we noticed a very large discrepancy between the FACTS-generated $860 million and two other sources of information on emergency food aid funding: (1) the $1.7 billion that USAID allocated to emergency food aid from the congressional appropriations for Title II food aid for fiscal year 2008, and (2) about $2 billion in emergency food aid funding reported by USAID in its International Food Assistance Report for fiscal year 2008. Officials at USAID and State/F were unaware of the discrepancy until we brought it to their attention. As of February 12, 2010, USAID had not updated FACTS to incorporate the missing information. In formal comments on a draft of this report, USAID and State officials attributed this discrepancy to the fact that Title II food aid supplemental appropriations had not been entered into FACTS because these were made fairly late in fiscal year 2008. USAID officials reported that the agency has checks in place to ensure the accuracy of the regular appropriations data entered by its overseas missions and most headquarters bureaus. However, the omission of the supplemental appropriation information for emergency food aid, which is USAID’s food security program with the highest level of funding, raises questions about the data management and verification procedures in FACTS, particularly with regard to the Food for Peace program, and seriously limits its capacity to track all food security funding.
In another example, in its initial response to our instrument, USDA provided us with conflicting data for the total amount of funding for its food security programs. In addition, the funding information USDA reported to us for the Food for Progress program differed from what was reported in the International Food Assistance Report for fiscal year 2008. USDA acknowledged and reconciled the conflicting data after repeated inquiries from us. The implications of these data weaknesses will be discussed in the context of the development of a governmentwide global food security strategy in the next section of this report. Consistent with our 2008 recommendation, the current administration has taken a number of steps toward developing a U.S. governmentwide strategy for global food security, including improving interagency coordination at the headquarters level in Washington, D.C.; finalizing the main elements of the strategy; and identifying potential countries for assistance. Two interagency processes established in April 2009—the National Security Council (NSC) Interagency Policy Committee (IPC) on Agriculture and Food Security and the Global Hunger and Food Security Initiative (GHFSI) working team—are improving coordination among numerous agencies, particularly at headquarters. The strategy under development is embodied in the GHFSI Consultation Document that State issued in September 2009, which is being expanded and is expected to be released shortly, along with an implementation document and a results framework that will include a plan for monitoring and evaluation. The administration has identified a group of 20 countries around which to center GHFSI assistance in fiscal year 2011, including 12 countries in sub-Saharan Africa, 4 in Asia, and 4 in the Western Hemisphere. However, the administration’s efforts are vulnerable to weaknesses in funding data as well as risks associated with the country-led approach.
Currently, no single information database compiles comprehensive data on the entire range of global food security programs and activities across the U.S. government. The lack of comprehensive data on current programs and funding levels may impair the success of the new strategy because it deprives decision makers of information on all available resources, actual costs, and a firm baseline against which to plan. In addition, although the host country-led approach—a central feature of the forthcoming strategy—is promising, it is vulnerable to some risks. These include (1) the weak capacity of host governments; (2) limitations in the U.S. government’s own capacity to provide needed assistance to strengthen host governments’ capacity, as well as review host governments’ efforts and guide in-country activities, due to a shortage of expertise in agriculture and food security; and (3) difficulties of aligning donor assistance with host governments’ own strategies. Since 2009, to facilitate the development of a governmentwide global food security strategy, the administration has been taking steps to enhance coordination among the relevant entities and to ensure communication between policymakers and program implementers, particularly at the headquarters level in Washington, D.C. Two interagency coordination mechanisms, both established in April 2009, are currently under way: (1) the NSC/IPC on Agriculture and Food Security and (2) the State-led GHFSI working team, which have identified cross-cutting priorities and key areas of potential investment. (See figure 4.) The IPC, which provides the opportunity for agencies to coordinate and integrate strategies, is led by the NSC’s Special Assistant to the President and Senior Director for Development, Democracy, and Stabilization. Ten agencies participated in the IPC when it was initially established: USAID, MCC, Treasury, USDA, State, DOD, Peace Corps, USTDA, USTR, and OMB.
These agencies previously participated in the Food Security Sub-PCC, which was created in May 2008 and dissolved in January 2009. Other agencies have since joined the IPC, including the Departments of Commerce and Labor, the Export-Import Bank of the United States, the Overseas Private Investment Corporation, and the National Oceanic and Atmospheric Administration. The GHFSI working team is developing the governmentwide strategy and coordinating the implementation of the initiative. The primary agencies participating in the GHFSI working team are State, USAID, USDA, MCC, Treasury, and USTR. The Secretary of State’s Chief of Staff leads the GHFSI effort and has been convening weekly meetings with relevant agency officials since April 2009 in support of this effort. In addition, several agencies at headquarters, such as USAID and USDA, have established teams composed of staff from different entities within the agency to coordinate their food security activities. USDA has recently named a coordinator for the global food security initiative in the Office of the Secretary of Agriculture. Furthermore, the administration is considering appointing a high-level U.S. food security coordinator to help clarify roles and responsibilities and facilitate improved coordination among the multiple agencies. Finally, a number of U.S. missions—including several in countries we visited during fieldwork—are organizing an interagency task force or working group to help coordinate efforts at the mission level, and some missions are considering designating a country coordinator position for GHFSI activities. In Bangladesh, for example, an active interagency food security task force meets at least biweekly and includes staff from USAID, State, and USDA, according to the USAID Mission Director, and the post is considering creating a GHFSI country coordinator position to coordinate the initiative’s activities in-country.
Similarly, in Ethiopia, the USAID Mission Global Food Security Response Team was expanded to include DOD, the Peace Corps, State, various USAID units, and USDA, and the post is considering adding an initiative facilitator. Concurrent with these efforts, the administration continues to define the organizational structure within the executive branch to effectively manage U.S. support for the development and implementation of host country-led plans, links to regional activities, and GHFSI leadership and oversight. Since April 2009, consistent with our recommendation in a 2008 report, the administration has taken a number of steps to develop the elements of a U.S. governmentwide strategy to reduce global food insecurity— including an implementation document and a results framework—and is moving forward with selection of countries where GHFSI assistance will be focused. The administration’s actions reflect the President’s commitment, made in January 2009, to make the alleviation of hunger worldwide a top priority of this administration. In remarks to participants at a UN High-level Meeting on Food Security for All in Madrid, Spain, later that month, the Secretary of State reaffirmed the administration’s commitment to build a new partnership among donors, host governments in developing countries, UN agencies, NGOs, the private sector, and others to better coordinate policies to achieve the UN Millennium Development Goals adopted in 2000. However, as U.S. agencies working on the strategy recognize, translating these intentions into well-coordinated and integrated action to address global food insecurity is a difficult task, given the magnitude and complexity of the problem, the multitude of stakeholders involved, and long-standing problems in areas such as coordination, resources, and in-country capacity. The strategy is expected to be released shortly, according to senior U.S. officials. 
In September 2009, State and the GHFSI working team issued an initial draft of the strategy, known as the Consultation Document. The Consultation Document delineates a proposed approach to food security based on five principles for advancing global food security, as follows: 1. Comprehensively address the underlying causes of hunger and undernutrition. 2. Invest in country-led plans. 3. Strengthen strategic coordination. 4. Leverage the benefits of multilateral mechanisms to expand impacts. 5. Deliver on sustained and accountable commitments. These principles reflect the approach endorsed in several recent multilateral venues, including the G8 L'Aquila joint statement, the UN Comprehensive Framework for Action, and the World Summit on Food Security declaration. To develop the Consultation Document, the administration engaged in a consultative process within the U.S. government and with the global community and other stakeholders through the NSC/IPC and the State-led GHFSI. The Consultation Document was posted on State's Web site for input from a broad range of relevant entities. According to State, to date, the document has also been shared with more than 130 entities for input, including multilateral donors, NGOs, universities, philanthropic foundations, and private sector entities. Based on the input provided, the GHFSI working team is expanding the initial Consultation Document and expects to release it to the public shortly. Furthermore, the GHFSI working team is developing an implementation document and a results framework for the initiative.
According to the GHFSI working team, the effort to develop an implementation document has involved intensive interagency consultations and meetings with donors, such as FAO, the World Bank, and the United Kingdom's Department for International Development, to discuss implementation "best practices," the establishment of common global guidance on the development process, and reviews of country-led investment plans. Additionally, a number of U.S. missions overseas have submitted draft implementation plans for fiscal year 2010 that include staffing and budget resources required to achieve planned objectives in core investment areas. Absent a finalized governmentwide strategy, however, it is difficult to evaluate the subordinate implementation plans that field missions are submitting to ensure sufficient resource and funding levels. The GHFSI working team is also developing a whole-of-government results framework, which articulates specific objectives of the initiative as well as causal linkages between certain objectives, their intended results, and contribution to the overall goal. The results framework will be accompanied by a monitoring and evaluation plan, which identifies indicators to be used to report progress against planned outputs and outcomes. The framework has been externally reviewed by 10 experts, is now under review by U.S. government representatives in the field, and will be made available for public comment shortly, according to State and other members of the GHFSI working team. The administration is moving forward with plans to select about 20 countries where GHFSI assistance efforts will be concentrated. State's Fiscal Year 2011 Congressional Budget Justification (CBJ) for the GHFSI identified 12 countries in sub-Saharan Africa, 4 countries in Asia, and 4 countries in the Western Hemisphere on the basis of four criteria, as follows: 1. Prevalence of chronic hunger and poverty in rural communities. 2.
Potential for rapid and sustainable agricultural-led growth. 3. Host government commitment, leadership, governance, and political will. 4. Opportunities for regional synergies through trade and other mechanisms. According to the Consultation Document, the GHFSI focus countries will fall into two general categories: countries in the first phase that would benefit from technical assistance and capacity building to fully develop investment plans, and countries in the second phase with advanced national food security plans and already-established public and private capacities to enable successful plan implementation. Phase I countries will receive targeted assistance to generate a comprehensive national food security investment plan, including assistance to increase technical expertise, improve natural resource management, prepare inventories and assessments of the agricultural sector, conduct reform of trade and agricultural policies, and meet critical infrastructure needs. Phase II countries will be considered for significant resources and will have to demonstrate sufficient capacity, have an enabling environment for sustainable agricultural-led growth, and have a completed country plan. According to State's Fiscal Year 2011 CBJ for GHFSI, the administration will develop a set of objective indicators that measure both the progress toward reforms that a country has committed to in its internal consultative processes and a minimum set of internationally recognized cross-country policy indicators. As of February 2010, GHFSI has identified 15 Phase I countries (7 in sub-Saharan Africa, 4 in Asia, 4 in the Western Hemisphere) and 5 Phase II countries (all in sub-Saharan Africa) that are being considered for assistance in fiscal year 2011. (See table 2.) GHFSI proposed budgets for Phase I countries range from $11.56 million to $36.75 million for a total of $352 million in fiscal year 2011.
For Phase II countries, the proposed budgets range from $42 million to $63 million for a total of $246 million in fiscal year 2011. Comprehensive data on the entire range of global food security programs and activities across the U.S. government are not collected in a single information database. As we discussed earlier in this report, the agencies we surveyed do not routinely collect and report such information using comparable measures. As a result, it is extremely difficult to capture the full extent of the U.S. government's ongoing efforts to promote global food security as well as the sources and levels of funding supporting these efforts. Current planning does not take into account comprehensive data on existing programs and funding levels, officials reported, but relies instead on budget projections for the programs considered in the strategy. However, the lack of such data deprives decision makers of information on all available resources, actual costs, and a firm baseline against which to plan. Such information would be critical for the development of a well-informed and well-planned governmentwide strategy. FACTS, which is currently used by only two agencies, is an information system with the potential to collect and report comprehensive data using comparable measures across the U.S. government on a range of issues, including food security, but it has serious limitations in implementation. FACTS was initially designed to be a comprehensive repository of program and funding data on U.S. foreign assistance, and State expected the system to eventually include data from the more than 25 other U.S. entities involved in providing foreign assistance, including MCC and Treasury. However, it is currently used only by State and USAID to collect, track, and report standardized data for all foreign assistance that they implement.
Expanding the use of FACTS to other agencies has proven to be difficult, in part because agencies use different data management systems and procedures to allocate resources and measure results. Even different units within an agency may use different data management systems. In addition, as USAID officials in Ethiopia told us, information sharing may have been hindered by a perception among officials from at least one agency providing U.S. foreign assistance that supporting the coordination effort through the State/F process created an additional layer of work that was not regarded as a priority by other agencies. As we discuss earlier in this report, FACTS currently has limited capacity to track data for global food security programs and activities. We highlight FACTS because, despite its limitations, it was originally designed to compile and report comprehensive and comparable funding data on assistance programs implemented by multiple agencies of the U.S. government, and State/F and USAID could address the limitations we note by changing their operating procedures rather than by redesigning the system itself. The administration has embraced the host country-led approach as central to the success of the new strategy, reflecting a consensus among policymakers and experts that development efforts will not succeed without host country ownership of donor interventions. At the same time, as our current and prior work shows, the host country-led approach, although promising, is vulnerable to a number of risks. These include (1) the weak capacity of host governments, which can limit their ability to absorb increased donor funding and sustain these levels of assistance; (2) a shortage of expertise in agriculture and food security at relevant U.S. 
agencies that could constrain efforts to help strengthen host governments' capacity as well as review host governments' efforts and guide in-country activities; and (3) difficulties in aligning donor assistance, including that of the United States, with host governments' own strategies.

Weak Capacity of Host Governments Can Limit Sustainability of Donor Assistance

The weak capacity of host governments—a systemic problem in many developing countries, particularly in sub-Saharan Africa—could limit their ability to (1) meet their own funding commitments for agriculture, (2) absorb significant increases in donor funding for agriculture and food security, and (3) sustain these donor-funded projects over time. In addition, host governments often lack sufficient local staff with the technical skills and expertise required to implement donor-initiated agriculture and food security projects. First, while donors are poised to substantially increase funding for agriculture and food security, many African countries have yet to meet their own pledges to increase government spending for agriculture. At the G8 and G20 summits in 2009, major donors pledged to direct more than $22 billion for agriculture and food security to developing countries between 2010 and 2012. In 2003 African countries adopted the Comprehensive Africa Agriculture Development Program (CAADP) and pledged to commit 10 percent of government spending to agriculture by 2008. However, in December 2009, the International Food Policy Research Institute (IFPRI) reported that only 8 out of 38 countries had met this pledge as of 2007, namely Burkina Faso, Ethiopia, Ghana, Guinea, Malawi, Mali, Niger, and Senegal (see fig. 5).
Despite stakeholders' endorsement of progress Rwanda has made toward addressing agriculture and food security at the first CAADP post-compact high-level stakeholder meeting in December 2009, an IFPRI review raised some concerns about growth performance in Rwanda's agricultural sector, which is nearly 50 percent below long-term targets. IFPRI found that (1) Rwanda's aggregate agricultural growth is higher than the precompact level and the CAADP goal of 6 percent but lower than is necessary to meet the poverty MDG, and (2) even successfully implemented investment plans that achieve their targets for individual sectors would only meet the required growth objectives to realize the poverty MDG by 2020, but not by 2015. Second, the weak capacity of host governments raises questions about their ability to absorb significant increases in donor funding for agriculture and food security. According to MCC, as of the end of the first quarter of fiscal year 2009, it had disbursed approximately $438 million in compact assistance. Prior GAO analysis shows that this constitutes 32 percent of initially planned disbursements for the 16 compacts that had entered into force. The 16 compacts have a total value of approximately $5.7 billion. According to a senior technical financial advisor to the government of Ghana, a number of donor-funded projects have often not been able to spend their full funding, and delays in project implementation are not uncommon. For example, as shown in figure 6, MCC's $547 million compact with Ghana, which was signed in August 2006 and entered into force in February 2007, had contract commitments totaling $340 million but had disbursed only about $123 million as of December 2009, more than halfway through the 5-year compact that ends in January 2012.
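As a rough plausibility check on the disbursement figures above, the arithmetic can be sketched as follows. This is a minimal illustration using the figures cited in this report; the helper function name and the implied-plan calculation are ours, not from any MCC system.

```python
def disbursement_rate(disbursed_m, available_m):
    """Return the share of available funds actually disbursed, in percent."""
    return 100.0 * disbursed_m / available_m

# MCC's Ghana compact (millions of USD, as of December 2009):
# about $123 million disbursed against the $547 million compact.
ghana_rate = disbursement_rate(123, 547)   # roughly 22 percent

# Across the 16 compacts in force, ~$438 million had been disbursed, which
# the report states is 32 percent of initially planned disbursements,
# implying roughly $1.37 billion initially planned to that point.
planned_m = 438 / 0.32
```

Rates this far below plan are one indicator of the absorption-capacity risk discussed above.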
In the face of growing malnutrition worldwide, the international community has established ambitious goals toward halving global hunger, including significant financial commitments to increase aid for agriculture and food security. Given the size of the problem and how difficult it has historically been to address it, this effort will require a long-term, sustained commitment on the part of the international donor community, including the United States. As part of this initiative, and consistent with a prior GAO recommendation, the United States has committed to harnessing the efforts of all relevant U.S. agencies in a coordinated governmentwide approach. The administration has made important progress toward realizing this commitment, including providing high-level support across multiple government agencies. However, the administration’s efforts to develop an integrated U.S. governmentwide strategy for global food security have two key vulnerabilities: (1) the lack of readily available comprehensive data across agencies and (2) the risks associated with the host country-led approach. Given the complexity and long-standing nature of these concerns, there should be no expectation of quick and easy solutions. Only long-term, sustained efforts by all relevant entities to mitigate these concerns will greatly enhance the prospects of fulfilling the international commitment to halve global hunger. To enhance U.S. efforts to address global food insecurity, we recommend that the Secretary of State take the following two actions: 1. work with the existing NSC/IPC to develop an operational definition of food security that is accepted by all U.S. agencies; establish a methodology for consistently reporting comprehensive data across agencies; and periodically inventory the food security-related programs and associated funding for each of these agencies; and 2. 
work in collaboration with the USAID Administrator, the Secretary of Agriculture, the Chief Executive Officer of the Millennium Challenge Corporation, the Secretary of the Treasury, and other agency heads, as appropriate, to delineate measures to mitigate the risks that the host country-led approach poses to the successful implementation of the forthcoming governmentwide global food security strategy. We provided a draft of this report to the NSC and the 10 agencies that we surveyed. Four of these agencies—State, Treasury, USAID, and USDA—provided written agency comments and generally concurred with our recommendations. In addition, they provided updated information and clarifications concerning data issues and the host country-led approach. We have reprinted these agencies' comments in appendixes V, VI, VII, and VIII, respectively, along with our responses. Both State and USAID agreed that implementing the first recommendation—to develop an operational definition of food security that is accepted by all U.S. agencies—would be useful, although State expressed some concern regarding the costs of doing so. However, the limitations we found in FACTS could be addressed by improving operating procedures and therefore need not be costly. Moreover, technical comments from OMB suggest that its budget database may be able to address our recommendation to establish a methodology for consistently reporting comprehensive data across agencies and periodically inventory agencies' food security-related programs and funding. State's and USAID's comments confirm our finding that the FACTS data were incomplete and did not reflect all food security funding, as FACTS lacks complete data for supplemental appropriations. This is a serious limitation given the size of these appropriations—$850 million in fiscal year 2008—for Food for Peace Title II emergency food aid, which is USAID's global food security program with the highest level of funding.
In addition, USDA noted that the recommendation gives State the lead role, despite acknowledging that USAID and USDA offer the broadest array of food security programs and activities. The report recognizes the important roles that all the relevant agencies play in the Global Hunger and Food Security Initiative (GHFSI) currently led by State as a whole-of-government effort. The recommendation is also intended to recognize the expertise that various agencies can contribute toward the effort and encourage fully leveraging their expertise. Regarding the second recommendation, the four agencies all noted that the administration recognizes the risks associated with a country-led approach and is taking actions to mitigate these risks. State indicated that the implementation strategy for GHFSI will incorporate mechanisms to manage these risks. Treasury noted that the interagency working group is proposing to increase the amount of technical assistance to recipient countries and that a new multidonor trust fund administered by the World Bank will complement U.S. bilateral food security activities by leveraging the financial resources of other donors and utilizing the technical capacity of multilateral development banks. USAID noted that the administration is planning to implement support to host governments in two phases in order to reduce the risks associated with limited country capacity and potential policy conflicts. USDA pointed out the technical expertise that the department can offer, including its relationships with U.S. land grant colleges and universities and international science and technology fellowship programs to help build institutional and scientific capacity. In addition, DOD, MCC, NSC, OMB, State, Treasury, USAID, USDA, and USTDA provided technical comments on a draft of this report, which we have addressed or incorporated as appropriate. The Peace Corps and USTR did not provide comments.
We are sending copies of this report to interested members of Congress; the Special Assistant to the President and Senior Director for Development, Democracy, and Stabilization; the Secretary of State; and the Administrator of USAID as co-chairs of the NSC/IPC on Agriculture and Food Security; and relevant agency heads. The report is also available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-9601 or melitot@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VIII. We examined (1) the types and funding levels of food security programs and activities of relevant U.S. government agencies and (2) progress in developing an integrated U.S. governmentwide strategy to address global food insecurity, as well as potential vulnerabilities of that strategy. To examine the types and funding levels of food security programs and activities of relevant U.S. government agencies, we administered a data collection instrument to the 10 U.S. agencies that are engaged in food security activities and participated in the Food Security Sub-Policy Coordinating Committee on Food Price Increases and Global Food Security (Food Security Sub-PCC). These agencies included the U.S. Agency for International Development (USAID), Millennium Challenge Corporation (MCC), Department of the Treasury (Treasury), U.S. Department of Agriculture (USDA), Department of State (State), Department of Defense (DOD), U.S. Trade and Development Agency (USTDA), the Peace Corps, Office of the U.S. Trade Representative, and Office of Management and Budget. We had to develop a working definition of food security because there is no commonly accepted governmentwide operational definition that specifies the programs and activities that are food-security related. 
We developed our working definition based on a framework of food security-related activities that we established in prior work on international food assistance, including our 2008 report, and a series of interactions with the relevant agencies over a period of several months. Our interactions with the agencies focused on refining the definition to ensure that it would be commonly understood and applicable to their programs and activities to the extent possible. The working definition that we developed included the following elements: food aid, nutrition, agricultural development, rural development, safety nets, policy reform, information and monitoring, and future challenges to food security. We asked the agencies to indicate which of these activities they performed and to provide funding data—when these data were available and reliable—on the appropriations, obligations, expenditures, and other allocations associated with these activities in fiscal year 2008. We pretested the instrument with officials at DOD, MCC, State, USAID, and USDA, and distributed it electronically in June and July 2009. All 10 agencies responded to our instrument and 7 of them (DOD, MCC, State, Treasury, USAID, USDA, and USTDA) reported funding data. We conducted extensive follow-up with the agencies to determine the completeness, accuracy, and reliability of the data provided. While the agencies provided us with data about their food security programs and activities, we noted limitations in terms of establishing a complete and consistent U.S. governmentwide total. Some agencies could not report funding information for all or some of their food security activities because their databases did not track those specific activities. 
In some cases, agencies could provide funding information for their major food security programs, such as USDA's Food for Progress and Food for Education programs administered by the Foreign Agricultural Service, but were limited in their ability to provide this information for food security activities that spanned several units within agencies. The agencies that were able to report funding information did so using different measures: USAID reported data on planned appropriations (plans for implementing current-year appropriated budgets); State provided appropriations, obligations, and expenditures data for different programs; and DOD, MCC, USDA, and USTDA reported obligations data. Treasury's funding figure is a GAO estimate based on Treasury data for (1) agricultural sector lending commitments made in fiscal year 2008 by multilateral development banks, (2) the U.S. share of capital in the banks which lend to middle-income and creditworthy low-income countries, and/or (3) the U.S. share of total resources provided to the multilateral development bank concessional windows from donor contributions for the replenishments active in fiscal year 2008. In addition, the Treasury funding estimate distinguishes between support to the poorest countries and to middle-income and creditworthy low-income developing countries. As a result, the data reported by the agencies are not directly comparable. Where possible, we performed some cross-checks of the data we received in response to our instrument with data from published sources.
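The Treasury figure described above is an estimate rather than a reported total: a U.S.-share percentage is applied to the banks' agricultural-sector funding. A minimal sketch of that logic follows, with purely hypothetical numbers; the actual GAO calculation also distinguishes concessional windows from nonconcessional lending, which this sketch omits.

```python
def estimate_us_share(mdb_ag_commitments_m, us_share):
    """Estimate the U.S.-attributable portion of a multilateral development
    bank's agricultural-sector commitments by applying the U.S. share of
    the bank's resources (capital or replenishment contributions)."""
    return mdb_ag_commitments_m * us_share

# Hypothetical inputs for illustration only: $1,000 million in FY2008
# agricultural lending commitments and an assumed 18 percent U.S. share.
us_portion_m = estimate_us_share(1000.0, 0.18)
```

The point of the sketch is simply that the Treasury figure scales with both the banks' agricultural portfolio and the assumed U.S. share, so it is sensitive to how each is measured.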
During this review, we compared USAID's planned appropriations for emergency food aid—about $860 million—submitted in response to the instrument to (1) the $1.7 billion that USAID allocated to emergency food aid from the congressional appropriations for Food for Peace Title II food aid for fiscal year 2008; and (2) about $2 billion in emergency food aid funding reported in USAID's International Food Assistance Report (IFAR) for fiscal year 2008, and found a very large discrepancy of between about $840 million and $1.1 billion. In this instance, we relied on the IFAR data instead of the data USAID reported using the Foreign Assistance Coordination and Tracking System (FACTS), because we determined that the IFAR data for emergency food aid were more reliable. Officials at USAID and State/F were unaware of this discrepancy until we brought it to their attention. In formal comments on a draft of this report, State/F and USAID explained that the discrepancy occurred because the funding data for the fiscal year 2008 supplemental appropriations for Food for Peace Title II emergency food aid had not been entered into FACTS. Our own analysis confirmed this explanation. Based on discussions with USAID officials about their procedures for entering data into FACTS, we determined that, once we had made the correction for emergency food aid, the data we received were sufficiently reliable to indicate a minimum amount that USAID had directed to food security programs and activities. However, this amount did not include funding for USAID programs and activities that have a food security component but also have other goals and purposes. In addition, we determined that it likely did not include all supplemental appropriations for the agricultural and other programs and activities reported. Hence, the total actual level of funding is likely greater.
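The cross-check described above reduces to simple arithmetic; a sketch using the figures cited in this paragraph (variable names are ours, not from FACTS or IFAR):

```python
# Figures in millions of USD for FY2008 emergency food aid.
facts_planned = 860     # planned appropriations USAID reported via FACTS
title_ii_alloc = 1700   # USAID allocation from Title II congressional appropriations
ifar_reported = 2000    # emergency food aid reported in USAID's IFAR

# Comparing the FACTS figure against each published source yields the
# discrepancy range cited in the report.
low_gap = title_ii_alloc - facts_planned    # about $840 million
high_gap = ifar_reported - facts_planned    # about $1.1 billion
```

A gap this large against two independent published sources is what prompted the reliance on IFAR rather than FACTS for emergency food aid.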
Overall, based on our follow-up discussions with the agencies, we determined that their responses to the data collection instrument had covered their major food security programs, but that there were weaknesses in their reporting on other programs that addressed aspects of food security. We determined that the reported funding data were sufficiently reliable to indicate the relative size of the major agencies' efforts in terms of approximate orders of magnitude, and included the funding information provided by the agencies—as amended during the course of our follow-up inquiries—in appendix III. However, due to the limitations in the funding data reported by the agencies, we could not make precise comparisons of the agencies' funds for food security in fiscal year 2008, nor could we provide a precise total. As a result, we presented rounded totals for funding in our discussion of our findings. To assess progress in developing an integrated governmentwide strategy to address global food insecurity—as well as potential vulnerabilities of that strategy—we reviewed selected reports, studies, and papers issued by U.S. agencies, multilateral organizations, and research and nongovernmental organizations. In Washington, D.C., we interviewed officials from the National Security Council Interagency Policy Committee on Agriculture and Food Security to discuss the interagency process to develop a governmentwide food security strategy. We reviewed the initial Consultation Document that State issued in September 2009, which is regarded as the strategy under development. Similarly, we discussed the forthcoming U.S. global food security strategy with the officials in the agencies that are developing it, but were not able to fully consider the final draft for this review.
At the time of our review, the Global Hunger and Food Security Initiative working team was in the process of finalizing the strategy, along with an implementation document and a results framework that will provide a foundation for country selection, funding, and mechanisms to monitor and evaluate the strategy. We conducted fieldwork in Bangladesh, Ethiopia, Ghana, Haiti, and Malawi. We selected these countries for fieldwork because the United States has multiple active programs addressing food insecurity there. The proportion of the chronically hungry in these countries—based on the Food and Agriculture Organization's most recent estimates—ranges from 8 percent of the population in Ghana to 58 percent in Haiti. In addition, we selected these countries to ensure geographic coverage of U.S. global efforts in Africa, Asia, and the Western Hemisphere. While this selection is not representative, it ensured that we had variation in the key factors we considered. We did not generalize the results of our fieldwork beyond our selection, and we used fieldwork examples to demonstrate the state of food insecurity in the countries we visited and U.S. efforts to date. In the countries that we selected for fieldwork, we met with U.S. mission and host government, donor, and NGO representatives. We also visited numerous project sites, smallholder farmer groups, and distribution sites funded by the U.S. government and other donors. In addition, we attended the 2009 World Food Summit as an observer and met with the Rome-based UN food and agriculture agencies—namely, the Food and Agriculture Organization, the World Food Program, and the International Fund for Agricultural Development—as well as the U.S. Mission to the United Nations and representatives of other donors such as the United Kingdom's Department for International Development. We conducted this performance audit from February 2009 to March 2010 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The following tables summarize the responses of 10 U.S. agencies to our data collection instrument regarding their global food security programs and activities and associated funding levels in fiscal year 2008. The summaries are listed by agency in order from highest to lowest amount of funding reported. The totals in each summary table may not match the sum of individual rows due to rounding. Table 3 summarizes the U.S. Agency for International Development's (USAID) funding for global food security in fiscal year 2008. USAID reported providing the broadest array of programs and activities and the largest amount of funding. Table 4 summarizes the Millennium Challenge Corporation's obligations for agricultural and rural development in fiscal year 2008. Table 5 presents GAO's estimate of U.S. contributions made by the Department of the Treasury (Treasury) to multilateral development banks for agricultural development, rural development, and policy reform in fiscal year 2008. Table 6 summarizes the U.S. Department of Agriculture's (USDA) funding obligations for global food security programs and activities in fiscal year 2008. Table 7 summarizes the Department of State's (State) funding for global food security programs and activities in fiscal year 2008. Table 8 summarizes the U.S. Trade and Development Agency's (USTDA) funding obligations for global food security-related programs in fiscal year 2008. Table 9 summarizes the Department of Defense's (DOD) Defense Security Cooperation Agency's funding obligations for disaster relief and humanitarian assistance with global food security components in fiscal year 2008. Table 10 summarizes the Peace Corps' response to our data collection instrument.
The Peace Corps did not report any funding data. Table 11 summarizes the U.S. Trade Representative’s (USTR) response to our data collection instrument. USTR did not report any funding data. Table 12 summarizes the Office of Management and Budget’s (OMB) response to our data collection instrument. OMB stated that it is not an implementing agency for global food security activities, and as such does not have programs, activities, or funding to report. The following are GAO’s comments on the Department of State’s letter dated March 1, 2010. The following are GAO’s comments on the Department of the Treasury’s (Treasury) letter dated February 26, 2010. 1. Consistent with Treasury’s comments, the draft report recognized the difference between concessional windows and nonconcessional windows and noted the breakdown between funding to poor and middle-income countries. 2. The definitional issue is a challenge in estimating or determining the funding level for food security provided by the international financial institutions. Accordingly, we discussed this issue with Treasury and mutually agreed on the method to calculate U.S. contributions to multilateral development banks that address global food insecurity. We mutually agreed to use a percentage of the banks’ funding for agricultural development—which is key to food security—as a way to estimate food security funding. The percentage is based on U.S. contributions to the banks. 3. We do not question the appropriateness of the host country-led approach. However, we do point out the potential weaknesses of the approach as risks that the administration should mitigate to ensure successful implementation of the strategy. The following are GAO’s comments on the U.S. Agency for International Development’s (USAID) letter dated February 26, 2010. 1. The report recognizes the progress that U.S. agencies are making toward the development of the strategy, Feed the Future: The Global Hunger and Food Security Initiative Strategy.
The implementation of our recommendations, including developing an operational definition of food security that is accepted by all U.S. agencies, will help better ensure the successful implementation of the evolving strategy. 2. We compared the data in the Foreign Assistance Coordination and Tracking System (FACTS) to data in other sources that reported funding for food security, such as the annual International Food Assistance Report (IFAR), and several years of congressional budget justifications because that is a standard methodology for assessing data reliability. Our goal, as USAID officials were aware through months of discussion, was to collect the most complete and accurate data possible on food security funding. With that in mind, we requested data on supplemental appropriations and were given data tables that included some supplemental appropriations data. In addition, when we alerted USAID officials to the discrepancy we found in the Title II emergency food aid data, they advised us to use the complete funding data reported in IFAR rather than the incomplete data that were reported in FACTS. 3. USAID’s comments confirm our finding that the FACTS data were incomplete and did not reflect all food security funding. While FACTS contains reasonably complete and accurate data for regular food security-related appropriations, it lacks complete data for supplemental appropriations. This is a serious limitation inasmuch as USAID’s global food security program with the highest funding level received a supplemental appropriation of $850 million in fiscal year 2008. 4. The report acknowledges the roles of all development partners, including host governments, multilateral organizations, bilateral donors, and other entities such as nongovernmental organizations, philanthropic foundations, private sector organizations, and academic and research organizations—with whom U.S. agencies will have to coordinate their efforts.
As with other donors, the United States is supporting the Comprehensive Africa Agriculture Development Program (CAADP) to help ensure a coordinated approach. However, we note in the report that the data suggest that the vast majority of African countries have not met their own commitments to direct 10 percent of government spending to agriculture. This calls into question many of these countries’ commitment to agricultural development, which, in turn, could impact the development of technically sound investment strategies for food security that reflect the reality of these countries’ capacity to implement their own strategies, with donor support and assistance. 5. While the two-phased approach in selecting countries for GHFSI assistance may reduce the risks associated with limited host country capacity and potential significant conflicts with U.S. perspectives on sound development policy, we report that two of the five countries currently under consideration as Phase II countries—Rwanda and Tanzania—have not met their 10-percent CAADP pledges (see comment 4). In identifying and selecting Phase I and Phase II countries, the U.S. government should be clear on its application of the criteria that the GHFSI strategy has delineated, which include, among other things, host government commitment, leadership, and governance. 6. Consistent with USAID comments, the report acknowledges the recent steps that USAID has taken to rebuild its staff with technical expertise in agriculture and food security, which is necessary to enhance the agency’s efforts to help strengthen the capacity of host governments in these areas. The following are GAO’s comments on the U.S. Department of Agriculture’s (USDA) letter dated February 22, 2010. 1. We are making our second recommendation to the Secretary of State to work in collaboration with the U.S.
Agency for International Development Administrator, the Secretary of Agriculture, the Chief Executive Officer of the Millennium Challenge Corporation, the Secretary of the Treasury, and other agency heads, as appropriate. We recognize the important roles that all the relevant agencies play in the Global Hunger and Food Security Initiative (GHFSI) currently led by State as a whole-of-government effort. We also recognize the expertise that agencies such as USDA and USAID offer, and encourage fully leveraging their expertise, which is essential to U.S. efforts to help strengthen host governments’ capacity in a country-led approach. USDA’s expertise includes its relationships with U.S. land grant colleges and university partners, as well as the science and technology programs that the department supports. 2. Consistent with USDA’s comments, the report acknowledges USDA’s limited in-country presence and tight travel budgets—issues that agricultural attachés raised during our fieldwork. The report also acknowledges steps that USDA is taking to increase its presence, especially in Africa, in light of the growing role of Africa in USDA’s food security and trade portfolios. 3. We do not question the appropriateness of the host country-led approach. However, we do point out the potential weaknesses of the approach as risks that the administration should mitigate to ensure successful implementation of the strategy. We note that the weak capacity of host governments is a systemic problem in many developing countries. 4. See comment 1. 5. See comment 3. 6. See comment 1. 7. See comment 2. 8. See comment 1. 9. See comment 2. 10. We added a footnote to provide USDA’s explanation for how the migratory bird and monarch butterfly habitat management were related to global food security. 11. Although our review focuses on U.S. 
efforts, consistent with USDA’s comments, the report also acknowledges the roles of all development partners, including host governments, multilateral organizations, bilateral donors, and other entities such as nongovernmental organizations, philanthropic foundations, private sector organizations, and academic and research organizations. International Food Assistance: A U.S. Governmentwide Strategy Could Accelerate Progress toward Global Food Security. GAO-10-212T. Washington, D.C.: October 29, 2009. International Food Assistance: Key Issues for Congressional Oversight. GAO-09-977SP. Washington, D.C.: September 30, 2009. International Food Assistance: USAID Is Taking Actions to Improve Monitoring and Evaluation of Nonemergency Food Aid, but Weaknesses in Planning Could Impede Efforts. GAO-09-980. Washington, D.C.: September 28, 2009. International Food Assistance: Local and Regional Procurement Provides Opportunities to Enhance U.S. Food Aid, but Challenges May Constrain Its Implementation. GAO-09-757T. Washington, D.C.: June 4, 2009. International Food Assistance: Local and Regional Procurement Can Enhance the Efficiency of U.S. Food Aid, but Challenges May Constrain Its Implementation. GAO-09-570. Washington, D.C.: May 29, 2009. USAID Acquisition and Assistance: Challenges Remain in Developing and Implementing a Strategic Workforce Plan. GAO-09-607T. Washington, D.C.: April 28, 2009. Foreign Aid Reform: Comprehensive Strategy, Interagency Coordination, and Operational Improvements Would Bolster Current Efforts. GAO-09-192. Washington, D.C.: April 2009. Foreign Assistance: State Department Foreign Aid Information Systems Have Improved Change Management Practices but Do Not Follow Risk Management Best Practices. GAO-09-52R. Washington, D.C.: November 2008. USAID Acquisition and Assistance: Actions Needed to Develop and Implement a Strategic Workforce Plan. GAO-08-1059. Washington, D.C.: September 26, 2008.
International Food Security: Insufficient Efforts by Host Governments and Donors Threaten Progress to Halve Hunger in Sub-Saharan Africa by 2015. GAO-08-680. Washington, D.C.: May 29, 2008. Somalia: Several Challenges Limit U.S. International Stabilization, Humanitarian, and Development Efforts. GAO-08-351. Washington, D.C.: February 19, 2008. Foreign Assistance: Various Challenges Limit the Efficiency and Effectiveness of U.S. Food Aid. GAO-07-905T. Washington, D.C.: 2007. Foreign Assistance: Various Challenges Impede the Efficiency and Effectiveness of U.S. Food Aid. GAO-07-560. Washington, D.C.: April 2007. Foreign Assistance: U.S. Agencies Face Challenges to Improving the Efficiency and Effectiveness of Food Aid. GAO-07-616T. Washington, D.C.: March 21, 2007. Intellectual Property: Strategy for Targeting Organized Piracy (STOP) Requires Changes for Long-term Success. GAO-07-74. Washington, D.C.: November 8, 2006. Darfur Crisis: Progress in Aid and Peace Monitoring Threatened by Ongoing Violence and Operational Challenges. GAO-07-9. Washington, D.C.: November 9, 2006. Rebuilding Iraq: More Comprehensive National Strategy Needed to Help Achieve U.S. Goals. GAO-06-788. Washington, D.C.: July 11, 2006. Results-Oriented Government: Practices That Can Sustain Collaboration among Federal Agencies. GAO-06-15. Washington, D.C.: October 21, 2005. Maritime Security Fleet: Many Factors Determine Impact of Potential Limits of Food Aid Shipments. GAO-04-1065. Washington, D.C.: September 13, 2004. United Nations: Observations on the Oil for Food Program and Iraq’s Food Security. GAO-04-880T. Washington, D.C.: June 16, 2004. Combating Terrorism: Evaluation of Selected Characteristics in National Strategies Related to Terrorism. GAO-04-408T. Washington, D.C.: February 3, 2004. Foreign Assistance: Lack of Strategic Focus and Obstacles to Agricultural Recovery Threaten Afghanistan’s Stability. GAO-03-607. Washington, D.C.: June 30, 2003.
Foreign Assistance: Sustained Efforts Needed to Help Southern Africa Recover from Food Crisis. GAO-03-644. Washington, D.C.: June 25, 2003. Food Aid: Experience of U.S. Programs Suggest Opportunities for Improvement. GAO-02-801T. Washington, D.C.: June 4, 2002. Foreign Assistance: Global Food for Education Initiative Faces Challenges for Successful Implementation. GAO-02-328. Washington, D.C.: February 28, 2002. Foreign Assistance: U.S. Food Aid Program to Russia Had Weak Internal Controls. GAO/NSIAD/AIMD-00-329. Washington, D.C.: September 29, 2000. Foreign Assistance: U.S. Bilateral Food Assistance to North Korea Had Mixed Results. GAO/NSIAD-00-175. Washington, D.C.: June 15, 2000. Managing for Results: Barriers to Interagency Coordination. GAO/GGD-00-106. Washington, D.C.: March 29, 2000. Foreign Assistance: Donation of U.S. Planting Seed to Russia in 1999 Had Weaknesses. GAO/NSIAD-00-91. Washington, D.C.: March 9, 2000. Foreign Assistance: North Korea Restricts Food Aid Monitoring. GAO/NSIAD-00-35. Washington, D.C.: October 8, 1999. Food Security: Factors That Could Affect Progress toward Meeting World Food Summit Goals. GAO/NSIAD-99-15. Washington, D.C.: March 22, 1999. Food Security: Preparations for the 1996 World Food Summit. GAO/NSIAD-97-44. Washington, D.C.: November 7, 1996.
Global hunger continues to worsen despite world leaders' 1996 pledge--reaffirmed in 2000 and 2009--to halve hunger by 2015. To reverse this trend, in 2009 major donor countries pledged $22 billion in a 3-year commitment to agriculture and food security in developing countries, of which $3.5 billion is the U.S. share. Through analysis of agency documents, interviews with agency officials and their development partners, and fieldwork in five recipient countries, GAO examined (1) the types and funding of food security programs and activities of relevant U.S. government agencies; and (2) progress in developing an integrated U.S. governmentwide strategy to address global food insecurity as well as potential vulnerabilities of that strategy. The U.S. government supports a wide variety of programs and activities for global food security, but lacks readily available comprehensive data on funding. In response to GAO's data collection instrument to 10 agencies, 7 agencies reported funding for global food security in fiscal year 2008 based on the working definition GAO developed for this purpose with agency input. USAID and USDA reported the broadest array of programs and activities, while USAID, the Millennium Challenge Corporation, Treasury, USDA, and State reported providing the highest levels of funding for food security. The 7 agencies together directed at least $5 billion in fiscal year 2008 to global food security, with food aid accounting for about half of that funding. However, the actual total level of funding is likely greater. GAO's estimate does not account for all U.S. government funds targeting global food insecurity because the agencies lack (1) a commonly accepted governmentwide operational definition of global food security programs and activities as well as reporting requirements to routinely capture data on all relevant funds; and (2) data management systems to track and report food security funding comprehensively and consistently. 
The administration is making progress toward finalizing a governmentwide global food security strategy--expected to be released shortly--but its efforts are vulnerable to data weaknesses and risks associated with the strategy's host country-led approach. The administration has established interagency coordination mechanisms at headquarters in Washington, D.C., and is finalizing an implementation document and a results framework. However, the lack of readily available comprehensive data on current programs and funding levels may deprive decision makers of information on available resources and a firm baseline against which to plan. Furthermore, the host country-led approach, although promising, is vulnerable to (1) the weak capacity of host governments, which can limit their ability to sustain donor-funded efforts; (2) a shortage of expertise in agriculture and food security at U.S. agencies that could constrain efforts to help strengthen host government capacity; and (3) policy differences between host governments and donors, including the United States, which may complicate efforts to align donor assistance with host government strategies.
Ultra-filtration technology separates the components of milk according to their size by passing milk under pressure through a thin porous membrane. Specifically, ultra filtration allows the smaller lactose, water, mineral, and vitamin molecules to pass through the membrane, while the larger protein and fat molecules—key components for making cheese—are retained and concentrated. (See app. II for further explanation of ultra filtration and its use in the cheese-making process.) Although ultra-filtration equipment is expensive, it creates an ingredient well suited for making cheese and other food products requiring a high milk protein content. In addition, the removal of water and lactose reduces the volume of milk, and thereby lowers its transportation and storage costs. All ultra-filtered milk imported into the United States in 2000 was in a dry powder form. The U.S. Customs Service’s milk protein concentrates classification includes processed milk products containing between 40 percent and 90 percent protein. Imported powdered milk products with less than 40 percent protein are usually classified as nonfat dry milk and are subject to a tariff-rate quota that limits the amount that can be imported at a low tariff rate. In addition to ultra-filtered milk products, the milk protein concentrate classification includes concentrates made through other processes, such as blending nonfat dry milk with highly concentrated proteins. These products are often tailored to a specific use in products requiring a protein ingredient. FDA’s standards of identity regulations permit cheese manufacturers under the "alternate make" provisions to use ultra filtration as an acceptable procedure during the cheese-making process. Consequently, milk that has been ultra-filtered as an integral part of the cheese-making process is acceptable as a component of a standardized cheese, according to FDA.
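The transport-cost advantage described above can be sketched with the example figures cited later in this report ($4.50 per hundredweight to haul whole milk versus $1.20 once on-farm ultra filtration removes two-thirds of the liquid). The lot size below is a hypothetical illustration, not a figure from the report:

```python
# Hauling-cost sketch. The per-hundredweight rates come from the
# shipment example cited in this report; the 300-cwt lot size is
# an illustrative assumption.
lot_cwt = 300                            # hundredweight of whole milk (hypothetical)
cost_whole = lot_cwt * 4.50              # hauling as whole milk: $1,350
cost_filtered = lot_cwt * 1.20           # hauling the filtered equivalent: $360
savings_pct = (1 - 1.20 / 4.50) * 100    # roughly 73 percent lower

print(cost_whole, cost_filtered, round(savings_pct))
```

As the report notes, this advantage is justified only for long-distance hauling, because the capital costs of ultra-filtration equipment are high.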
In 1999 and 2000, organizations representing cheese makers petitioned FDA to amend its cheese standards to expand its definition of milk to include wet ultra-filtered milk. The industry petitioners requested permission to use wet ultra-filtered milk from external sources as an ingredient in standardized cheeses because it would increase the efficiency of cheese manufacturing and would explicitly recognize filtered milk products as interchangeable with other forms of milk. One of the industry petitioners, who had also asked FDA to allow the use of the dry ultra-filtered milk in standardized cheeses, later withdrew this part of the request when U.S. milk producers raised concerns that increased imports might displace domestic milk products. FDA has not yet acted on the petitions. Specific data on U.S. imports of ultra-filtered milk do not exist because these imports are included in the broader classification of milk protein concentrates. Milk protein concentrate imports increased 56-fold from 1990 to 1999. In 1999, they came primarily from New Zealand, Ireland, Germany, Australia, the Netherlands, and Canada. Milk protein concentrates are used as ingredients in cheese, frozen desserts, bakery products, and sports and other nutritional supplement products. The United States has no quota restrictions on milk protein concentrate imports, and duties are low. FDA officials told us that these imports pose little food safety risk and therefore receive minimal monitoring. U.S. milk protein concentrate imports grew from 805 metric tons in 1990 to 7,288 metric tons in 1995 to 44,878 metric tons in 1999 (see fig. 1). Imports almost doubled in 1999 alone. The volume of imported milk protein in these concentrates was approximately equivalent to 0.8 percent to 1.8 percent of the total U.S. production of milk protein in 1999. The estimate’s range reflects the fact that imported milk protein concentrates may contain between 40- and 90-percent protein. The U.S.
Customs Service does not collect data on the protein percentage of milk protein concentrate imports. The total number of countries exporting milk protein concentrates to the United States grew from 4 to 16 from 1990 to 1999. (See app. III.) Australia was the only country to export milk protein concentrates in each of the 10 years. Figure 2 shows the growth in imports for each major exporter and other countries from 1995 to 1999. The share of imports among the six largest exporting countries rose from 75 to 95 percent during this 5-year period. Although the U.S. Customs Service does not categorize its data on milk protein concentrate imports according to the manufacturing process used, representatives of Australian and New Zealand exporters assured us that their milk protein concentrate exports were all made using ultra filtration. Conversely, Canadian government officials said all of their country’s milk protein concentrate exports to the United States are made by blending milk proteins. U.S. and foreign industry executives told us that U.S. milk protein concentrate imports rose rapidly in recent years primarily because of (1) the relationship between the U.S. and international prices of milk protein, especially nonfat dry milk, and (2) the growth of the U.S. nutritional foods industry and many other new products using milk protein concentrates. According to these executives, international milk prices were below U.S. milk prices in recent years, giving U.S. dairy food manufacturers a financial incentive to substitute imported milk protein concentrates for domestic milk in products such as nonstandardized cheese. This price differential primarily stimulated U.S. imports of milk protein concentrates having lower percentages of protein—between 40 and 56 percent. More recently, U.S. demand for these milk protein concentrates has decreased, according to an Australian exporter, because the international price of milk protein is near the U.S. price. 
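The 56-fold increase and the 0.8-to-1.8-percent range reported above follow from simple arithmetic on the import figures. The sketch below checks them; the implied total U.S. milk protein production is our back-of-envelope inference, not a figure from this report:

```python
# Reported U.S. milk protein concentrate (MPC) imports, in metric tons.
imports = {1990: 805, 1995: 7_288, 1999: 44_878}

# Growth from 1990 to 1999: roughly the 56-fold increase cited above.
fold_increase = imports[1999] / imports[1990]   # ~55.7

# MPC imports contain between 40 and 90 percent protein, so the
# protein actually imported in 1999 falls somewhere in this band.
protein_low = imports[1999] * 0.40              # ~17,951 metric tons
protein_high = imports[1999] * 0.90             # ~40,390 metric tons

# Equating this band to 0.8-1.8 percent of total U.S. milk protein
# production implies production of roughly 2.2 million metric tons
# (an inference for illustration only, not a reported figure).
implied_production = protein_low / 0.008

print(round(fold_increase, 1), round(protein_low), round(protein_high))
```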
The strong growth of the U.S. nutritional foods industry has created new demand for high-protein milk protein concentrates that are 70- to 85-percent protein. Representatives of Australian and New Zealand exporters told us that this industry grew out of extensive research and development to create nutritional supplements for athletes, the elderly, and health-conscious individuals. Milk protein concentrates provide an important source of protein in these nutritional products. Because high-protein milk protein concentrates are often customized for use in specific end products, their producers and exporters can sell them at higher prices than the equivalent amount of domestic milk protein, the exporters said. Despite their higher prices, the demand for these specialized high-protein products in the United States is strong. Industry executives noted that high-protein milk protein concentrate imports have not displaced domestic milk supplies because they are filling the growing demand for new nutritional products. In addition, a trade association representative and an academic expert noted that economic disincentives have prevented U.S. production of dry milk protein concentrates. Federal agencies and industry trade associations do not collect data on U.S. companies’ use of imported milk protein concentrates because this information is considered proprietary. According to milk protein concentrate exporters, U.S. cheese, frozen dessert, bakery, and nutritional foods industries primarily use the dry milk protein concentrate imports. In particular, dry milk protein concentrates containing lower levels of protein—42 to 56 percent—can be added to the raw milk used to make cheese, ensuring a consistent composition regardless of the seasonal variations in milk. Various concentrations of milk protein are also used in ice cream and other frozen desserts, bakery and confection products, and nonstandardized cheese.
Milk protein concentrates containing higher protein levels—70 to 85 percent—are chiefly used in sport-, adult-, and hospital-nutrition products. Concentrates containing 90-percent protein are especially useful for manufacturers seeking lactose- and sugar-free claims for their products, according to a major exporter. (See app. IV for more details on the composition and uses of dry milk protein concentrate imports provided by some exporters.) The U.S. Customs Service and FDA share responsibility for monitoring milk protein concentrate imports for compliance with trade or food safety requirements. Unlike nonfat dry milk imports, which have less than a 40-percent protein content and are subject to a tariff-rate quota, milk protein concentrate imports are not restricted by a quota. The United States imposes a duty of $0.0037 per kilogram on all milk protein concentrate imports except Canadian imports, which are duty-free under the North American Free Trade Agreement. The milk protein concentrates classification, which is intended to include all nonfat dry milk powder containing between 40 and 90 percent protein regardless of its method of production, allows a broad range of milk protein concentrates to enter the United States, according to the U.S. Customs Service. FDA and USDA’s Food Safety and Inspection Service are responsible for ensuring that imported food products are safe, wholesome, and properly labeled. FDA and USDA work with the U.S. Customs Service to ensure the safety of imported food products by monitoring and testing samples of imported foods. Customs uses a computer system containing information provided by the milk protein concentrate importers and FDA-developed screening criteria to determine which shipments may be automatically released and which should be subjected to inspection or laboratory testing. Products such as milk protein concentrates, which are believed to pose minimal safety risks, are frequently released automatically.
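As a rough illustration of how low the $0.0037-per-kilogram duty noted above is in absolute terms, the sketch below computes the duty on a shipment; the shipment size is a hypothetical assumption, not a figure from this report:

```python
# Duty on milk protein concentrate imports, as cited in this report
# (Canadian imports are duty-free under NAFTA).
duty_per_kg = 0.0037  # dollars per kilogram

# A hypothetical 20-metric-ton shipment (20,000 kg) -- illustrative only.
shipment_kg = 20 * 1_000
duty_owed = shipment_kg * duty_per_kg

print(f"Duty on a 20-metric-ton shipment: ${duty_owed:.2f}")
```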
FDA annually inspects or conducts laboratory analyses on less than 2 percent of all types of imported food shipments. FDA officials told us that they have little concern about the safety of dry milk protein concentrates because the products are treated with heat during pasteurization and drying, which kills pathogens. In addition to screening milk protein concentrate imports, the United States has agreements with Australia, Belgium, Denmark, France, Ireland, the Netherlands, New Zealand, Norway, and Sweden regarding dry milk and milk protein imports. The agreements are to ensure that these countries adhere to FDA’s food safety regulations, thereby minimizing the need for FDA to inspect these imports. No country has reached a broader agreement with the United States establishing that its entire food safety system is equivalent to that of the United States, an arrangement that would enable FDA to apply fewer resources to screening its imports. Dairy products, including milk protein concentrate products, will be subject to a not-yet-implemented “veterinary equivalency agreement” with the European Union and its 15 member countries. This agreement would provide a framework for determining the future equivalence of the European Union’s food safety system. Many U.S. cheese plants produce and use wet ultra-filtered milk to make standardized and nonstandardized cheeses, according to industry executives. However, federal and industry sources could not provide data on the amount of wet ultra-filtered milk produced domestically or on its use. USDA and state officials told us that 22 dairy manufacturing plants nationwide and 4 large dairy farms in New Mexico and Texas have the capacity to make wet ultra-filtered milk. Most of the ultra-filtered milk is used within the dairy manufacturing plants to make cheese, although some is transported to other plants for use. The milk concentrated at on-farm ultra-filtration plants is transported mainly to cheese plants in the Midwest to make standardized cheese or other products.
Data are not routinely collected on the amount of ultra-filtered milk produced by U.S. cheese plants or other food processors for internal use or for shipment elsewhere, according to USDA and FDA officials and industry executives. USDA’s Agricultural Marketing Service (AMS) staff, which oversees the administration of milk marketing in 11 regions across the United States, collects data on the intended use of the milk but not on intermediate products, such as ultra-filtered milk, that are often produced and used in making cheese. Similarly, AMS staff said that ultra-filtered milk produced in one plant for use in another is included with other bulk milk products and not tracked separately. Trade association executives told us that they have no data on the amount of wet ultra-filtered milk U.S. dairy manufacturing plants produced and used. Trade association staff said that manufacturers would probably not respond to a request for such data because the information is considered proprietary and because of concern surrounding the petitions to use wet ultra-filtered milk now before FDA. Executives involved with the relatively new on-farm production of ultra-filtered milk provided overall annual production data, which are discussed below. Many U.S. cheese-making plants have adopted ultra filtration of milk as part of the cheese-making process under the provisions in FDA’s standards of identity regulations allowing for “alternate make” procedures for many of the standardized cheese and related cheese products. The “alternate make” procedures accommodate innovation by allowing these standardized cheeses to be made by any procedure that produces a finished cheese having the same physical and chemical properties as the cheese prepared by the traditional process. Filtration removes the liquid components of milk that would otherwise be removed in the traditional process when whey is separated from cheese curd. 
Proponents of ultra filtration state that the cheese produced is also nutritionally equivalent. The goal of ultra-filtered milk producers is to create the ideal combination of milk solids (i.e., protein and fat) for the particular style of cheese. AMS’ milk marketing staff provided a list of milk processing plants that have ultra-filtration equipment for milk in the 47 states covered at least in part by federal milk market orders. Three states—California, Alaska, and Hawaii—are not covered by federal regulation. We contacted officials in California—a large dairy state that regulates its dairy industry separately— to acquire similar information. The 48 states reported a combined total of 22 dairy manufacturing plants with ultra-filtration equipment for milk. AMS and California officials reported that at least five of these plants transported a portion of their ultra-filtered milk product to other plants. They further stated that it was possible for cheese makers to use their ultra-filtration equipment to concentrate the whey byproduct from the cheese-making process rather than to concentrate the milk entering the cheese-making process. AMS officials said that, to the extent they were aware, the transportation of ultra-filtered milk between manufacturing plants typically involved transfers between facilities of the same company. The American Dairy Products Institute and the National Cheese Institute of the International Dairy Foods Association have petitioned FDA to amend its standards of identity for cheese to include wet ultra-filtered milk in the definition of milk allowed in standardized cheese. According to the American Dairy Products Institute, ultra-filtration makes cheese manufacturing more efficient using new technology and may benefit consumers if cost savings are passed on. It also allows more efficient movement of milk from areas with an excess of fluid milk to areas with an insufficient supply, the American Dairy Products Institute said. 
The National Cheese Institute noted that the “alternate make procedure,” already included in the regulations for some of the standardized cheeses, provides a legal basis for the use of filtered milk in the manufacture of standardized cheese. However, the institute wants to see the standards amended to explicitly recognize ultra-filtered milk in the standards’ definition of milk. By explicitly recognizing ultra-filtered milk as milk for cheese manufacturing, FDA would allow manufacturers to use ultra-filtered milk in the standardized cheeses that do not include “alternate make procedure” provisions. The National Cheese Institute states that the greater use of ultra-filtered milk would help manage seasonal imbalances in the milk supply in various regions and in the demand for cheese. The institute said the lower hauling costs for filtered milk have enabled cheese makers to buy milk from distant regions and meet their needs for manufacturing, especially when regional milk supplies are disrupted by adverse conditions. FDA said it has exercised enforcement discretion on ultra-filtered milk and has not enforced the standards of identity against cheese plants that use wet ultra-filtered milk produced outside of their plants. In 1996, T.C. Jacoby & Co., a St. Louis broker of dairy products, requested that FDA allow ultra-filtered milk from an on-farm ultra-filtration plant in New Mexico to be shipped to Bongards Creamery of Bongards, Minnesota, to make cheddar cheese. The broker also raised the issue of how to label the cheese to indicate the ultra-filtered milk ingredient in the final cheese product. FDA responded that the ultra-filtered milk could be used by Bongards to make cheddar cheese as long as the cheese was nutritionally, physically, and chemically the same as cheese produced traditionally. FDA allowed the label of the cheddar cheese to state that “milk” was an ingredient, provided that the cheddar cheese manufactured from it was equivalent.
FDA allowed a pilot project for one farm and one cheese plant. The joint venture involving Jacoby & Co. subsequently expanded its production of ultra-filtered milk to three additional farms and its sales to manufacturers in Idaho, Illinois, Iowa, Minnesota, North Dakota, Ohio, Pennsylvania, South Dakota, and Wisconsin. FDA is considering the petitions but has taken no action to revise its standards of identity to reflect this use of ultra-filtered milk. The joint venture’s dairy, Select Milk Producers Inc., ultra-filters unheated whole raw milk on three farms in New Mexico and one in Texas. The process reduces the volume and weight of the whole milk the dairy starts with and reduces transportation costs for shipping it to manufacturers. The joint venture, which first sold wet ultra-filtered milk in 1997, reported sales of approximately 150 million pounds of ultra-filtered milk in 2000, mainly for making standardized cheeses. On-farm ultra filtration of milk removes two-thirds of the liquid components of the milk—mainly water—greatly reducing the cost of transporting the ultra-filtered milk to market. For example, company officials noted one shipment for which the costs were reduced from $4.50 per hundredweight of milk to $1.20 for the remaining filtered milk. They added, however, that this cost advantage is justified only for long-distance hauling because the capital costs of installing ultra-filtration equipment are high. (See app. V for the composition of the various concentrates of wet ultra-filtered milk.) FDA relies on its own inspections and those conducted by the states under contract or partnership agreements to enforce its standards of identity regulations in about 1,000 cheese-making plants across the country. In fiscal year 1999, FDA inspected nine cheese-making plants for compliance with food labeling and economic regulations, which include checking compliance with the standards of identity for cheese. 
None of these inspections were done exclusively to monitor for compliance with standards of identity, and data indicating the number of these inspections that actually covered the standards of identity were not available. Similarly, the states conducting inspections on FDA’s behalf did not exclusively inspect for the identity standards for cheese. In fiscal year 1999, FDA and state inspectors reported no violations for the use of imported ultra-filtered milk or milk protein concentrates to make standardized cheese. In addition, states conduct their own inspections of cheese plants for compliance with standards of identity requirements under state law. For example, in 2000, Vermont inspectors found two cheese plants using imported milk protein concentrates to make standardized cheeses in violation of federal and state regulations. Vermont issued warning letters, and the plants discontinued this use. FDA reported that its own inspections of cheese-making plants for compliance with FDA’s food labeling and economic regulations, which include the standards of identity for cheese, are relatively infrequent. In fact, they accounted for 9 of the total 499 domestic inspections for composition, standards, labeling, and economic regulations in all types of food manufacturing plants during fiscal year 1999. FDA said none of the nine inspections in cheese plants was done specifically to check for compliance with standards of identity on cheese. FDA also said that the agency devoted 0.7 staff year during fiscal year 1999 to FDA’s food labeling and economic regulations for cheese. However, FDA reported that its inspectors and state inspectors working for FDA inspected about 300 of approximately 1,000 cheese-making plants throughout the United States in fiscal year 1999 for a variety of other purposes. FDA inspected 108 plants on its own. 
FDA officials said that states inspected 65 cheese plants under partnership agreements, 125 cheese plants under 37 contracts, and 2 under both a state partnership and a contract. Overall, FDA reported inspections of about 3,500 of about 22,000 food manufacturing plants in fiscal year 1999. To increase the number of inspections of food manufacturing firms, FDA contracts with or forms partnerships with state agencies to help carry out monitoring responsibilities relating to food safety and quality. FDA provides its compliance policies and inspection guidelines to state inspectors and sometimes conducts joint inspections with state inspectors. In addition, states such as Wisconsin and Vermont have adopted FDA’s cheese standards of identity as their own standards under state law. In fiscal year 2000, FDA had contracts with 37 states to cover food inspections. Under these contracts, FDA paid states to conduct and report on food inspections of all types. State officials then inspected locations under state or FDA authority. The number of completed inspections to check for compliance with the standards of identity for cheese, however, was not available. Officials at Wisconsin’s Department of Agriculture, Trade, and Consumer Protection told us they worked closely with FDA on contracted inspections, meeting annually with FDA officials to plan and coordinate their inspection efforts to avoid duplication. At these meetings, FDA provides state authorities with a list of the dairy establishments for Wisconsin inspectors to visit during the year. In addition, for each inspection done under its contract with FDA, Wisconsin inspectors complete an FDA inspection report describing the inspection results. Wisconsin officials reported that they did 82 inspections under the contract with FDA in fiscal year 1999 and 62 in fiscal year 2000. Wisconsin officials told us that the state had 142 cheese-making plants in 1999 that produced many types of cheese. 
Wisconsin dairy inspectors check cheese plants for safety and sanitation and for compliance with food composition and labeling regulations—including standards of identity—and collect product samples. Wisconsin officials said their inspectors make on-site visits to cheese plants on a semiannual basis, taking a total of 36 samples each year for laboratory analysis of microbes, moisture content, and comparison of ingredients with FDA and Wisconsin standards. Wisconsin estimated that it expended 3.1 and 2.8 staff years in fiscal years 1999 and 2000, respectively, on routine inspections of cheese plants, not including nonroutine and contract inspections. State officials did not have the data to estimate the time spent specifically on standards of identity. FDA and the states also have 15 partnership agreements related to FDA’s regulation of dairy products. Under these partnerships, FDA and the states (or food-related organizations) collaborate on such efforts as training inspectors and sharing test results. FDA does not fund activities carried out by states under its partnership agreements, and the states bear the responsibility for handling any violations. In addition to these efforts, the states conduct their own inspections under state law, which can include the standards of identity. For example, both Vermont and Wisconsin routinely inspect plants for compliance with state laws and regulations, and both have adopted FDA’s standards of identity as part of their states’ food safety and quality laws. Vermont officials told us that the state has no formal working relationship, such as a partnership or a contract, with FDA relating to dairy inspections. However, Vermont’s dairy inspectors coordinate with FDA on dairy matters. Vermont officials stated that about 2.0 staff years are used annually to inspect about 40 dairy plants, 28 of which make cheese. Vermont officials inspect the dairy plants for sanitation and compliance with cheese standards of identity and collect samples. 
Tests of samples for microbes and animal drugs are done about once a month at the larger dairy plants. The inspectors visit the dairy plants on a quarterly basis and the larger plants about 20 times per year, according to Vermont officials. FDA and the two states we contacted—Vermont and Wisconsin—report few violations of FDA’s cheese standards of identity. In fiscal year 1999, FDA reported no violations involving the use of ultra-filtered milk in standardized cheese in federal and contracted state inspections. Likewise, Wisconsin officials told us that they had found no cheese standards of identity violations relating to the use of ultra-filtered milk in cheese in the past few years. They did report a December 2000 incident in which a cheese plant was found to be using milk protein concentrate in nonstandardized ricotta cheese. While the use of the ingredient was not a violation of state or federal standards, the product’s label did not identify the ingredient as required by law. The plant stopped using the milk protein concentrate until the label could be corrected, state officials reported. In 2000, Vermont inspectors found two cheese plants using imported milk protein concentrate to make cheeses covered by FDA’s standards of identity in violation of federal and state law. Vermont officials wrote letters to the plants warning that this ingredient was not permitted by the standards. Vermont officials said the plants discontinued its use and the cases were closed. We provided FDA with a draft of this report for its review and comment. FDA generally agreed with the report and provided some specific comments, which we have incorporated into the report as appropriate. FDA’s comments and our responses are in appendix VI. To identify the trends in ultra-filtered milk imports into the United States between 1990 and 1999, we obtained data compiled by the U.S. Census Bureau from the U.S. 
Customs Service on annual imports of milk protein concentrates, which include ultra-filtered milk. To identify any quantity, tariff, or other trade restrictions applicable to imported ultra-filtered milk, we reviewed the U.S. Harmonized Tariff Schedule; interviewed USDA, Customs, and FDA officials and representatives of domestic and foreign dairy trade associations; and reviewed relevant reports and publications. To identify the uses of dry ultra-filtered milk and milk protein concentrates in the manufacture of cheese and other products in the United States, we obtained information from trade association representatives, domestic and foreign company executives, and federal officials. To identify the use of domestically produced ultra-filtered milk in the manufacture of cheese and other food products in the United States, we reviewed relevant FDA standards of identity and other regulations and available published reports. We also interviewed USDA officials; California, Vermont, and Wisconsin state officials; trade association representatives; company executives; and academicians. To identify FDA’s and state agencies’ efforts to enforce the federal standards of identity regulations, particularly the use of ultra-filtered milk in cheese production, we interviewed officials of USDA, FDA, Wisconsin, and Vermont regarding the extent of their activities and amount of staff resources used to monitor the standards. We conducted our review from August 2000 through February 2001 in accordance with generally accepted government auditing standards. We are sending copies of this report to the congressional committees with jurisdiction over dairy products; the Honorable Ann M. Veneman, Secretary of Agriculture; the Honorable Dr. Bernard Schwetz, Acting Commissioner of the Food and Drug Administration; the Honorable Charles W. Winwood, Acting Commissioner, U.S. Customs Service; the Honorable Mitchell E. 
Daniels, Jr., Director of the Office of Management and Budget; and other interested parties. We will make copies available to others on request. If you have any questions about this report, please contact me or Richard Cheston, Assistant Director, at (202) 512-3841. Key contributors to this report were Diana P. Cheng, Jonathan S. McMurray, John P. Scott, and Richard B. Shargots. Table 1 below shows, by section number, the cheeses and related cheese products covered by the Food and Drug Administration’s (FDA) Standards of Identity regulations (21 C.F.R., Part 133, Subpart B). Because these regulations do not identify ultra-filtered milk as an approved ingredient, manufacturers of standardized cheeses and related cheese products cannot use ultra-filtered milk that is produced outside the cheese-making plant. (FDA has allowed an exception to this for a pilot project producing ultra-filtered milk on a farm in New Mexico for use in a Minnesota cheese plant.) If milk protein concentrates are used in a cheese product, then the product cannot bear the name of a standardized product; the standardized products are listed below. However, milk protein concentrates can be used as ingredients for nonstandardized cheese products not listed, such as feta cheese and pizza cheese. FDA also has standards of identity for many other product types, including milk and cream, frozen desserts, bakery, macaroni and noodles, and frozen vegetables. Cheese making combines an ancient art with scientific knowledge to manufacture uniform products by removing water and retaining the desirable solids in milk. Prior to making cheese, cheese makers test the quality of the milk. Then they may adjust for seasonal variations in the composition of milk, specifically milk proteins, to ensure that uniform milk is used to manufacture consistent cheese throughout the year. 
Traditionally, cheese makers use nonfat dry milk or liquid condensed milk as the chief ingredient to adjust the milk proteins, but the lactose content of these forms of milk limits their use. Ultra-filtered milk provides cheese makers with an alternative product for this purpose. Ultra filtration concentrates the milk proteins by removing the water and lactose in milk, permitting greater efficiency in cheese making. Because the starting ingredients contain less liquid, the volume of whey (primarily water, lactose, whey proteins, and minerals) removed during cheese making is reduced, and less effort and time are spent expelling liquid from the cheese curds as they are transformed into cheese. Figure 3 is a simplified diagram of the ultra-filtration process that enlarges a portion of the process to show how milk components are separated. In ultra filtration, a filter (membrane with minute pores) retains the larger molecules (fat and protein) and allows the smaller molecules (water, lactose, and some minerals) to pass through. Although vitamins are a component in milk, they are not shown in the figure because they are found within the fat and water components. Ultra filtration is not 100-percent efficient because pressure pushes some milk parallel to the filter, so not all of the milk comes in contact with it. Therefore, wet ultra-filtered milk will contain some water, lactose, and minerals. Because of practical limitations on the amount of ultra-filtered milk that can be used in making cheese, ultra-filtered milk is normally used to supplement the skim or whole milk used to make cheese. Cheese-making experts said that the majority of cheese vats in U.S. plants are not designed to use only ultra-filtered milk, which is thicker than skim or whole milk. A high proportion of ultra-filtered milk would cause the equipment to malfunction. 
In addition, because highly concentrated ultra-filtered milk is not nutritionally equivalent to fluid milk, it could not be used as the sole ingredient in cheese. If cheese were made entirely from ultra-filtered milk, its texture, composition, and other characteristics would be different from cheese made traditionally. Although experts believe that these limitations can be addressed, the limitations currently prevent cheese makers from making cheese entirely from ultra-filtered milk at a concentration greater than “2X,” in which half of the water is removed, leaving twice as many solids (fat and protein) as whole milk. Figure 4 shows a flowchart of the cheese-making process. Ultra-filtered milk can be used to maintain consistent levels of fat and protein components in the raw milk used to make cheese, ensuring that cheese quality is the same throughout the year. It can also be used in larger quantities to increase the total solids (fat and protein) in the raw milk, resulting in larger yields. Cheese making involves transforming milk proteins into solid lumps (curds), separating the curds’ solids from the liquid (whey), shaping or pressing these curds into molds, and aging the shaped curds. Table 2 shows U.S. imports of milk protein concentrates between 1990 and 1999. Between 1990 and 1994, U.S. imports of milk protein concentrates increased 15-fold, and the number of suppliers grew from 4 countries to 11 countries. From 1995 to 1999, U.S. imports of milk protein concentrates increased 6-fold. Over the 10-year period, U.S. imports of milk protein concentrates increased 56-fold. Australia is the only country that exported milk protein concentrates to the United States in each year during this 10-year period. Table 3 provides a general overview of the milk protein concentrate (MPC) products made from skim milk and their suggested uses, as provided by their distributors. 
It is not a comprehensive list because the uses for milk protein concentrate are reportedly expanding and developing, and only a few of the exporters we contacted opted to provide this information. Milk protein concentrates are typically described by their approximate protein content expressed as a percentage. For example, MPC 42 contains 42 percent protein based on dry weight. The other components in the product vary depending on its producer and customization of the products to meet customer specifications. Table 4 provides the composition of various concentrations of wet ultra-filtered milk made from whole milk. The composition of ultra-filtered milk depends on the composition of the raw milk, which may vary depending on the season in which the milk was produced. Because ultra filtration removes liquids and concentrates the protein and fat components of milk, the table indicates the degree to which solids are concentrated. For example, in a “2X” concentration, half of the water is removed, leaving twice as many solids (i.e., fat and protein) compared with whole milk. The following are GAO’s comments on the Food and Drug Administration’s written response to our draft report dated February 2, 2001. 1. We have substituted these sentences as suggested. 2. We have added language to the footnote and to appendix V to explain that we are referring to the amount of “true” protein in whole milk, which is approximately 3 percent. While some sources in the literature cite the higher value of “crude” protein, we believe “true” protein is the best value to use in our example. According to academic experts, the total or “crude” protein in milk that FDA refers to is estimated from measuring the total nitrogen content of milk. The total amount of nitrogen comes from both protein and non-protein sources. The experts noted that the measurement of “crude” protein is inaccurate because test equipment does not measure the amount of non-protein nitrogen precisely. 
Testing for “true” protein only, which electronic testing equipment can accurately detect, corrects this measurement error. In addition, USDA’s AMS, in its 1999 decision on milk market order reform, stated that the use of total or “crude” protein measurement overstates the amount of protein in milk by the amount of non-protein nitrogen, which has little or no effect on dairy product yields. Therefore, AMS decided that milk should be priced under federal milk orders on the basis of its true protein content. 3. We have revised the sentence as suggested.
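The “2X” concentration arithmetic described in this report (half of the water removed, twice the solids) can be sketched with a simple mass balance. This is an illustrative sketch only, not FDA or AMS methodology: the function name is hypothetical, the 3.25-percent fat figure is an assumed typical value for whole milk, and the sketch assumes fat and protein are fully retained, whereas real membranes (as noted above) also retain some water, lactose, and minerals.

```python
def concentrate(mass_lb, fat_frac, protein_frac, factor):
    """Mass-balance sketch of ultra filtration at a given concentration
    factor (2.0 for "2X"). Assumes fat and protein are fully retained
    and only permeate (water, lactose, some minerals) is removed, so
    the batch shrinks to 1/factor of its starting weight while the
    retained solids rise in concentration by the same factor."""
    return {
        "mass_lb": mass_lb / factor,
        "fat_pct": round(fat_frac * factor * 100, 2),
        "protein_pct": round(protein_frac * factor * 100, 2),
    }

# Whole milk at an assumed 3.25 percent fat and the roughly 3 percent
# "true" protein cited in the report, concentrated 2X:
print(concentrate(100, 0.0325, 0.03, 2.0))
# {'mass_lb': 50.0, 'fat_pct': 6.5, 'protein_pct': 6.0}
```

Under these assumptions, 100 pounds of whole milk becomes 50 pounds of “2X” concentrate with double the fat and protein percentages, which is consistent with the transportation savings the report describes.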
The ultra-filtration process for milk, developed in the 1970s, removes most of the fluid components, leaving a high concentration of milk protein that allows cheese and other manufacturers to produce their products more efficiently. No specific data exist on the amount of ultra-filtered milk imports because these imports fall under the broader U.S. Customs Service classification of milk protein concentrate. Exporters of milk protein concentrates face minimal U.S. import restrictions, and the Food and Drug Administration (FDA) believes the milk protein concentrates pose minimal safety risks. Similarly, there is little data on the amount and use of domestically produced ultra-filtered milk in U.S. cheese-making plants. According to the Department of Agriculture and state sources, a total of 22 dairy plants nationwide and five large dairy farms in New Mexico and Texas produce ultra-filtered milk. The plants primarily produce and use ultra-filtered milk in the process of making cheese. The five farms transport their product primarily to cheese-making plants in the Midwest, where most is used to make standardized cheeses. FDA relies on its own inspections, and on inspections it contracts with 37 states to perform, to enforce its standards of identity regulations. In addition to these federally funded inspections, some states conduct their own inspections of cheese plants for compliance with standards of identity requirements under state law.
Medicare provides health insurance coverage for approximately 37 million elderly and disabled people under two parts: part A, primarily hospital insurance, and part B, supplementary insurance. HCFA, which administers the Medicare program, contracts with insurance companies (called “fiscal intermediaries” for part A and “carriers” for part B) to process, review, and pay claims for covered services. Payments for medical supplies are made under either of Medicare’s two parts. Medical supply claims submitted by hospitals or other institutions, such as nursing homes or home health agencies, are paid by 43 local fiscal intermediaries. Medical supply claims submitted by noninstitutional providers, such as physicians or medical supply companies, are paid by carriers. Thus, the same supply item can be billed to Medicare for an individual under two completely different payment systems, one for part A and another for part B. Under part A, the payment is generally made on the basis of reasonable costs. Under part B, the payment is made using a fee schedule established by HCFA. Historically, fraud and abuse have plagued Medicare part B, and HCFA has recently reformed its operations. In October 1993, acting under specific statutory authority, HCFA started transferring carrier claims processing responsibility for durable medical equipment (DME); prosthetics; orthotics; and medical supplies, including surgical dressings, from 32 local carriers to 4 regional carriers. These carriers are commonly referred to as durable medical equipment regional carriers (DMERC). In March 1994, after lobbying by suppliers and manufacturers, among others, HCFA greatly expanded its surgical dressing benefit, broadening the types of dressings covered and the conditions under which they would be covered. For example, the benefit was expanded to cover payment for various types and sizes of gauze pads that Medicare previously did not cover. 
Also, the duration of coverage was extended from 2 weeks to whatever is considered medically necessary. DME claims have long been abused, in part, because of fundamental weaknesses in Medicare payment controls. In response to these weaknesses, HCFA has recently implemented significant changes in the processing of DME claims to reduce Medicare’s vulnerability to this particular fraud and abuse. Before DME claims processing was transferred to the 4 regional carriers in 1993, each of the 32 carriers paid DME claims, which represented a small part of the total claims each carrier processed. Under this process, HCFA did not require its contractors to implement basic controls before payment that would identify and set aside for review those claims with unusually high per-patient expenditures or improbably large quantities of supplies. Without such controls, some DME suppliers billed for equipment never delivered, higher cost equipment than delivered, or totally unnecessary equipment or supplies. Further, suppliers frequently engaged in contractor shopping. Although they might deliver equipment or supplies to beneficiaries in one state, they would bill a contractor in another state because that contractor paid more for the items delivered or had relatively weak payment controls for the equipment or supply items. These weaknesses explain why Medicare contractors processed, without questioning, claims that later proved to be fraudulent or abusive. For example, as reported by the OIG, Medicare paid
- an estimated $20 million in claims for unneeded nutritional supplements;
- approximately $5.2 million in claims for oxygen concentrators, nebulizers, medications, and tests either not needed or not delivered;
- approximately $500,000 in claims for unneeded transcutaneous electrical nerve stimulators; and
- $7 million in claims for orthotic body jackets that should not have been paid. 
Establishing four regional carriers to process and oversee DME claims, including surgical dressings, eliminated some of the weaknesses that allowed prior abuses to flourish. The regional carriers are better able to prevent Medicare payments for unusually high medical supply claims for two key reasons. First, the ability of suppliers to shop for contractors with the highest payments and weakest controls has been eliminated. With only four regional carriers, HCFA has better standardized the amount that Medicare pays for medical supplies and the controls used to detect and prevent payment of problem claims. Claims must be submitted to the regional carrier responsible for payments in the state where the beneficiary resides rather than the carrier allowing the highest payment. Second, medical supply and surgical dressing claims can receive more attention from regional carriers than local carriers because these claims are a larger portion of the regional carriers’ workloads. As a result, the regional carriers should be better able to detect and prevent inappropriate payments for abnormally expensive surgical dressing claims. HCFA’s recent efforts to prevent abuses in medical supply claims apply only to part B claims submitted to regional carriers, which represent half of Medicare’s total medical supply payments. Claims processed by fiscal intermediaries are still subject to some of the same fraud and abuse problems that have historically plagued medical supply claims. Further, despite the improvements, medical supply claims submitted to the regional carriers are still subject to significant abuse. Fiscal intermediaries pay medical supply claims without knowing specifically what they are being asked to pay for on behalf of beneficiaries. The claims submitted by providers have no detailed information that would allow fiscal intermediaries to assess the claims’ reasonableness. 
This lack of detail exists because HCFA guidance allows providers to bill all medical supplies under 10 broad codes; billed items are not listed by type or amount. A code frequently used to record medical supplies is code 270 (medical/surgical supplies and devices-general classification), which we found included many different items, such as a $21,437 pacemaker, a $.75 sterile sponge, and even daily rental charges of $59 for an aqua pad. Consequently, unless fiscal intermediaries identify these claims for review and request additional documentation before payment, they will pay for the claims without knowing what the specific purchase was or whether it was covered or medically necessary. For example, a fiscal intermediary processed a code 270 claim for more than $21,000 without any review. At our request, the fiscal intermediary asked the provider to submit medical records and a list of items billed under this claim. After the fiscal intermediary reviewed the documentation to support this claim, it denied more than $13,000 in charges because the medical records contained no doctor’s orders for the billed items. In total, we requested the fiscal intermediary to obtain the medical records and an itemized list of supplies supporting 85 high-dollar medical supply claims submitted by 38 providers during a 1-month period. All of these claims had been processed without any review. The results of the fiscal intermediary’s subsequent review are as follows:
- Eighty-nine percent of the claims for which documentation was received and reviewed (42 of 47) should have been totally or partially denied.
- Almost 61 percent of the dollars billed for medical supplies ($193,147 of $316,824) should have been denied for various reasons, including, among others, items not medically necessary, items not covered by Medicare or covered as part of routine or administrative costs, no documentation of supplies used, no doctor’s orders, and no itemized list of supplies. (See app. II for detailed information.)
- The 45 percent of claims for which documentation was not returned (38 of 85), totaling $487,412, were subsequently denied.
- One claim was determined to be potentially fraudulent because the beneficiary’s condition required none of the $2,404 in medical supplies billed. A further review, by the fiscal intermediary’s fraud and abuse unit, of the same provider’s claims for this beneficiary for the previous 5 months resulted in the identification of an additional $20,393 in potentially fraudulent medical supply charges.

Fiscal intermediaries obtain similar or better results when they conduct their own prepayment reviews of medical supply claims. For example, a fiscal intermediary used a computerized payment control to identify all medical supply claims (code 270) in excess of $500 submitted between October and December 1993. After reviewing documentation supporting the claims, the fiscal intermediary denied 69 percent of the dollars billed ($59,542 of $86,046). The Omnibus Budget Reconciliation Act of 1993 (OBRA 1993) partially addressed the problem of providers not submitting documentation that would allow fiscal intermediaries to adequately assess medical supply claims. OBRA 1993 essentially provided for certain supplies, including surgical dressings, to be paid on the basis of the fee schedule that regional carriers use for the part B program. As a result, providers must submit to fiscal intermediaries claims that itemize the specific supplies and quantities being billed. Because the provision does not apply to all medical supplies, many other types of medical supplies are still billed using broad codes that do not adequately describe the type and amount of such supplies. Nor does the provision apply to surgical dressings supplied by a home health agency. 
As a result, home health agencies, which billed Medicare for almost half a billion dollars of medical supplies in fiscal year 1994, can continue to submit claims for surgical dressings without the detailed itemization required of other types of providers billing for these items. For Medicare part B claims, the regional carriers have not adopted important fraud and abuse controls for many surgical dressing items. Specifically, the 29 surgical dressings covered by the expanded Medicare surgical dressing benefit have no formal medical policies specifying the conditions under which payment is to be made. Without these policies, regional carriers cannot implement systematic controls to identify questionable claims for review. As a result, they pay many high-dollar, high-volume claims without review. We found that the utilization level—the number of dressings billed per beneficiary—was, on average, nearly three times higher for the newly covered dressings—that is, those for which no formal medical policies apply. Moreover, on average, the dressings that have no medical policies exceeded the expected utilization level, as determined by recommended industry and draft regional carrier standards. In some cases, the average number of dressings billed per beneficiary was four times greater than expected. Formal medical policies for the newly covered dressings cannot be adopted until the surgical dressing industry and others have been allowed to comment on them. HCFA expanded surgical dressing coverage and instructed regional carriers to pay for newly covered surgical dressings before the carriers had a chance to develop new medical policies. As a result, most claims for surgical dressings for which no medical policies apply are being paid and will continue to be paid without a routine review to determine whether the amount of dressings billed is reasonable or medically necessary. 
HHS estimates that this process will be completed and medical policies will be effective October 1, 1995. We asked officials at one regional carrier to identify high-dollar claims it paid. While the claims the carrier identified for us were subject to some review before payment, the review only applied to those dressings that had a formal medical policy. As a result, thousands of dollars were paid for surgical dressings that were not needed, and the claims escaped review because the dressings had no formal medical policies. For example, in the case of one beneficiary, the carrier—over 3 months and on the basis of a formal medical policy—had denied over $8,500 worth of claims for dressings and sterile saline before paying $23,000. However, in performing the review we requested, the carrier determined that only $1,650 of the $23,000 for dressings should have been paid because the beneficiary's condition did not appear to justify the use of large quantities of dressings. The $23,000 had been paid without review for medical necessity because no formal medical policies applied to most of the surgical dressings. Therefore, no internal policies were in place to trigger a review of these dressings. Without such policies, suppliers have exploited Medicare with little risk of ever having to repay the program. Following are examples of this exploitation:
- One supplier regularly billed Medicare for 60 or more transparent films per beneficiary per month. For some beneficiaries the supplier billed for 120 or more films a month. Recommended industry standards suggest the need for no more than 24 films per beneficiary per month.
- Another supplier billed Medicare an average of 268 units of tape per beneficiary during a 15-month period. The average for all suppliers was 60 units during the 15-month period. Some beneficiaries received between 180 and 720 units of tape in 1 month. 
Using a 10-yard roll of tape, a common industry length, these beneficiaries would have been wrapped in 60 to 240 yards of tape per day. Supplier abuse is not limited to surgical dressings; other medical supply items for which no formal policies or systematic controls apply have also been exploited:
- At least four suppliers regularly billed Medicare for 30 or more drainage bottles a month for each beneficiary. This is 90 times more than the proposed standard of one bottle every 3 months. The number of drainage bottles billed by these suppliers was 79 percent of all bottles billed to the regional carrier.
- One supplier billed Medicare an average of nine urinary leg bags per beneficiary a month. For some beneficiaries, the supplier billed for one leg bag a day or 15 times more than the proposed standard of two leg bags a month. In total, this supplier billed Medicare for 50,834 leg bags or 21 percent of all leg bags billed to the regional carrier over 15 months.
Medicare can pay for the same item twice because it does not have effective tests to determine whether both regional carriers and fiscal intermediaries are paying for the same surgical dressings, medical supplies, and other items. Surgical dressings and many medical supplies can be billed to either fiscal intermediaries or regional carriers. If suppliers submit claims for the same items to both types of contractors, only one should pay the claim. For example, if a fiscal intermediary pays a nursing home for surgical dressings, a regional carrier should not pay the supplier for the same dressings. Conversely, if a regional carrier pays a supplier for surgical dressings, the fiscal intermediary should not pay the nursing home that used the dressings. Medicare does not have an effective control to prevent both types of contractors from paying for the same medical supplies or surgical dressings. 
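A prepayment utilization screen of the kind the supplier-abuse examples above call for can be sketched as follows. This is an illustration only, not an actual contractor system; the claim record layout and field names are assumptions, while the thresholds reflect the recommended standards cited in the text.

```python
# Hypothetical sketch of a prepayment utilization screen built on the
# recommended standards cited in the report. Record layout is assumed.

MONTHLY_STANDARDS = {
    "transparent_film": 24,  # recommended industry standard per month
    "urinary_leg_bag": 2,    # proposed standard per month
}

def flag_for_review(claims):
    """Return claims whose monthly units exceed the applicable
    standard and therefore warrant manual review before payment."""
    flagged = []
    for claim in claims:
        standard = MONTHLY_STANDARDS.get(claim["item"])
        if standard is not None and claim["units"] > standard:
            flagged.append(claim)
    return flagged

claims = [
    {"beneficiary": "B1", "item": "transparent_film", "units": 120},
    {"beneficiary": "B2", "item": "urinary_leg_bag", "units": 30},
    {"beneficiary": "B3", "item": "transparent_film", "units": 12},
]
for claim in flag_for_review(claims):
    print(claim["beneficiary"], claim["item"], claim["units"])
```

A screen like this does not deny claims by itself; it only routes statistical outliers to manual review, which is the control the report finds missing for items without formal medical policies.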
As part of Medicare’s claims processing system, all claims received by contractors are compared with historical beneficiary data to verify eligibility for payment and benefits. HCFA uses this system to conduct many types of computerized controls to determine if payment for the claims should be approved or rejected. The system does not check, however, to see if items paid by regional carriers have already been paid by fiscal intermediaries or whether items paid by fiscal intermediaries have already been paid by regional carriers. We identified a case in which a computerized control for duplicate items would have prevented Medicare from paying twice for the same item. In this case, the fiscal intermediary paid a nursing home for two bedside drainage bags used by a patient during a 1-month stay. A regional carrier also paid a supplier for 30 drainage bags allegedly provided to the same patient while in the nursing home. If a duplicate payment control had existed, the regional carrier would not have made the duplicate payment. Medicare’s fee schedule payments for surgical dressings are generally excessive when compared with wholesale prices, prices paid by the Department of Veterans Affairs (VA), and even retail prices. Overall, we estimate that HCFA could save substantial amounts if its fee schedule was calculated on the basis of lower available prices. For example, as shown in table 1, if HCFA paid wholesale prices for 44 surgical dressings, total savings would be almost $20 million or almost 35 percent of what it now pays. Potential savings for just nine dressings would be more than $9 million if HCFA paid at the lowest rate, that which VA paid for dressings. We even identified potential savings of more than $2 million for nine surgical dressings if HCFA paid at the lowest retail rates found at four Los Angeles-area drug stores. HCFA’s method of calculating the fee schedule for surgical dressings caused these high payments. 
OBRA 1993 required HCFA to establish a fee schedule for surgical dressings by computing the average historical charges for the dressings. Because of the expansion of the surgical dressing benefit, however, HCFA did not have data on historical charges. Instead, HCFA used a gap-filling process to establish the fee schedule: HCFA used retail surgical dressing supply catalogs to create a price list for each type of covered surgical dressing. The price of the median-priced dressing for each type became the fee schedule price. For example, HCFA identified 13 different alginate dressings 16 square inches or less (HCFA Common Procedure Code K0196). The retail prices of the dressings ranged from $3.14 to $19.07. The fee schedule price was set at $6.62, the median-priced or sixth dressing on HCFA’s list. The lowest wholesale price for this type of dressing is $1.88. If HCFA makes a mistake in calculating the fee schedule, it can correct the mistake (for example, by using wholesale prices instead of retail prices). However, HCFA may not change the methodology for determining the fee schedule nor may it adjust the fee schedule if dressing prices decrease. Therefore, if, as one HCFA official told us, the prices of surgical dressings fall as more manufacturers produce the many types of surgical dressings that HCFA now pays for, HCFA cannot lower the fee schedule to reflect the change in market condition. Instead, Medicare will pay a price that is even higher, relative to the market prices, than it pays today. For certain DME items—but not for surgical dressings and other medical supplies—the Secretary of HHS may adjust prices that are inherently unreasonable. In these cases, the authority is very limited and involves a complex set of procedures that can take a long time to complete. 
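The gap-filling computation described above is simple to state. A minimal sketch follows; all prices are illustrative except the low ($3.14), high ($19.07), and resulting fee ($6.62) values the report gives for alginate dressings 16 square inches or less (code K0196).

```python
# Sketch of HCFA's gap-filling method as described in the text: collect
# retail catalog prices for a dressing category, sort them, and use the
# median-priced dressing as the fee schedule price. Interior prices in
# the list below are invented for illustration.

def gap_fill_fee(catalog_prices):
    """Fee schedule price = price of the median-priced dressing."""
    ordered = sorted(catalog_prices)
    return ordered[len(ordered) // 2]  # middle item of the sorted list

k0196_prices = [3.14, 3.95, 4.60, 5.25, 5.90, 6.30, 6.62,
                7.40, 8.85, 10.50, 12.75, 15.80, 19.07]
print(gap_fill_fee(k0196_prices))  # 6.62 -- versus a $1.88 lowest wholesale price
```

Because the median is taken over retail catalog prices, the resulting fee can sit far above what large-volume buyers actually pay, which is the report's central pricing finding.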
For example, it took HCFA nearly 3 years to reduce the price it was paying for home blood glucose monitors from a nationwide range of $144 to $211 down to $58.71, even though they were widely available for about $50 and, in some cases, provided free as a means of obtaining customers for the disposable items associated with this test equipment. Because of the time and resources involved, HCFA only uses this process for one item at a time. Before 1987, individual Medicare carriers had the authority to increase or decrease prices to reflect local market conditions. The process for doing so, which included notifying area suppliers and publishing the new prices, could be completed in less than 90 days. If HCFA or the carriers had the authority to adjust excessive prices in a timely manner, they could save millions in program dollars. A HCFA official told us, however, that the agency devotes no resources to routine monitoring of medical equipment and supply prices. As a result, discrepancies between what Medicare pays and what other large-volume buyers pay go undetected. HCFA recently created a framework that eventually will allow it to identify and begin addressing fraud and abuse associated with medical supply claims. For the first time, HCFA will have data to begin assessing the size and scope of fraud and abuse and its contractors' performance in addressing them. In addition, these data will allow HCFA to assess options for addressing program weaknesses. HCFA's consolidation of DME and medical supply claims processing at four regional carriers provides comprehensive national data—that were not available previously—on utilization and payments. These data will allow HCFA to identify, on a nationwide basis, DME and medical supplies that may be subject to overutilization and inappropriate billing. In 1993, HCFA also developed a programwide emphasis on data analysis. 
Calling its approach focused medical review, HCFA required contractors to begin identifying general spending patterns and trends that would allow them to identify potential problems. Fiscal intermediaries have started implementing this approach and have recently begun compiling and analyzing claims payment and utilization data. So far, some intermediaries have identified the different types and number of claims that Medicare may be inappropriately paying. For example, one type of review conducted by an intermediary we visited resulted in 85 percent of the claims reviewed during a 1-month period being denied—a total of $5.8 million in program savings. Moreover, some intermediaries have estimated the dollars that Medicare can potentially save by tightening prepayment review controls. The intermediary we visited identified eight other problem areas, in addition to those that it was already reviewing, that should be reviewed because of such things as precipitous increases in utilization rates. This intermediary estimated potential savings of $57 million by implementing the additional reviews, but it did not have the resources to do so. Armed with its new information from DMERCs and focused medical review program reports, HCFA is now much better positioned than in past years to provide HHS, the Office of Management and Budget, and the Congress with concrete information on contractor activities that save program dollars. This information could include, for example, explicit documentation on the savings achievable from efforts to stop paying unwarranted or overpriced claims. HCFA has taken some initial steps to address Medicare medical supply and surgical dressing payment abuses. Transferring the processing to regional carriers—and the accompanying greater standardization of payment policies and better information to detect problem claims—are important steps in combatting fraud and abuse. 
Medicare’s vulnerability to overpaying for surgical dressing claims will persist, however, for several reasons: Many claims for surgical dressings lack sufficient detail for Medicare fiscal intermediaries to assess what they are being asked to pay for. Medicare contractors have not yet developed the administrative capabilities to detect questionable claims for many surgical dressings. Though the same patient may receive surgical dressings paid by either a part A intermediary or part B carrier, HCFA has no controls to detect duplicate bills. Medicare’s payment rates for dressings are high compared with wholesale and many retail prices. The Secretary should direct the Administrator of HCFA to require that bills submitted to fiscal intermediaries itemize supplies; develop and implement prepayment review policies as part of the process of implementing any new or expanded Medicare coverage; and establish procedures to prevent duplicate payments by fiscal intermediaries and carriers. The fee schedule approach to setting prices provides a good starting point for setting appropriate Medicare prices. HCFA, however, needs greater authority and flexibility to quickly adjust fee schedule prices when market conditions warrant such changes. To allow Medicare to take advantage of competitive prices, the Congress should consider authorizing HCFA or its carriers to promptly modify prices for DME and other medical supplies. For this to work effectively, however, HCFA or the carriers must devote adequate resources to routine price monitoring. HHS commented on a draft of our report in a letter dated July 18, 1995 (see app. VI). In an overall comment, HHS stated that several ongoing Medicare initiatives involving the four regional carriers are already addressing the problems highlighted in this report. 
Specifically mentioned were the use of information from processed claims to identify for prepayment review suspicious suppliers and high-dollar, high-volume claims; prepayment screens to detect egregious utilization of a supply item; and comprehensive medical reviews of suppliers whose billing patterns indicate possible overutilization. As we have stated in this report, a number of HCFA initiatives show promise. We specifically mentioned that transferring the claims processing for DME and supplies to four regional carriers gives HCFA the ability to identify overutilization and inappropriate billing. We also mentioned that the programwide emphasis on data analysis through focused medical review identifies potential problem areas. While such initiatives are promising, we do not believe that they or the other promising activities of the four regional carriers address all the problems identified in this report. For example, HHS disagreed with our first recommendation that bills submitted to fiscal intermediaries itemize supplies. HHS stated that it had assessed the benefit of requiring providers to itemize home health supply bills and found that the additional contractor and provider cost and burden outweighed the value of the itemization. As an alternative, HHS stated that it is assessing the benefit of requiring fiscal intermediaries to suspend for prepayment review those bills with excessive charges. Also, HHS believed that it was important to note that HCFA does not pay billed charges for this type of claim. Without itemized bills, fiscal intermediaries cannot determine what type or amount of supplies they pay for. While it is true that HCFA does not pay the billed charges for this type of claim, to conclude that the cost settlement process will somehow account for all overpayments is inaccurate. Overpayments will still be made for unnecessary or excessive supplies or those not covered by Medicare. 
HHS concurred with our second recommendation and said that it had acted to implement it. The action described, however, appears to be in response to the past expansion of surgical dressing benefits rather than plans for new or expanded Medicare coverage. For example, although agreeing that prepayment edits should be used to prevent inappropriate payment when coverage policy changes, HHS stated that a revised regional medical review policy for the recently expanded surgical benefits will be effective October 1, 1995. HHS also stated that it is important to ensure that the regional carriers have the flexibility to establish their own edits based on aberrancies found in their region. While the policies on the expanded surgical dressing benefit need to be implemented as soon as possible to protect benefit dollars, our recommendation would require that medical policies be developed and approved before any further changes in benefit coverage are made. Without medical policies, carriers cannot establish prepayment edits for items newly covered because of changes in Medicare benefits. As we discussed, regional carriers have been paying claims for 29 newly covered dressings for nearly a year and a half without medical policy or prepayment edits—that is, without a review of the claims’ reasonableness or medical necessity. Concerning our recommendation that procedures be established to prevent duplicate payments by carriers and intermediaries, HHS stated that identifying duplicate claims is difficult when they are sent to different part A and part B contractors because the claims are submitted with different codes and supplier numbers and then processed using different payment schedules and processing systems. In what it described as an effective alternative, HHS stated that HCFA currently uses “conflict edits” through the Common Working File system to alert contractors to conflicting payment situations. 
For example, if part B is being billed for outpatient supplies for a specific date and part A receives an inpatient claim for the same patient covering the same period, the system generates an alert. Questionable claims are then manually reviewed before payment, according to HHS. In the future, with Medicare’s new claims processing system, the Medicare Transaction System, HHS stated that it will be simpler to identify duplicate claims because the same system will process part A and part B claims in the same format. As a result of OBRA 1993, surgical dressings can be identified with the same codes regardless of which contractor, part A or part B, processes and pays a claim. Combining a common identification code with the Common Working File’s ability to identify claims for which part A and part B contractors both receive a claim for the same beneficiary covering the same time period allows contractors to easily identify a potential duplicate payment. This ability applies to all medical supplies that use the same identification code. For example, we identified one case in which Medicare paid twice for a supply item using the same code for both the part A and part B contractor. The Common Working File duplicate payment alert, or conflict edit, does not entirely prevent Medicare from paying for the same item twice. The system generates an alert only when an institutional provider, such as a nursing home or home health agency, has billed the intermediary before the supplier has billed the regional carrier. More importantly, officials at the four DMERCs told us that they do not investigate or review claims identified by the duplicate payment alert. Instead, they pay the claims without reviewing for duplication. Concerning the matter for congressional consideration, HHS has stated that on several occasions since 1987, it has submitted legislative proposals to the Congress to simplify the process it may use to adjust or limit fee schedule amounts. 
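A cross-contractor duplicate check of the kind discussed above can be sketched as follows. The claim records, field names, and day-numbered service periods are hypothetical; only the matching logic (beneficiary, common supply code, overlapping period) comes from the report's description.

```python
# Hypothetical sketch of a duplicate-payment edit: flag incoming part B
# (carrier) claims that match an already-paid part A (intermediary)
# claim on beneficiary, supply code, and overlapping service period.
# OBRA 1993's common identification codes make this matching possible.

def periods_overlap(p1, p2):
    """True when two (start_day, end_day) service periods overlap."""
    return p1[0] <= p2[1] and p2[0] <= p1[1]

def potential_duplicates(part_a_paid, part_b_incoming):
    dups = []
    for b in part_b_incoming:
        for a in part_a_paid:
            if (a["beneficiary"] == b["beneficiary"]
                    and a["code"] == b["code"]
                    and periods_overlap(a["period"], b["period"])):
                dups.append(b)
                break
    return dups

# The drainage-bag case from the text: an intermediary paid a nursing
# home, then a carrier was billed for the same patient and month.
part_a = [{"beneficiary": "B1", "code": "drain_bag", "period": (1, 30)}]
part_b = [{"beneficiary": "B1", "code": "drain_bag", "period": (1, 30)},
          {"beneficiary": "B2", "code": "drain_bag", "period": (1, 30)}]
print(len(potential_duplicates(part_a, part_b)))  # 1 claim held for review
```

Unlike the Common Working File alert HHS describes, a symmetric check like this fires regardless of which contractor was billed first, and its hits would be held for review rather than paid.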
HHS also made a number of technical and other comments that we considered in finalizing this report. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies to the Secretary of Health and Human Services and other interested parties. Please call me on (202) 512-7119 if you or your staff have any questions about this report. Major contributors are listed in appendix VII. To identify the circumstances allowing the payment of unusually high surgical dressing claims, we interviewed OIG officials from HHS and reviewed past OIG and GAO reports on Medicare fraud and abuse problems. We also visited Transamerica, one of the many carriers that processed medical supply claims before the creation of the four regional carriers. This carrier was judgmentally selected. To determine the adequacy of Medicare’s internal controls, we visited Blue Cross of California, a fiscal intermediary that processes claims submitted by institutional providers; and CIGNA, one of the four regional carriers responsible for processing durable medical equipment (DME) and medical supply claims submitted by suppliers. These contractors were judgmentally selected. To supplement work performed at these locations and broaden our areas of analysis, we obtained information on medical supply and surgical dressing claims and payment safeguards from the remaining 42 fiscal intermediaries and three regional carriers. We also discussed the adequacy of contractors’ internal controls and obtained information about these controls from HCFA officials at HHS. In addition, we requested the two contractors that we visited, Blue Cross of California and CIGNA, to review medical records and other documentation for selected high-dollar medical supply claims to determine whether the records supported the need for services or items billed to Medicare. 
Further, we obtained recommended utilization standards from a trade association for medical supply distributors and a national association of specialty nurses for wound, ostomy, and continence care and compared the standards with actual utilization levels found on claims submitted by suppliers. We compared the fee schedule that Medicare uses to pay suppliers of surgical dressings with prices obtained from a wholesale surgical dressing supplier, four retail drugstores, the Department of Veterans Affairs, and a HCFA-generated surgical dressing price list. We also reviewed HCFA procedures to determine if any would prevent regional carriers and fiscal intermediaries from paying duplicate claims for medical supplies and surgical dressings. We performed our work between May 1994 and June 1995 in accordance with generally accepted government auditing standards.
Surgical dressing categories:
- Elastic bandage, per roll (e.g., compression bandage)
- Alginate dressing, wound cover, pad size 16 square inches or less, each dressing
- Alginate dressing, wound cover, pad size more than 16 but less than or equal to 48 square inches, each dressing
- Alginate dressing, wound cover, pad size more than 48 square inches, each dressing
- Alginate dressing, wound filler, per 6 inches
- Composite dressing, pad size 16 square inches or less, with any size adhesive border, each dressing
- Composite dressing, pad size more than 16 but less than or equal to 48 square inches, with any size adhesive border, each dressing
- Composite dressing, pad size more than 48 square inches, with any size adhesive border, each dressing
- Contact layer, less than 16 square inches, each dressing
- Contact layer, more than 16 but less than or equal to 48 square inches, each dressing
- Contact layer, more than 48 square inches, each dressing
- Foam dressing, wound cover, pad size 16 square inches or less, without adhesive border, each dressing
- Foam dressing, wound cover, pad size more than 16 square inches but less than or equal to 48 square inches, without adhesive border, each dressing
- Foam dressing, wound cover, pad size more than 48 square inches, without adhesive border, each dressing
- Foam dressing, wound cover, pad size 16 square inches or less, with any size adhesive border, each dressing
- Foam dressing, wound cover, pad size more than 16 square inches but less than or equal to 48 square inches, with any size adhesive border, each dressing
- Foam dressing, wound cover, pad size more than 48 square inches, with any size adhesive border, each dressing
- Foam dressing, wound filler, per gram
- Gauze, nonimpregnated, pad size 16 square inches or less, without adhesive border, each dressing
- Gauze, nonimpregnated, pad size more than 16 square inches but less than or equal to 48 square inches, without adhesive border, each dressing
- Gauze, nonimpregnated, pad size more than 48 square inches, without adhesive border, each dressing
- Gauze, nonimpregnated, pad size 16 square inches or less, with any size adhesive border, each dressing
- Gauze, nonimpregnated, pad size more than 16 square inches but less than or equal to 48 square inches, with any size adhesive border, each dressing
- Gauze, nonimpregnated, pad size more than 48 square inches, with any size adhesive border, each dressing
- Gauze, impregnated, other than water or normal saline, pad size 16 square inches or less, without adhesive border, each dressing
- Gauze, impregnated, other than water or normal saline, pad size more than 16 square inches but less than or equal to 48 square inches, without adhesive border, each dressing
- Gauze, impregnated, other than water or normal saline, pad size more than 48 square inches, without adhesive border, each dressing
- Gauze, impregnated, water or normal saline, pad size 16 square inches or less, without adhesive border, each dressing
- Gauze, impregnated, water or normal saline, pad size more than 16 square inches but less than or equal to 48 square inches, without adhesive border, each dressing
- Gauze, impregnated, water or normal saline, pad size more than 48 square inches, without adhesive border, each dressing
- Hydrocolloid dressing, wound cover, pad size 16 square inches or less, without adhesive border, each dressing
- Hydrocolloid dressing, wound cover, pad size more than 16 square inches but less than or equal to 48 square inches, without adhesive border, each dressing
- Hydrocolloid dressing, wound cover, pad size more than 48 square inches, without adhesive border, each dressing
- Hydrocolloid dressing, wound cover, pad size 16 square inches or less, with any size adhesive border, each dressing
- Hydrocolloid dressing, wound cover, pad size more than 16 square inches but less than or equal to 48 square inches, with any size adhesive border, each dressing
- Hydrocolloid dressing, wound cover, pad size more than 48 square inches, with any size adhesive border, each dressing
- Hydrocolloid dressing, wound filler, paste, per fluid ounce
- Hydrocolloid dressing, wound filler, dry form, per gram
- Hydrogel dressing, wound cover, pad size 16 square inches or less, without adhesive border, each dressing
- Hydrogel dressing, wound cover, pad size more than 16 square inches but less than or equal to 48 square inches, without adhesive border, each dressing
- Hydrogel dressing, wound cover, pad size more than 48 square inches, without adhesive border, each dressing
- Hydrogel dressing, wound cover, pad size 16 square inches or less, with any size adhesive border, each dressing
- Hydrogel dressing, wound cover, pad size more than 16 square inches but less than or equal to 48 square inches, with any size adhesive border, each dressing
- Hydrogel dressing, wound cover, pad size more than 48 square inches, with any size adhesive border, each dressing
- Hydrogel dressing, wound filler, paste, per fluid ounce
- Hydrogel dressing, wound filler, dry form, per gram
- Specialty absorptive dressing, wound cover, pad size 16 square inches or less, without adhesive border, each dressing
- Specialty absorptive dressing, wound cover, pad size more than 16 square inches but less than or equal to 48 square inches, without adhesive border, each dressing
- Specialty absorptive dressing, wound cover, pad size more than 48 square inches, without adhesive border, each dressing
- Specialty absorptive dressing, wound cover, pad size 16 square inches or less, with any size adhesive border, each dressing
- Specialty absorptive dressing, wound cover, pad size more than 16 square inches but less than or equal to 48 square inches, with any size adhesive border, each dressing
- Specialty absorptive dressing, wound cover, pad size more than 48 square inches, with any size adhesive border, each dressing
- Transparent film, 16 square inches or less, each dressing
To estimate total 1995 surgical dressings expenditures, we multiplied the number of surgical dressings purchased by the regional carriers in 1994 by the 1995 fee schedule prices and the other comparison prices. For each category of surgical dressing identified by HCFA Common Procedure Codes (HCPC), we obtained the total units of surgical dressings purchased by all four regional carriers from the regional carrier responsible for compiling and analyzing DME claim data for all four regional carriers. We used this information in conjunction with surgical dressing pricing data to make several pricing comparisons. For all comparisons, estimated expenditures under HCFA's surgical dressing fee schedule were calculated by multiplying the number of units purchased in 1994 by the 1995 fee schedule price for that code. These calculations were done for each HCPC and then totaled to get overall expenditures. Table V.1 illustrates our comparison of fee schedule prices with wholesale prices. It ranks the categories of surgical dressings from the category in which the fee schedule is the furthest above the wholesale dressing price to the category in which the fee schedule is the furthest below the wholesale price. 
We obtained wholesale pricing information from a national medical supplier's 1994-1995 mail order catalog. We identified a dressing in 44 of the HCPC surgical dressing categories. We calculated a per dressing, or unit, price for each of the 44 categories by taking the best wholesale price and dividing it by the number of dressings, or units, that would be provided at that price. The prices for each category were multiplied by the number of units of dressings purchased in that category in 1994 to get total expenditures in each category. We used these data to determine what total expenditures would be if HCFA paid wholesale prices rather than the fee schedule prices. As table V.1 indicates, HCFA would pay almost $20 million less for surgical dressings in the 44 categories if it paid the lower wholesale prices.
[Table V.1: Potential savings from lowest wholesale purchases, by surgical dressing category. Table not reproduced here.]
Before 1995, tape was recorded as HCPC A4454 and the unit of purchase was a roll of tape. We used the pricing and utilization data for HCPC A4454 to estimate 1995 expenditures. Table V.2 illustrates our comparison of fee schedule prices with the lowest available retail prices. The table ranks the categories of surgical dressings from the dressing category in which the fee schedule is the furthest above the lowest retail dressing price to the category in which the fee schedule is the furthest below the lowest retail price. We used the surgical dressing price lists HCFA developed to establish the surgical dressing fee schedule prices. HCFA had a price list for 44 surgical dressing categories with prices stated at the 1992 base year price. We identified the lowest price dressing in each of the 44 surgical dressing categories and inflated the prices to 1995 levels using the inflation factors established by the Congress. 
We then multiplied the lowest retail prices for each category by the number of units purchased in those categories in 1994. We totaled the expenditures in all categories and compared this figure with what HCFA would pay using the 1995 fee schedule. As table V.2 illustrates, HCFA would pay over $22 million less for surgical dressings in the 44 categories if it paid the lowest retail price.
[Table V.2: Potential savings from lowest retail purchases, by surgical dressing category. Table not reproduced here.]
Table V.3 illustrates our comparison of fee schedule prices with the lowest retail drugstore prices for similar dressings. The table ranks the categories of surgical dressings from the category in which the fee schedule is the furthest above the lowest retail drugstore dressing price to the category in which the fee schedule is the furthest below the lowest retail drugstore price. We obtained the actual drugstore prices by visiting and pricing surgical dressings at four retail drugstores in the Los Angeles area. We identified and priced dressings in nine of the surgical dressing categories and determined the lowest per dressing price in each of the nine dressing categories. These figures were then multiplied by the number of units purchased in those categories in 1994. We totaled the expenditures in each category and compared this figure with what HCFA would pay using the 1995 fee schedule. As the table illustrates, HCFA would pay over $2 million less for surgical dressings in the nine categories if it paid the lower drugstore prices.
[Table V.3: Potential savings from actual retail drugstore purchases, by surgical dressing category. Table not reproduced here.]
Before 1995, tape was recorded as HCPC A4454 and the unit of purchase was a roll of tape. However, in 1995 a new HCPC (K0265) and description of tape were developed. We used the pricing and utilization data for HCPC A4454 to estimate 1995 expenditures. 
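Each of the comparisons described above follows the same arithmetic, which can be sketched as follows. The category figures below are hypothetical except for the K0196 fee schedule ($6.62) and lowest wholesale ($1.88) prices cited in the report.

```python
# Sketch of the expenditure comparison behind the pricing tables: for
# each HCPC category, multiply 1994 units purchased by the 1995 fee
# schedule price and by the comparison price, then total each and take
# the difference as potential savings. Units and the second category
# are invented for illustration.

def compare_expenditures(categories):
    fee_total = sum(c["units"] * c["fee"] for c in categories)
    alt_total = sum(c["units"] * c["comparison_price"] for c in categories)
    return fee_total, alt_total, fee_total - alt_total

categories = [
    {"hcpc": "K0196", "units": 100_000, "fee": 6.62, "comparison_price": 1.88},
    {"hcpc": "other", "units": 40_000, "fee": 9.10, "comparison_price": 4.25},
]
fee_total, alt_total, savings = compare_expenditures(categories)
print(round(savings, 2))  # potential savings if the lower prices were paid
```

Summed over the 44 real categories and actual 1994 unit counts, this calculation yields the roughly $20 million wholesale and $22 million retail savings figures the report states.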
Table V.4 illustrates our comparison of fee schedule prices with the price VA pays for similar dressings. The table ranks the categories of surgical dressings from the category in which the fee schedule is the furthest above the VA price to the category in which the fee schedule is the furthest below the VA price. We obtained surgical dressing supply and price lists from one of the VA's Medical Centers in the Los Angeles area. We identified dressings and calculated per dressing, or unit, prices in nine of the surgical dressing categories. We multiplied the lowest per dressing price in each category by the number of units purchased in those categories in 1994. We totaled the expenditures in each category and compared this figure with what HCFA would pay using the 1995 fee schedule. As table V.4 illustrates, HCFA would pay over $9 million less for dressings in the nine categories if it paid VA's lower prices.

Before 1995, tape was recorded as HCPC A4454 and the unit of purchase was a roll of tape. However, in 1995 a new HCPC (K0265) and description of tape were developed. We used the pricing and utilization data for HCPC A4454 to estimate 1995 expenditures.

Edwin P. Stropko, Assistant Director, (202) 512-7118
Donald J. Walthall, Assignment Manager
Sam Mattes, Evaluator-in-Charge
Timothy S. Bushfield, Evaluator
Craig H. Winslow, Senior Attorney
Pursuant to a congressional request, GAO reviewed Medicare payments for medical supplies, focusing on the: (1) circumstances surrounding payments for unusually high surgical dressing claims; and (2) adequacy of Medicare's internal controls to prevent paying such claims. GAO found that: (1) although the Health Care Financing Administration (HCFA) has eliminated medical suppliers' ability to select contractors with the highest payment rates, unwarranted expenditures still persist; (2) reasons for the unwarranted expenditures include inadequate systematic payment controls and noncompetitive payment rates for surgical dressings; (3) many Medicare contractors lack itemized bills, fail to automatically review high-dollar claims for newly covered surgical dressings, and lack a systematic method for detecting duplicate bills submitted to different types of Medicare carriers; (4) Medicare payment rates for new surgical dressings and other medical supplies are considerably higher than wholesale and retail prices; (5) HCFA could curtail these overpayments by establishing procedures to require itemized claims, preventing duplicate payments to Medicare suppliers, and identifying high-dollar, high-volume claims that should be reviewed before payment; (6) the initiative will provide comprehensive national payment data that will allow HCFA and its contractors to detect inappropriate billing and overutilization; and (7) HCFA needs legislative authority to set competitive payment rates that are favorable for high-volume purchasers.
The CWC is a multilateral arms control treaty that bans the development, production, stockpiling, transfer, and use of chemical weapons by member countries and requires the declaration and destruction of those countries' existing chemical weapons stocks and production facilities by 2007, with a possible extension to 2012. The CWC also provides for monitoring of the production and transfer of chemicals at declared commercial facilities. When the CWC entered into force in April 1997, there were 87 member states. As of March 2004, 161 nations are CWC member states, including Libya. Twenty-one countries are signatories but have yet to ratify the treaty. According to the State Department, key nonsignatory states include North Korea and Syria, which are believed to possess or are actively pursuing chemical weapons capabilities.

Upon ratification of the CWC, all member states are required to adopt national laws that criminalize CWC-prohibited activities and establish a national authority to serve as the national focal point for liaison with the OPCW. All members are required to submit initial declarations to the OPCW no later than 30 days after entering into the convention and annual declarations detailing transfer activities of all declared chemicals no later than 90 days after the end of the year. Member states must also declare chemical weapons stockpiles and production facilities, relevant chemical industry facilities, and other related information such as chemical exports and imports. Member states that possess chemical weapons stockpiles and production facilities must destroy them by April 2007. Six member states—Albania, India, Libya, Russia, the United States, and A State Party—have declared their chemical weapons stockpiles and are considered possessor states. Eleven member states have declared chemical weapons production facilities.
The OPCW consists of three organs—the Conference of States Parties, the Executive Council, and the Technical Secretariat—and was established by the convention to implement its provisions. The Technical Secretariat manages the organization's daily operations, including the implementation of the convention's verification measures. The Technical Secretariat serves as the repository for all member states' declarations and relies upon individual member states to submit accurate, timely, and complete declarations. Based on these declarations, the Technical Secretariat inspects and/or monitors member states' military and commercial chemical facilities and activities to ensure their compliance with the CWC. Also, if a member state suspects another member state of conducting activities prohibited by the convention, it may request a challenge inspection of the suspected site(s). As of December 2003, no member state has requested the OPCW to conduct a challenge inspection.

Technical Secretariat inspectors take inventories of the declared stockpiles to verify the accuracy of the declarations and ensure that chemical weapons are not removed. Inspectors continuously monitor the destruction of chemical weapons at operating destruction facilities by observing the receipt of chemical weapons at sites and checking the type and quantity of chemical weapons destroyed. Inspectors also verify the destruction or conversion of declared chemical weapons production facilities by observing the destruction of applicable buildings and production equipment. So that dual-use chemicals are not diverted from their peaceful uses, the Technical Secretariat inspects declared commercial production facilities based on three schedules, or lists of chemicals, contained in the CWC. Commercial facilities that produce more than 200 metric tons of discrete organic chemicals are also subject to inspections.
OPCW inspectors verify that the types of chemicals being produced are consistent with the member states' declarations. Funding for OPCW inspections and other operations comes primarily from the 161 member states' required annual contributions, which are based on the United Nations' scale of assessments. The other major source of funding comes from reimbursements of inspection costs paid by chemical weapons possessor states. The OPCW is partially reimbursed for inspection costs incurred while conducting inspections at declared chemical weapons facilities in those countries. The organization, however, must fund inspections at commercial facilities and any challenge inspections it conducts. The organization's budget for calendar year 2004 is $82.6 million.

Although the CWC has helped to reduce the risks from chemical weapons, member states are experiencing delays in destroying their chemical weapons and implementing key requirements of the treaty. For example, Russia and the United States are unlikely to destroy their declared chemical weapons by the extended deadline of 2012, and many member states have not adopted national laws that fully implement the CWC. In addition, some member states have yet to provide the OPCW with complete and timely declarations detailing their CWC-related activities. We estimate that the United States and Russia are unlikely to meet the 2012 extended CWC deadline for destroying their chemical weapons. Three other possessor states—Albania, India, and A State Party—possess smaller stockpiles and are expected to destroy their stockpiles by the original April 2007 deadline (see table 1). In addition, Libya became the sixth possessor state in February 2004 when it became a member of the CWC and declared that it possessed chemical weapons. According to OPCW officials and CWC possessor states, the destruction of chemical weapons has proven more complex, costly, and time-consuming than originally anticipated.
Russia currently possesses the world's largest declared chemical weapons stockpile at 40,000 metric tons stored at seven sites, as shown in figure 1. The stockpile includes 32,500 metric tons of nerve agent, the most toxic of all known chemical agents, and 7,500 metric tons of blister agent. As we have previously reported, DOD has installed security upgrades at Shchuch'ye and Kizner, the two sites with portable nerve agent munitions. However, a large quantity of Russia's chemical weapons will remain vulnerable to theft or diversion until they are destroyed. As of September 2003, Russia had destroyed 1.1 percent of its total CWC-declared stockpile. Russia did not meet the original treaty deadline to destroy 1 percent of its stockpile by April 2000. In accordance with treaty provisions, Russia requested and received an extension of its 1-percent and 20-percent deadlines from the OPCW. In April 2003, Russia met the 1-percent destruction deadline.

Based on information provided by DOD, we estimate that Russia may not destroy its declared chemical weapons stockpile until 2027. Our analysis is predicated on Russia's complete destruction of its approximately 7,500 metric tons of blister agent by the 2007 deadline and its destruction of the remaining 32,500 metric tons of nerve agent at the U.S.-funded destruction facility at Shchuch'ye. In September 2003, Russia agreed to complete the elimination of all of its nerve agent at the Shchuch'ye destruction facility, which is scheduled to begin operations in 2008. According to DOD, the Shchuch'ye facility may not be operational until 2009. For Russia to meet an extended April 2012 deadline, Russia would have to destroy about 9,100 metric tons of nerve agent per year. Operating at maximum capacity, the facility is estimated to destroy about 1,700 metric tons of nerve agent per year.
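The 2027 estimate follows from simple rate arithmetic. A minimal sketch, using only the figures cited in this report (a 32,500-metric-ton nerve agent stockpile, operations beginning around 2008, and a maximum destruction capacity of about 1,700 metric tons per year):

```python
# Rough check of the estimated completion date for destroying Russia's
# nerve agent at Shchuch'ye, using the figures cited in this report.

nerve_agent_tons = 32_500   # declared nerve agent to be destroyed at Shchuch'ye
tons_per_year = 1_700       # estimated maximum destruction capacity
start_year = 2008           # scheduled start of operations (may slip to 2009)

years_needed = nerve_agent_tons / tons_per_year   # about 19 years
completion_year = start_year + years_needed
print(round(completion_year))  # prints 2027
```

A one-year slip in the start date, or any shortfall below maximum capacity, pushes the estimate correspondingly later.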
At that rate, unless the capacity for destruction is increased or additional destruction facilities are built, the complete destruction of Russia's stockpile may not occur until 2027. (We discuss other options for destroying Russia's nerve agent stockpile later in this report.)

The United States possesses the second largest declared chemical weapons stockpile with 27,771 metric tons, which is stored at eight sites, as shown in figure 2. Currently, the United States is operating three destruction facilities; three additional facilities will be operational in the near future and two more will begin construction. As of December 2003, the United States had destroyed 24 percent of its declared stockpile and met the 1-percent and 20-percent interim deadlines within the treaty time frames. However, the United States requested and received an extension of the 45-percent deadline from April 2004 to December 2007. The United States will not meet the 100-percent April 2007 destruction deadline and may not meet the 2012 deadline, if extended, based on the current schedule. According to DOD, one U.S. chemical weapons destruction facility is not scheduled to complete its destruction operation until 2014. Persistent delays have occurred due to plant safety issues, environmental requirements, and funding shortfalls. We have previously reported on the significant management challenges in the U.S. chemical demilitarization program, as well as concerns over cost growth and schedule delays. As noted in our prior work, the U.S. chemical weapons demilitarization program spent $11.4 billion by the end of fiscal year 2003, which accounts for nearly half of the program's life-cycle cost estimate of $24 billion.

[Figure 2: U.S. chemical weapons storage sites]

Three other possessor states—Albania, India, and A State Party—account for about 3 percent of the global declared chemical weapons stockpile and are anticipated to meet the CWC complete destruction time line by April 2007.
With smaller stockpiles than those in Russia and the United States, these countries have had less difficulty meeting their deadlines. Albania declared its stockpile to the OPCW in 2003, and the United States is providing assistance to destroy its chemical weapons stockpile. Other nations, including Canada and Italy, may also provide assistance. State officials estimate that Albania will meet the 2007 destruction deadline. According to Indian officials, India has the third largest stockpile after Russia and the United States; however, information on its chemical weapons destruction program is not publicly available. The fifth possessor state, A State Party, experienced interim delays due to technical difficulties. It requested and received an extension of its 45-percent chemical weapons destruction deadline in 2003. According to government officials, it remains on track to meet the 2007 deadline. Libya, the sixth possessor state, has just declared its chemical weapons to the OPCW and has yet to develop a destruction plan for its stockpile. According to the OPCW, less than 40 percent of CWC member states have adopted national laws to criminalize CWC-prohibited activities. Although the treaty does not establish a time line for the adoption of such measures, according to the OPCW, member states are expected to implement these laws soon after ratifying the convention. OPCW officials stated that many member states lack sufficient legal expertise and financial resources to adopt the required laws. At the 2003 CWC Review Conference, however, the United States launched an initiative to assist all CWC member states in adopting comprehensive national laws. The effort culminated in an OPCW action plan to help member states adopt necessary laws by 2005. According to the OPCW, 126 member states have designated a national authority to collect and submit their declarations. 
However, OPCW and State officials estimate that a large number of member states' national authorities are not effective because they lack sufficient financial and human resources. National authorities are important in implementing the treaty because they facilitate member states' ability to submit accurate and timely declarations to the OPCW and host OPCW inspections. To encourage member states to improve the effectiveness of their national authorities, the OPCW hosts workshops to identify common problems and assist member states in addressing them accordingly.

According to a 2001 Department of State report, four CWC member states—China, Iran, Sudan, and Russia—had not acknowledged the full extent of their chemical weapons programs. The CWC requires member states to fully and accurately declare their chemical weapons capabilities. However, State believes that China maintains an active chemical weapons research and development program, a possible undeclared chemical weapons stockpile, and weapons-related facilities that were not declared to the OPCW. Iran has not submitted a complete and accurate declaration and is seeking to retain and modernize key elements of its chemical weapons program, according to the report. Sudan established a research and development program with a goal to produce chemical weapons indigenously. The report also assesses that Russia has not divulged the full extent of its chemical agent and weapons inventory. State views Russia's declaration of its chemical weapons production, development facilities, and chemical agent and weapons stockpiles as incomplete. In addition, State reported that Russia may have knowledge of a new generation of agents that could circumvent the CWC and possibly defeat western detection and protection measures. The significance of this issue was addressed at the 2003 CWC Review Conference.
The Director-General of the OPCW urged member states to provide accurate and complete declarations to increase transparency and confidence in the treaty. Furthermore, member states have been late in submitting their required initial and annual declarations to the OPCW. As of December 2002, nearly 97 percent of all member states submitted their initial declarations, but a large percentage of member states did not submit their initial declarations within the required 30-day time frame. The OPCW also engaged in bilateral consultations to assist member states in submitting their initial declarations. As of October 2003, nearly one-third of member states had failed to submit their annual declarations in a timely manner. According to the OPCW, delays in submitting the required declarations make it difficult for the organization to plan its annual inspections and track chemical transfers. The OPCW has established a credible inspections regime. Between 1997 and 2003, the OPCW conducted nearly 1,600 inspections in 58 member states. However, the organization faces significant challenges as it prepares to balance an increased number of inspections at both military and commercial facilities with its limited resources. The CWC does not specify the number of annual inspections that the OPCW is required to conduct. Since April 1997, more than half of OPCW inspections have taken place at military facilities even though some commercial facilities may pose a greater proliferation threat. To meet the increased demands on its limited resources, the OPCW is working with member states to further improve the efficiency of its inspection activities. From April 1997 through December 2003, the OPCW’s Technical Secretariat has conducted nearly 1,600 inspections at both military and commercial chemical facilities in 58 member states. (See app. II for a chart depicting the locations of inspections conducted.) 
According to OPCW officials and member states' representatives we interviewed, inspections are proceeding as planned under the CWC. Within the United States, officials from State, DOD, and Commerce, as well as chemical industry representatives, stated that the United States and OPCW inspectors work cooperatively to implement the inspection regime. When questions or concerns arise, the Technical Secretariat and the affected member state(s) work to resolve them. For example, the United States and the OPCW have resolved issues such as clarifying which portions of declared commercial facilities are subject to inspection. According to DOD, OPCW inspectors have good access to declared sites and facilities.

As of December 2003, the Technical Secretariat had conducted 965 inspections at 167 of 190 declared military sites. The military sites that have not been inspected are either chemical weapons production facilities destroyed prior to CWC entry into force or sites having old or abandoned chemical weapons. Although the CWC requires that the OPCW maintain a continuous presence at member states' sites when chemical weapons are being destroyed, it does not specify how many inspections are to be conducted annually. The Technical Secretariat determines how many inspections to conduct annually based on the number of military facilities declared by member states, member states' annual destruction plans, annual declarations, and the annual OPCW budget documents. The greatest number of inspections has taken place at chemical weapons destruction facilities—primarily in the United States, Russia, and India. About one-third of all inspections conducted by the Technical Secretariat have taken place in the United States, mostly at chemical weapons destruction facilities. Table 2 shows the number of inspections conducted at different types of facilities at military sites from April 1997 through December 2003.
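The military inspection totals above imply straightforward coverage figures. A small sketch using only the numbers cited (965 inspections at 167 of 190 declared military sites):

```python
# Coverage implied by the military inspection figures cited above
# (April 1997 through December 2003).

inspections = 965
sites_inspected = 167
sites_declared = 190

print(f"{sites_inspected / sites_declared:.0%}")   # prints "88%" of declared sites
print(f"{inspections / sites_inspected:.1f}")      # prints "5.8" inspections per inspected site
```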
Between April 1997 and December 2003, Technical Secretariat officials conducted 634 inspections at 514 sites among the 5,460 commercial facilities declared by member states (see table 3). Because the CWC does not specify the number of inspections to be conducted each year, the Technical Secretariat selects the facilities it will inspect based on those requiring initial inspections and the potential proliferation risk of facilities. The annual budget document specifies the number of inspections to be conducted. Since April 1997, most OPCW commercial inspections have taken place at facilities that produce chemicals listed on the CWC's three schedules. Of the 4,492 declared facilities that produce discrete organic chemicals (DOC), the organization has inspected 163. DOC facilities produce a wide range of common commercial chemicals and may also be capable of producing chemical weapons. According to U.S. government and OPCW officials, such dual-use DOC facilities may pose a proliferation threat because they may conceal CWC-prohibited activities. Most significantly, these DOC facilities may be modified to produce scheduled chemicals and other chemicals that are not specifically listed on current CWC schedules but are still banned by the CWC, if intended for prohibited purposes. In commenting on a draft of our report, the OPCW provided clarification of this proliferation issue. While the majority of commercial facilities produce discrete organic chemicals, the OPCW estimates that less than 20 percent of these DOC sites may pose highly relevant proliferation risks.

Although the OPCW has made progress in conducting inspections as mandated by the convention, it faces challenges in meeting an increase in its inspection workload. As possessor states' destruction activities increase over the next few years, the OPCW will have to maintain a continuous inspection presence at more facilities.
Concurrently, the OPCW wants to increase the number of inspections it conducts at commercial DOC facilities to address proliferation concerns. However, the OPCW has experienced financial difficulties over the past few years. To better meet the increased demand on its resources, the OPCW is working with member states to find more efficient and cost-effective means of conducting its inspection activities. The OPCW projects that the number of chemical weapons destruction facilities that will require monitoring will increase from seven to nine by 2007. Under the CWC, OPCW inspectors must maintain a continuous onsite presence at chemical weapons destruction facilities to monitor and verify the destruction of chemical weapons stockpiles. According to OPCW officials, the organization is reimbursed for about two-thirds of the expenses it incurs during such inspections. OPCW inspection costs will increase, if the organization maintains a continuous on-site presence at the additional chemical weapons destruction sites that will begin operations in the near future. However, the Technical Secretariat and member states are currently discussing possible monitoring alternatives that may reduce costs without compromising the credibility of the inspections. According to the OPCW, the organization is working to increase the number of inspections it conducts at commercial DOC facilities to address the proliferation risks they pose. In 2002, for example, 32 of 85 commercial inspections conducted were at DOC facilities. In 2004, the OPCW plans to increase the number of DOC facility inspections to 70 out of a total of 150 inspections planned at commercial facilities. Furthermore, OPCW and member states are working to refine the current criteria used to select DOC facilities for inspections to ensure that the selection process takes into account all factors mandated by the CWC. 
Due to budget deficits in 2001 and 2002, the Technical Secretariat had to reduce the number of inspections it planned to conduct at commercial chemical facilities. Such deficits were mostly the result of member states’ late payment of their annual assessments and reimbursements for military inspections. When funding was limited, the OPCW could not reduce the number of inspections at destruction facilities because inspectors are required to continuously monitor these sites when operational. Instead, it reduced the number of commercial inspections it conducted. In 2001, the OPCW conducted 57 percent (75 of 132) of its planned inspections at commercial sites. For 2002, it conducted 64 percent (85 of 132) of its planned inspections. Although previous financial difficulties caused a reduction in the number of inspections, the Technical Secretariat completed its planned number of 132 commercial inspections for 2003. Member states approved a more than 6-percent increase in the OPCW’s budget for 2004. According to OPCW officials, such budget increases are unlikely to continue in future years, and the problem of late receipt of member states’ annual assessments and reimbursements will likely reoccur. To meet the increased demand for inspections, the Technical Secretariat is working to improve the efficiency of its inspection activities. The organization has reduced the size of inspection teams at military sites, thereby lowering daily allowance and travel costs. For example, the team size for most inspections conducted at chemical weapons storage facilities was reduced from eight in 2002 to six in 2003. The Technical Secretariat has also devised new contracts for inspectors of chemical weapons destruction facilities that permit hiring part-time inspectors for 1 year. When implemented, such contracts could reduce staff costs and provide for more flexibility in assigning inspection teams. 
The OPCW and member states are also exploring greater use of monitoring and recording instruments at chemical weapons destruction facilities to reduce the number of inspectors needed on-site. Cost-saving measures have also been proposed and implemented to increase the efficiency of inspections conducted at commercial facilities, including reducing the size of inspection teams and the time they spend on-site.

Russia is experiencing delays in destroying its chemical weapons. As of September 2003, Russia had destroyed 1.1 percent of its 40,000 metric tons of chemical weapons at its only operational destruction facility. Russian destruction efforts have also relied almost entirely on international assistance. As of December 2003, international donors have provided about $585 million and committed more than $1.7 billion to Russian destruction efforts. According to State, from 2001 through 2003 Russia budgeted about $420 million for chemical weapons demilitarization-related activities but spent only about $95 million. However, based on its current destruction efforts and the international assistance committed, Russia will not meet the extended CWC destruction deadline of 2012. Furthermore, Russia has yet to develop a comprehensive destruction plan that includes the types of projects and funding needed to completely destroy its declared stockpile, which may further delay destruction efforts. Russia plans to destroy its chemical weapons stockpiles at Gorny, Kambarka, and Shchuch'ye, primarily using assistance provided by Germany and the United States. Russia has yet to develop a credible plan to destroy the remaining 50 percent of its chemical weapons stockpile stored at Maradykovsky, Leonidovka, and Pochep. Table 4 provides the time line for Russia's destruction efforts at facilities in operation or under construction. Russia is relying on German assistance to destroy its stockpile of blister agent at the Gorny and Kambarka facilities.
According to DOD, Germany focused its assistance in this area because it had experience destroying World War II blister agents. As of September 2003, Russia had destroyed 455 metric tons of blister agent (1.1 percent of the Russian stockpile) stored at the Gorny facility. Russia will destroy the remaining stockpile at Gorny by December 2005, according to a German official. Russia constructed the building for the destruction facility, while Germany spent about $58 million from 1993 to 2003 to equip the facility. Germany has committed $120 million for the Kambarka destruction facility, currently under construction, and up to $300 million in additional funds, according to a German government official. The facility at Kambarka will destroy the entire stockpile of blister agent located there by December 2009. The construction schedule of this facility may be delayed, according to a German government official overseeing the assistance.

Once operational, the Shchuch'ye chemical weapons destruction facility will begin to destroy nerve agent from two Russian storage sites that house nearly 30 percent of the total Russian stockpile. The storage facilities at Kizner and Shchuch'ye each house about 5,500 metric tons of nerve agent stored in projectiles and rockets. According to DOD and State officials, the United States has focused its assistance to Russia at Shchuch'ye because these chemical weapons are portable and thus vulnerable to theft and diversion. The United States has agreed to pay for the destruction facility at Shchuch'ye. The facility is scheduled to destroy the nerve agent stockpiles located at both the Shchuch'ye and Kizner storage sites. DOD's Cooperative Threat Reduction program has obligated more than $460 million for planning, design, and construction of the facility. In October 2003, DOD updated the costs and schedule for completing the Shchuch'ye facility and projected that the cost would increase from about $888 million to more than $1 billion.
DOD also noted that the operation of the facility may be delayed from September 2008 to July 2009. DOD attributes the increased cost to changed site conditions, new requirements, risk factors, and delays due to lack of U.S. funding for 2 years caused by Russia’s inability to meet U.S. congressional conditions. Once operational, the facility is estimated to destroy 1,700 metric tons of chemical weapons per year. With a July 2009 operational date, we estimate that the destruction of chemical weapons stored at Shchuch’ye and Kizner will not be completed until at least 2016. (For more detailed information on international assistance for chemical weapons destruction at Shchuch’ye, see app. III.) In November 2003, the Director of the Russian Munitions Agency informed us that Russia has not yet decided how it will destroy the remaining nerve agent stored at Maradykovsky, Leonidovka, and Pochep. This nerve agent represents over 50 percent of the total Russian chemical weapons stockpile. In September 2003, the United States and Russia amended a March 2003 agreement under which the Russian Munitions Agency agreed to complete the elimination of all nerve agent at the Shchuch’ye destruction facility, unless otherwise agreed in writing. According to DOD and Russian government officials, there is uncertainty whether Russia will comply. Russian officials have concerns about the costs and risks of transporting the weapons from these sites to Shchuch’ye, most of which are located more than 500 miles away. As a result, Russian officials have indicated that Russia may construct three chemical weapons neutralization facilities for the nerve agent stored at Maradykovsky, Leonidovka, and Pochep. Under this option, Russia would neutralize the chemical weapons at the three sites so the agent would be safe for transport, and then complete the destruction process at Shchuch’ye. 
This would require the construction of three neutralization facilities plus new destruction capacity at Shchuch’ye, because the neutralized agent would likely be destroyed using a different process than the unneutralized agent from the Shchuch’ye and Kizner sites. In November 2003, however, Italy agreed to commit funding for the construction of a destruction facility at Pochep. While Germany and the United States have obligated about $515 million and committed an additional $1 billion for Gorny, Kambarka, and Shchuch’ye, other donors have spent about $70 million at these sites. Furthermore, in June 2002, the Group of Eight launched the Global Partnership initiative, which was designed to prevent the proliferation of weapons of mass destruction to terrorists and their supporters. Among other projects in Russia, the initiative is currently assisting with chemical weapons destruction. As of December 2003, international donors, including the United States, Germany, Canada, Italy, and the United Kingdom, have committed more than $1.7 billion for Russian chemical weapons destruction. Congress has conditioned U.S. funding for the Shchuch’ye facility on a Secretary of Defense certification that Russia has developed a practical chemical weapons destruction plan. In September 2003, Russia signed an agreement with the United States to provide a chemical weapons destruction plan by March 2004. The plan would include the types of projects and funding needed to completely destroy Russia’s declared chemical weapons. Officials from State and DOD were not optimistic that the Russians would deliver a plan within the required time. According to State and DOD officials, Russia’s planning efforts to date have been based on inaccurate assumptions and have lacked detailed information on how the destruction of chemical weapons will occur at each site. 
For example, Russian officials have stated that they expect the Shchuch’ye chemical weapons destruction facility to be operational in 2006, despite DOD estimates that it may take until July 2009. DOD officials stated that additional time is needed to procure and install the equipment needed for the destruction facility. In addition, Russia’s plans need greater specificity. Russia has provided some information to the United States regarding where the chemical weapons will be destroyed, when they will be destroyed and the amounts at each location, the costs for each facility, and how each facility will contribute to the destruction effort. According to officials from State and DOD, however, the information provided does not appear credible and lacks key elements. Russia has not provided the method, schedule, and cost for transporting its chemical weapons to the destruction facility at Shchuch’ye. In addition, Russia has no credible plan to destroy the nerve agent at Maradykovsky, Leonidovka, and Pochep. Russian officials indicated that the nerve agent may be neutralized at each site but did not provide any details regarding what would be needed to undertake such an effort, including a plan to dispose of the toxic chemicals resulting from the neutralization process. Russia’s chemical weapons destruction efforts at Pochep, Leonidovka, and Maradykovsky may be further complicated by Russia’s definition of destruction, which differs from that of the United States and the OPCW. The CWC defines destruction of chemical weapons as an essentially irreversible process. The United States and the OPCW maintain that chemical weapons are not destroyed until the destruction process is essentially irreversible (i.e., the resulting materials can no longer be converted back into chemical weapons) and the remaining materials can be inspected by the OPCW. 
The United States neutralizes some of its chemical weapons in a two-phase process that first neutralizes the agent and then transports the resulting hazardous waste to a commercial chemical facility for final disposition. The OPCW inspects both phases of the neutralization process. Russian officials maintain that chemical weapons should be considered destroyed after the initial neutralization phase and not require further processing or OPCW inspections. Russian officials argue that, although toxic chemicals resulting from the neutralization process could be reverted to chemical weapons, the cost to do so would be prohibitive. Russia raised this issue at the May 2003 CWC Review Conference, but OPCW member states maintained that complete destruction should be an essentially irreversible process as specified in the CWC. Despite this opposition, Russian government officials at the Russian Munitions Agency and the Ministry of Foreign Affairs stated in November 2003 that they consider initial neutralization equivalent to destruction. The CWC has played an important role in reducing the risks from chemical weapons. Member states have destroyed more than 7,700 metric tons of chemical weapons and the OPCW has established a credible inspection regime that has inspected many military and commercial chemical facilities in 58 countries. Nearly 7 years after entry into force, the CWC’s nonproliferation goals have proven more difficult to achieve than originally anticipated. CWC member states and the OPCW face difficult choices in addressing the delays in Russia’s destruction program, the limited number of inspections at dual-use commercial sites, and the slow progress in passing laws criminalizing CWC-prohibited activities. Decision-makers will have to make some combination of policy changes in these areas if the CWC is to continue to credibly address nonproliferation concerns worldwide. 
First, the destruction of chemical weapons will likely take longer and cost more than originally anticipated. Even with significant international assistance, Russia may not destroy its declared chemical weapons until 15 years beyond the extended CWC deadline. Russia’s large stockpile will thus remain vulnerable to theft and diversion. Several options exist, however, for the United States and other donors to reduce the proliferation risks from Russia’s chemical weapons stockpile. Such options may include (1) increasing funding for security improvements at Russia’s chemical weapons storage sites, (2) deferring financing for Russia’s chemical weapons destruction effort until the Russian government develops a credible destruction plan, or (3) financing the construction of additional destruction facilities. Second, technical advancements in the chemical industry and the increasing number of dual-use commercial facilities worldwide challenge the CWC and the OPCW’s ability to deter and detect proliferation. Member states will need to determine the best policies for addressing potential proliferation at dual-use commercial facilities. CWC member states could decide that the OPCW should conduct more commercial inspections, which would require member states to provide more funding and subject their national chemical industries to additional inspections. Alternatively, member states may determine that the current level of commercial inspections is sufficient to detect and deter activities prohibited by the CWC. Third, many member states have not yet adopted national laws to fully implement the convention, or have not submitted complete and accurate declarations of their CWC-related activities. These problems undermine confidence in overall treaty compliance. It is important for the OPCW and member states to reinforce member states’ obligations to adopt national laws, enforce them accordingly, and submit accurate and timely declarations. 
Challenge inspections may also be a vehicle for ensuring member states’ compliance with the CWC. We obtained written comments on a draft of this report from State, DOD, Commerce, and the OPCW, which are reprinted in appendixes IV, V, VI, and VII, respectively. We also received technical comments from the departments as well as the OPCW, which we have incorporated where appropriate. In commenting on our draft report, State asserted that our report was misleading, incomplete, and not balanced. State did not provide specific examples but instead claimed that the report omitted positive CWC accomplishments such as growth in the number of member states, correction of OPCW management inefficiencies, and OPCW execution of the CWC inspection regime. In response, we agree that the CWC has played an important role in reducing the threat posed by chemical weapons, and the report acknowledges this accomplishment. First, with regard to State’s comment about the growth in the number of CWC member states, the report focuses on CWC implementation among already existing member states. For clarification, however, we have provided additional information on the increase in CWC membership since entry into force. Second, State commented that the report did not assess OPCW management corrections. In this report we reviewed the OPCW’s efforts to conduct inspections, not the management of the organization; we had previously reported on that topic in October 2002. Third, the report clearly articulates that the OPCW has established a credible inspection regime and has conducted nearly 1,600 inspections in 58 member states. While this report discusses several important delays in CWC implementation, it still acknowledges that the CWC and OPCW have made important contributions to addressing the threat posed by chemical weapons. DOD commented that our draft report had little analysis of the relative degree of proliferation risk from those member states lacking implementing legislation. 
DOD, however, did not say what criteria would be used to determine which member states are more important to CWC implementation. As stated in the report, the CWC requires all member states to adopt national implementing legislation. In addition, DOD believes that the report does not provide a balanced perspective because it does not acknowledge successes in implementing the CWC. For example, DOD cites progress made in eliminating former chemical weapons production facilities and destroying Category 2 and Category 3 chemical weapons-related munitions. Such successes, however, remain secondary to the CWC’s primary goal of destroying actual chemical weapons. As stated in this report, the CWC is the only multilateral treaty that seeks to eliminate an entire category of weapons of mass destruction under an established time frame and verify their destruction through inspections. DOD also asserts that the report does not recognize the significant changes occurring within the OPCW. As mentioned previously, this report does not assess OPCW functions or performance because we conducted such a review of the OPCW in October 2002. This report does, however, credit the OPCW with finding more efficient and cost-effective means of conducting its inspection activities as it faces the challenge of meeting an increased inspection workload. We have included additional information in this report to further clarify the achievements of the CWC and the OPCW. Both DOD and State commented that our analysis estimating that Russia may not destroy its chemical weapons stockpile until 2027 was misleading. We have clarified our presentation of this analysis to include a discussion of other options being considered for destroying Russia’s stockpile. As of March 2004, only one facility capable of destroying nerve agent is being constructed in Russia. 
Although plans to build additional facilities are being discussed, we note that construction of the U.S.-funded facility at Shchuch’ye began 11 years after the United States and Russia first agreed to build it. Commerce commended the report for focusing attention on the important issue of member states’ achieving compliance with the CWC. The department noted that the U.S. government has taken a leading role at the OPCW in promoting an action plan to ensure all member states’ adoption of national law implementing the CWC and is providing assistance to member states to achieve this goal. The OPCW commended the draft report for reflecting what has been achieved through CWC implementation and recognizing areas where challenges still exist. It noted, however, that some statements in the report do not reflect the views of the Technical Secretariat. As arranged with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days after the date of this letter. At that time we will send copies of this report to the Secretaries of State, Defense, and Commerce; the Director-General of the OPCW; and other interested congressional committees. We will also make copies available to others upon request. In addition, this report will be available free of charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-8979 if you or your staff have any questions concerning this report. Another GAO contact and staff acknowledgments are listed in appendix VIII. 
To determine what efforts member states have made in meeting key Chemical Weapons Convention (CWC) requirements, we compared these requirements with documents obtained from the Organization for the Prohibition of Chemical Weapons (OPCW) and the Department of State (State), including annual reports that assess member states’ compliance with the treaty, surveys assessing the status of member states’ compliance with key requirements, and member states’ official statements to the 2003 CWC Review Conference. We also obtained information from OPCW officials, including the Director-General, the Deputy Director-General, the Administration Division, the Verification and Inspectorate Division, and the Office of Internal Oversight, as well as member states’ representatives to the OPCW in The Hague. To assess the reliability of the OPCW data regarding whether the member states are meeting their CWC requirements, which include the destruction of chemical weapons, we reviewed numerous OPCW and U.S. government documents, interviewed OPCW and U.S. officials, and examined the OPCW’s procedures for ensuring data reliability. We determined that the OPCW data were sufficiently reliable for the purposes of this engagement. In addition, we met with officials from State’s Bureau of Arms Control, the Bureau of Nonproliferation, the Bureau of Verification and Compliance, and the Bureau of Intelligence and Research in Washington, D.C., and with representatives of the intelligence community. We also met with officials at the U.S. Mission to the OPCW at The Hague. To obtain information on how the CWC is implemented in the United States, we attended the June 2003 Defense Threat Reduction Agency’s CWC Orientation Course held in Fairfax, Virginia. To assess the OPCW’s efforts in conducting inspections to ensure compliance with the convention, we analyzed the CWC and various OPCW documents, including Verification and Implementation Reports, annual budgets, and other reports. 
In The Hague, we met with the Director-General and the Deputy Director-General of the OPCW, and with officials from the Administration Division and the Verification and Inspectorate Division. We also visited the inspection laboratory and equipment store at Rijswijk, The Netherlands. To assess the reliability of the OPCW data regarding the number of inspections being conducted in the CWC member states, we reviewed numerous OPCW and U.S. government documents, interviewed OPCW and U.S. officials, and examined the OPCW’s procedures for ensuring data reliability. We determined that the OPCW data were sufficiently reliable for the purposes of this engagement. To assess member states’ experiences with OPCW inspections, we spoke with numerous member states’ representatives to the OPCW. We also met with officials at the U.S. Mission to the OPCW at The Hague. In addition, we met with officials from State’s Bureau of Arms Control, the Bureau of Nonproliferation, and the Bureau of Verification and Compliance. To obtain an understanding of how OPCW inspections are conducted at military chemical weapons-related facilities in the United States, we met with Department of Defense (DOD) officials from the Defense Threat Reduction Agency. We also toured the U.S. chemical weapons destruction facility in Aberdeen, Maryland. To obtain an understanding of how OPCW inspections are conducted at commercial chemical facilities in the United States, we met with Department of Commerce officials from the Bureau of Industry and Security, Office of Nonproliferation Controls and Treaty Compliance, as well as representatives from the American Chemistry Council. In reviewing Russia’s efforts to destroy its chemical weapons stockpile, we visited the Russian Federation and obtained information from Russian government officials at the Chamber of Accounts, the Russian Munitions Agency, and the Ministry of Foreign Affairs. 
We also met with representatives from the Russian Duma who have funding authority over Russian chemical weapons destruction. In addition, we traveled to Shchuch’ye to observe the U.S.-funded chemical weapons destruction facility and surrounding infrastructure projects. While in Shchuch’ye, we spoke with local government officials and the Cooperative Threat Reduction program-funded contractor responsible for building the Shchuch’ye facility. We obtained information from officials in the Bureau of Nonproliferation and the Bureau of Arms Control in the Department of State. At DOD, we met with officials and acquired documents from the Office of the Secretary of Defense for Cooperative Threat Reduction Policy and the Defense Threat Reduction Agency, which set policy and manage the implementation of CTR assistance to the Shchuch’ye facility. We also obtained information on international donors’ commitments for Russian chemical weapons destruction efforts from DOD and from government representatives of Canada, Germany, and the United Kingdom. We obtained data from a variety of sources on the funding and assistance provided for Russian chemical weapons destruction efforts. To assess the reliability of these data, we interviewed officials from the United States, Canada, France, Germany, Great Britain, Italy, Russia, and the OPCW. We also asked these officials to corroborate other nations’ data wherever possible. In addition, we cross-checked the data on funding to Russia that we were given by our different sources. We determined that the data on funding and assistance provided for Russian chemical weapons destruction were sufficiently reliable for the purposes of this engagement. The information on foreign law in this report does not reflect our independent legal analysis, but is based on interviews and secondary sources. We performed our work from April 2003 through March 2004 in accordance with generally accepted government auditing standards. 
CWPF = chemical weapons production facility
CWDF = chemical weapons destruction facility
CWSF = chemical weapons storage facility
ACW = abandoned chemical weapons
OCW = old chemical weapons
DOC = discrete organic chemicals

The inspection data contained in the table are through December 2002 because the OPCW could not provide more current data until the data had been approved by the CWC member states. Also, the table does not include inspections of the destruction of hazardous chemical weapons or the emergency destruction of chemical weapons in the United States and Russia. The OPCW considers the inspection details for a State Party to be confidential. As of December 2003, the United States and other international donors have obligated about $525 million to develop, build, and support a chemical weapons destruction facility at Shchuch’ye. Russia has spent about $95 million. These funds support three related areas of effort: (1) the design and construction of the destruction facility, (2) the completion of infrastructure located outside the destruction facility necessary for its operation, and (3) community improvement projects in the town of Shchuch’ye. When completed, the Shchuch’ye chemical weapons destruction facility will comprise a complex of about 100 buildings and structures designed to support and complete the destruction of the chemical weapons stored at Shchuch’ye and Kizner, which represent about 30 percent of Russia’s total stockpile. The United States, through the Department of Defense’s (DOD) Cooperative Threat Reduction program, has obligated more than $460 million for the design, construction, equipment acquisition and installation, systems integration, training, and start-up of the facility. The United States plans to spend a total of more than $1 billion to finance the construction of 99 of the 100 buildings and structures within the facility, including one building where the chemical munitions will be disassembled and the chemical agent destroyed. 
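The report's completion estimate can be cross-checked from its own figures: about 5,500 metric tons of nerve agent at each of the two sites, a DOD-estimated destruction rate of 1,700 metric tons per year, and operations projected to begin in July 2009. The following sketch illustrates the arithmetic; the assumption of constant, full-rate throughput from the start of operations is ours, not the report's.

```python
# Cross-check of the destruction timeline using figures cited in this report.
# Constant full-rate throughput from day one is an illustrative assumption.
stockpile_tons = 5_500 * 2    # nerve agent stored at Shchuch'ye and Kizner
rate_tons_per_year = 1_700    # DOD's estimated destruction rate for the facility
start_year = 2009.5           # operations projected to begin July 2009

years_required = stockpile_tons / rate_tons_per_year
print(f"{years_required:.1f} years of operation")                 # 6.5 years
print(f"completion around {round(start_year + years_required)}")  # around 2016
```

Roughly six and a half years of full-rate operation from mid-2009 lands in late 2015 to 2016, consistent with the report's estimate that destruction will not be completed until at least 2016.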
Russia has agreed to fund the construction of a second destruction building at an estimated cost of $150 million to $175 million, according to a DOD official. To date, Russia has spent an estimated $6 million to $8 million on the construction of the second destruction building. Figure 3 illustrates the buildings and structures within the destruction facility at Shchuch’ye. In March 2003, the United States began construction of the Shchuch’ye facility. Figure 4 shows the completed foundation work for the U.S. destruction building as of November 2003. Prior DOD estimates indicated that the facility would begin destroying chemical weapons in August 2008. However, in October 2003, DOD stated that the facility may not be operational until July 2009. Based on the U.S. design, Russia also began constructing its destruction building at the Shchuch’ye complex in 2003, according to a DOD official, but Russia has not provided a completion date for its destruction building. Figure 5 shows the uncompleted foundation work on the Russian-funded destruction building as of November 2003. The operation of the chemical weapons destruction facility at Shchuch’ye depends upon the completion of several infrastructure projects, such as the installation of natural gas and water lines and an electric distribution station. As of October 2003, Russia had spent more than $56 million to support those projects. International donors have spent about $65 million for these and other infrastructure projects, such as the construction of access roads. About $66 million in infrastructure projects, including the installation of sewage and fiber optic lines, remain unfunded. In September 2003, Russia signed an agreement with the United States stating that it would complete all necessary infrastructure to support initial testing of the Shchuch’ye facility. In addition, Russian and U.S. 
officials stated that the town of Shchuch’ye lacks adequate housing, schools, roads, and other services to support the expected influx of destruction facility workers and their families. As of October 2003, the Russian government had spent more than $31 million for a variety of community improvement projects in Shchuch’ye, including a new school, improved medical facilities, and new housing. The following are GAO’s comments on the Department of State letter dated March 19, 2004. 1. State asserts that this report did not sufficiently present positive CWC accomplishments such as the continuous growth in the number of CWC member states, the identification and correction of management inefficiencies at the OPCW, and the effective implementation of the OPCW inspection regime. In response, we included additional information in this report to acknowledge the growth in the number of member states. We also cite that Libya, the sixth possessor state, acceded to the CWC in February 2004. This report does not discuss the management of the OPCW, as we previously reported on the management of the organization under the leadership of the former Director-General, Jose Bustani. We did not review the management of the OPCW under the current Director-General, Rogelio Pfirter, but we acknowledge that he is committed to implementing management reforms. Finally, this report clearly articulates that the OPCW has established a credible inspection regime. 2. State concluded that the entry into force of the CWC led to the discovery of two previously unknown stockpiles and accelerated chemical weapons destruction efforts. In its comments, however, State did not identify the member states that possess the previously unknown stockpiles. 3. State notes that, of the CWC’s 158 member states, 56 of the 61 member states with CWC-declarable facilities have adopted national laws. This statement implies that only countries with CWC-declarable facilities should adopt national implementing laws. 
As stated in the report, the CWC requires all member states to adopt national implementing laws. The Assistant Secretary of State for Arms Control stated in his remarks to the 2003 CWC Review Conference that the lack of national implementing laws among member states is troubling “in light of the efforts of Al Qaeda and other terrorist organizations to acquire chemical weapons.” 4. State indicated that Russia budgeted roughly $420 million for all of its chemical weapons demilitarization-related activities between 2001 and 2003 and that Russia’s approved 2004 budget requests about $180 million more. We have included this additional information in the report, as it was not previously provided to us. 5. State contends that our estimated deadline of 2014 for the complete destruction of the U.S. chemical weapons stockpile is unsubstantiated. The department further asserts that our 2027 estimate for the completion of Russia’s chemical weapons destruction assumes a single nerve agent destruction facility at Shchuch’ye and that we omit the possibility of constructing additional destruction facilities. We have clarified the 2014 deadline by adding information citing a U.S. chemical weapons destruction facility schedule indicating that the facility will not complete its destruction operations until 2014. While we acknowledge that Russia may construct additional destruction facilities, our analysis is based on the destruction capacity of the one nerve agent destruction facility currently under construction. At this time, there are no other nerve agent destruction facilities under construction and no definitive plans for building additional facilities. Furthermore, Russia has agreed to eliminate all nerve agent at Shchuch’ye, unless otherwise agreed in writing. 
In March 2004 congressional testimony, the Deputy Undersecretary of Defense for Technology Security Policy and Counterproliferation stated that the Shchuch’ye facility “will destroy all of Russia’s nerve agent inventory.” While Russian officials have indicated that Russia may construct neutralization facilities at Pochep, Leonidovka, and Maradykovsky, Russia has yet to provide a detailed plan or cost estimates. 6. State contends that the option of delaying further assistance to Russia could result in a greater proliferation threat. State implies that we are presenting only one option, when in fact this report provides numerous options, including providing additional assistance for Russian chemical weapons destruction. Furthermore, Congress has previously exercised the option of withholding U.S. assistance for Russian chemical weapons destruction. 7. State claims that facilities that produce discrete organic chemicals (DOC) are of little or no proliferation concern to the CWC. However, information we obtained from State, Commerce, DOD, and the OPCW contradicts this statement. Officials and documents from all four organizations clearly expressed concern over the potential proliferation risks from DOC facilities. This report, therefore, indicates that these facilities produce a wide range of common commercial chemicals and may be capable of producing chemical weapons. 8. State cites that this report omits the fact that all existing chemical weapons production, storage, and destruction facilities have been inspected multiple times. We have included this information to further clarify the inspection coverage described in this report. The following are GAO’s comments on the Department of Defense letter dated March 18, 2004. 1. DOD stated that this report provides little or no analysis to conclude how many of those member states lacking implementing legislation truly pose a proliferation risk. 
In its comments, however, DOD did not say what criteria would be used to determine which member states are more important to CWC implementation. As stated in this report, the CWC requires all member states to adopt national implementing legislation after ratifying the convention. 2. According to DOD, this report does not give the visibility it should have to some of the central nonproliferation aspects of the CWC, such as a discussion of the proliferation risks associated with discrete organic chemical facilities. This report includes a specific discussion of how such dual-use facilities pose a proliferation threat because they may conceal CWC-prohibited activities. This report does not further elaborate on the degree of proliferation risk posed by these facilities, as such information is classified. 3. DOD believes that this report does not provide a balanced perspective because it does not acknowledge successes in implementing the CWC. For example, DOD cites progress made in eliminating former chemical weapons production facilities and destroying Category 2 and Category 3 chemical weapons-related munitions. Such successes, while important, remain secondary to the CWC’s primary goal of destroying actual chemical weapons. As stated in the report, the CWC is the only multilateral treaty that seeks to eliminate an entire category of weapons of mass destruction under an established time frame and verify their destruction through inspections. DOD also asserts that this report does not recognize the significant changes occurring within the OPCW. This report does not assess OPCW functions or performance because we conducted such a review of the OPCW in October 2002. This report does, however, credit the organization with finding more efficient and cost-effective means of conducting its inspection activities as it faces the challenge of meeting an increased inspection workload. 
In addition, we have provided information in this report to further clarify that OPCW inspectors have access to declared facilities and that there are now 161 member states of the OPCW, including Libya. 4. DOD raised a concern about this report’s option to delay financial assistance for Russia’s destruction program. The report provides a variety of policy options for decision-makers, including providing more financial assistance to finance the construction of additional destruction facilities in Russia. Furthermore, Congress has restricted U.S. assistance for Russian chemical weapons destruction in the past. 5. DOD stated that this report does not adequately point out that two additional stockpiles have been added to the list of chemical weapons being destroyed. In its comments, however, DOD did not identify the member states that possess these stockpiles. If DOD had provided clarification, such information could have been included in this report, provided that the information was not classified. The following is GAO’s comment on the Organization for the Prohibition of Chemical Weapons’ letter dated March 25, 2004. 1. We made changes to this report to accurately reflect the technical comments we received from the OPCW. In addition to the individual named above, Beth A. Hoffman León, Nanette J. Ryen, Julie A. Chamberlain, and Lynn Cothern made key contributions to this report. Etana Finkler and Pierre R. Toureille also provided assistance. The General Accounting Office, the audit, evaluation, and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. 
GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents at no cost is through the Internet. GAO’s Web site (www.gao.gov) contains abstracts and full- text files of current reports and testimony and an expanding archive of older products. The Web site features a search engine to help you locate documents using key words and phrases. You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as “Today’s Reports,” on its Web site daily. The list contains links to the full-text document files. To have GAO e-mail this list to you every afternoon, go to www.gao.gov and select “Subscribe to e-mail alerts” under the “Order GAO Products” heading.
The Chemical Weapons Convention (CWC) bans chemical weapons and requires their destruction by 2007, with possible extensions to 2012. The CWC also seeks to reduce the proliferation of these weapons by requiring member states to adopt comprehensive national laws to criminalize CWC-prohibited activities. The Organization for the Prohibition of Chemical Weapons (OPCW) monitors the destruction of chemical weapons and inspects declared commercial facilities in member states. GAO was asked to review (1) member states' efforts to meet key convention requirements, (2) OPCW's efforts in conducting inspections to ensure compliance with the convention, and (3) Russia's efforts to destroy its chemical weapons stockpile. The CWC has helped reduce the risks from chemical weapons, but CWC member states are experiencing delays in meeting key convention requirements as the CWC's goals have proven more difficult to achieve than anticipated. For example, we estimate that Russia and the United States will not complete destruction of their chemical weapons stockpiles until after the convention's deadline of 2012, if extended. Fewer than 40 percent of member states have adopted national laws to prosecute individuals who pursue CWC-prohibited activities. The Department of State also believes that China, Iran, Russia, and Sudan have not fully declared the extent of their chemical weapons programs. The OPCW faces resource challenges in addressing the proliferation threat posed by commercial facilities and inspecting an increased number of military facilities that destroy possessor states' chemical weapons. Although the OPCW has conducted nearly 1,600 inspections in 58 member states since April 1997, more than half have been conducted at military facilities. About 36 percent of OPCW commercial inspections have taken place at facilities producing the most dangerous chemicals identified by the CWC.
The OPCW recognizes that it must increase the number of inspections conducted at facilities that produce dual-use chemicals. Some of these facilities may pose a proliferation threat. The lack of a credible Russian chemical weapons destruction plan has hindered and may further delay destruction efforts, leaving Russia's vast chemical weapons arsenal vulnerable to theft or diversion. As of September 2003, Russia had one operational destruction facility and had destroyed 1.1 percent of its 40,000 metric tons of chemical weapons. Russia's destruction efforts rely heavily on international assistance. Since 1993, international donors, including the United States, have obligated about $585 million for Russian destruction efforts, while Russia has spent about $95 million.
The CDFI Fund confers certification on CDFIs that meet the Fund's six statutory and regulatory criteria and that have the primary mission of providing capital and development services to economically distressed communities generally underserved by conventional financial institutions. CDFIs provide products and services (such as mortgage financing for low-income and first-time homebuyers and financing for not-for-profit affordable housing developers) that otherwise may not be accessible in these communities. CDFIs can be for-profit or nonprofit institutions and can be funded by private and public sources. Depository CDFIs such as community development banks and credit unions obtain capital from customers and nonmember depositors. Depository and nondepository CDFIs may obtain funding from conventional financial institutions, such as banks, in the form of loans. In addition, both types of CDFIs may receive funding from corporations, individuals, religious institutions, and private foundations. Finally, CDFIs may apply for federal grants and participate in federal loan programs. For example, Treasury's CDFI Fund makes grants, equity investments, loans, and deposits to help CDFIs serve low-income people and communities. Other federal funding sources include loan programs administered by the Department of Agriculture and the Small Business Administration. As of December 31, 2014, there were a total of 933 certified CDFIs (411 depository and 522 nondepository). The 12 FHLBanks are regionally based cooperative institutions owned by member financial institutions (see fig. 1). To join a regional FHLBank, a financial institution (such as a nondepository CDFI) must meet certain eligibility requirements and purchase capital stock; thereafter, it must maintain an investment in the capital stock of the FHLBank sufficient to satisfy the minimum investment required for that institution in accordance with the FHLBank's capital plan.
On February 27, 2015, the FHLBank of Des Moines and the FHLBank of Seattle announced that the members of both FHLBanks had ratified an agreement approved by their boards of directors in September 2014 to merge. The FHLBanks anticipate that the merger will be effective by the middle of 2015. Single-family mortgage loans are loans for 1–4 unit properties. Collateral requirements reflect the risk-management policies of the FHLBank, which applies haircuts based on factors such as risks associated with the member's creditworthiness, the type of collateral being pledged, and the illiquidity of the collateral. The differences between nondepository CDFIs and other FHLBank members range from the degree to which they focus on community development to differences in size and supervision. Two member types—nondepository and depository CDFIs—share a primary community development focus. As noted previously, both types of CDFIs must have a primary mission of promoting community development to be certified by the CDFI Fund. CDFIs serve as intermediary financial institutions that promote economic growth and stability in low- and moderate-income communities. Frequently, CDFIs serve communities that are underserved by conventional financial institutions and may offer products and services that generally are not available from conventional financial institutions. Such products and services include mortgage financing for low-income and first-time homebuyers; homeowner or homebuyer counseling; financing for not-for-profit affordable housing developers; flexible underwriting and risk capital for needed community facilities; financial literacy training; technical assistance; and commercial loans and investments to assist start-up businesses in low-income areas. Although other FHLBank members may provide similar services to similar populations, community development may not be their primary mission. Nondepository CDFIs are smaller in asset size than most depository institution and insurance company FHLBank members.
As of December 31, 2014, active members of the FHLBank System had approximately $20 trillion in assets. As shown in table 1, as of the same date, median assets for nondepository CDFI members (approximately $43 million) were lower than median assets for both depository members (approximately $207 million) and insurance company members (approximately $975 million). The largest nondepository CDFI had about $708 million in assets, while the largest insurance company member had assets of about $393 billion and the largest depository member had assets of about $2 trillion. In addition, the 30 nondepository CDFI members altogether accounted for about 0.01 percent of the total assets of all active FHLBank members, whereas depository and insurance company members held about 77 percent and about 23 percent of FHLBank assets, respectively. In addition, nondepository CDFIs, unlike other FHLBank members, are not supervised by a prudential federal or state regulator. Depository FHLBank members are regulated and supervised by federal and state agencies that have responsibility for helping ensure the safety and soundness of the financial institutions they oversee, promoting stability in the financial markets, and enforcing compliance with applicable consumer protection laws. To achieve these goals, regulators establish capital requirements for banks and conduct on-site examinations and off-site monitoring that assess their financial condition, including their compliance with applicable laws, regulations, and agency guidance. Insured depository institutions also must submit to their regulators quarterly financial information, commonly known as Call Reports, that follows generally accepted accounting principles (GAAP). Insurance companies are regulated primarily by state insurance commissioners and are subject to examination.
While the CDFI Fund's review standards are not equivalent to the examination standards applicable to regulated depository institutions, the Fund requires a nondepository CDFI to submit its most recent year-to-date financial statements prepared in conformity with GAAP for certification and funding eligibility. The CDFI Fund also requires nonprofit and for-profit nondepository CDFIs receiving awards to annually submit financial statements—including information on financial position, operations, activities, and cash flows—that have been audited by an independent certified public accountant. However, only a subset of CDFIs receives CDFI Fund awards and is subject to such reporting. In addition to financial statements of individual nondepository CDFIs, other sources can provide information on the financial performance of nondepository CDFIs overall or individually. For example, the CDFI Fund reports on its analysis of financial data from nondepository CDFIs. The CDFI Snapshot Analysis for fiscal year 2012 (the most recent available at the time of our review) notes that community development loan funds, one type of nondepository CDFI, had rates of loan loss (loans that may prove uncollectible) of 1 percent, which compared favorably with depository CDFIs and mainstream financial institutions. A national network of CDFIs reported that its members' annual net charge-off rate (debts an entity is unlikely to collect) was the same as for all FDIC-insured institutions in fiscal year 2012. It also noted that its members had provided more than $33 billion in cumulative financing for community development activities from their inception through the end of fiscal year 2012. This financing, the network reported, helped to create or maintain nearly 600,000 jobs, support the development or rehabilitation of more than 960,000 housing units, and start or expand nearly 94,000 businesses and microenterprises.
In addition, for a fee, a community development loan fund can be assessed by an independent third party and receive a financial strength and performance rating; the third party rates a CDFI using a methodology similar to that used by banking regulators. In the case of financial failure, nondepository CDFIs and depository members also undergo different processes for liquidating assets to repay the FHLBanks for any advances. Depository members, including depository CDFIs, are insured by FDIC or NCUA, which means that FDIC or NCUA would serve as the receiver in the event of failure. In a typical bank or thrift failure, FDIC, acting as receiver, is responsible for outstanding advances of the failed institution. FDIC will facilitate a purchase and assumption transaction with another financial institution or sell the failed institution's assets, including collateral that had been pledged to secure the advances, to mitigate losses to FDIC's Deposit Insurance Fund. Because nondepository CDFIs are neither federally nor state insured, according to FHFA, the FHLBanks likely would go through the federal bankruptcy process to settle claims should a nondepository CDFI with FHLBank advances fail. Collateral requirements (which must be met to obtain advances) rather than the membership requirements themselves can discourage nondepository CDFIs from seeking FHLBank membership. Because regulations allow the FHLBanks to set their own thresholds for meeting some membership requirements, the requirements varied. The rates of nondepository CDFI membership also varied by FHLBank and were low. The FHLBanks generally impose collateral requirements on nondepository CDFIs that are comparable to those imposed on depository members categorized as higher risk and, in some cases, comparable to those imposed on insurance companies.
Officials from the nondepository CDFIs we interviewed generally cited steep haircuts (discounts) and the availability of eligible collateral as the primary challenges to obtaining advances; in addition, some viewed the requirements as a disincentive to seeking membership (because advances are a primary benefit of membership). While nondepository CDFIs must meet seven standards for FHLBank membership, the thresholds the FHLBanks set for meeting certain of the requirements varied. The Federal Home Loan Bank Act and FHFA's regulations establish the membership requirements for nondepository CDFIs. Nondepository CDFIs must:

- be duly organized under tribal law or the laws of any state or the United States;

- be certified by the CDFI Fund;

- make long-term home mortgage loans, which are defined by statute to include loans secured by first liens on residential real property. Under FHFA regulations, institutions satisfy this requirement if they originate or purchase long-term first mortgage loans on single-family or multifamily residential property, or certain farm or business property that also includes a residence, or purchase mortgage pass-through securities representing an undivided ownership in such loans. By regulation, FHFA has defined "long-term" loans to include those with an original term to maturity of 5 years or more;

- be in a financial condition that would allow advances to be safely made to it. FHFA developed four financial condition standards for the FHLBanks to use in their assessments: a net asset ratio of at least 20 percent; positive average net income over the preceding 3 years; a ratio of loan loss reserves to loans and leases 90 days or more delinquent of at least 30 percent; and an operating liquidity ratio of at least 1.0 for the 4 most recent quarters, and for 1 or both of the 2 preceding years. If the nondepository CDFI met the standards, it would be presumed to be financially sound and to satisfy the requirement.
If the CDFI did not meet one or more standards, the CDFI may offer a rebuttal and the FHLBank would perform a separate analysis to determine if the CDFI was financially sound. In addition, nondepository CDFIs must:

- have management whose character is consistent with sound and economical home financing. Under FHFA's regulations, an applicant meets this requirement if it certifies to the FHLBank that neither the CDFI nor its senior officials have been the subject of any criminal, civil, or administrative proceedings reflecting upon creditworthiness, business judgment, or moral turpitude in the past 3 years and that there are no known potential criminal, civil, or administrative monetary liabilities, lawsuits, or unsatisfied judgments arising within the past 3 years that are significant to the applicant's operations;

- have a home financing policy that is consistent with sound and economical home financing. Under FHFA regulations, applicants meet this requirement if they provide a written justification, acceptable to the FHLBank, explaining how and why their home financing policy is consistent with the FHLBank System's housing finance mission; and

- have mortgage-related assets that reflect a commitment to housing finance.

Nondepository CDFIs are not required to meet the statutory requirement that applies to certain insured depository institutions to hold at least 10 percent of their assets in residential mortgage loans to be eligible for FHLBank membership. In addition, the FHLBanks also must require all new members to purchase capital stock. The FHLBanks have discretion in developing rules to assess compliance with some of the listed requirements. For example, the FHLBanks can set thresholds (such as dollar amounts or percentages) to satisfy requirements for which FHFA has not set thresholds, such as the requirement for making long-term home mortgage loans and the requirement to hold mortgage-related assets. Each FHLBank also may develop its own requirement for membership stock purchases, subject to FHFA approval.
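FHFA's four financial-condition standards are simple quantitative thresholds, so the presumption-of-soundness test can be expressed directly. The sketch below is illustrative only: the function and parameter names are ours, not FHFA's, and only the numeric thresholds come from the report.

```python
def presumed_financially_sound(net_asset_ratio, avg_net_income_3yr,
                               reserves_to_delinquent_ratio,
                               recent_liquidity_ratios):
    """Apply FHFA's four financial-condition standards for nondepository CDFIs.

    Meeting all four creates a presumption of financial soundness; failing any
    one triggers a rebuttal and a separate FHLBank analysis instead.
    recent_liquidity_ratios: operating liquidity ratio for each of the 4 most
    recent quarters (the full standard also looks at 1 or both preceding years).
    """
    return (net_asset_ratio >= 0.20                     # net assets >= 20 percent
            and avg_net_income_3yr > 0                  # positive 3-year average income
            and reserves_to_delinquent_ratio >= 0.30    # reserves >= 30 percent of 90+ day delinquencies
            and all(r >= 1.0 for r in recent_liquidity_ratios))

# A CDFI meeting every threshold is presumed sound; one below any threshold is not.
sound = presumed_financially_sound(0.25, 150_000, 0.50, [1.2, 1.1, 1.0, 1.3])
weak = presumed_financially_sound(0.15, 150_000, 0.50, [1.2, 1.1, 1.0, 1.3])
```

The second call fails only the net asset ratio standard, which under the rule above is enough to defeat the presumption and send the applicant to the rebuttal process.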
We reviewed the three requirements for which the FHLBanks have discretion in making rules and found that the requirements varied across the FHLBanks. Making long-term mortgages. Eight of the 12 FHLBanks we reviewed had not developed a threshold for nondepository CDFIs to satisfy the long-term mortgage requirement, while four had specified a dollar amount or percentage of assets in long-term mortgage loans. FHFA expects that, in assessing an applicant, the FHLBanks will consider the extent to which nondepository CDFIs have a commitment to housing finance in light of their unique mission and community development orientation. The four FHLBanks that had quantitative minimums had minimum requirements that ranged from $1,000 to $1 million in dollar amounts, and from 1 percent to 2 percent of total assets. One FHLBank's stated policy included an exemption from its particular minimum requirement for nondepository CDFIs that plan to incorporate long-term mortgage loans into future business strategies. Another FHLBank that had a dollar minimum recently gave a nondepository CDFI an exemption from the minimum requirement based on the assessment that the CDFI had a significant commitment to housing in accordance with regulatory and membership requirements. For the remaining eight FHLBanks that did not set a minimum requirement, nondepository CDFIs can satisfy the long-term mortgage requirement by documenting that they have originated or purchased more than one such loan or qualifying mortgage investment. Mortgage-related assets. Four of the 12 FHLBanks we reviewed did not have minimum requirements for the mortgage-related asset requirement, 5 had quantitative and qualitative measures (such as an assessment of the CDFI's housing-related activities and mission), and 3 had only quantitative measures. The highest minimum quantitative requirement for mortgage-related assets as a percentage of total assets was 10 percent.
The three FHLBanks with only quantitative requirements had the lowest requirements, with one FHLBank requiring two mortgage-related assets, another requiring $1,000 in mortgage-related assets, and another requiring the lower of 1 percent of total assets or $10 million in mortgage-related assets. Stock purchases. The amount of stock that members must purchase varied according to each FHLBank's funding strategy (see table 2). FHLBank members must hold a certain amount of membership capital stock as a continuing condition of membership. Each FHLBank determines as a part of its capital plan the amounts that all members must purchase in membership capital stock and sets its requirement based on the FHLBank's business model. Five of the 12 FHLBanks we reviewed calculated the membership stock purchase as a percentage of the member's total assets. The other 7 FHLBanks calculated the purchase as a percentage of a specific asset category, such as mortgage-related assets or certain assets eligible to be pledged as collateral. The FHLBanks also require members to purchase activity-based stock. That is, members must acquire a specific amount of stock based on the product—such as advances or letters of credit—the FHLBank provided to that member. The purchases are specified as a percentage of the dollar amount of each transaction the member conducted with the FHLBank. For example, among the 12 FHLBanks, the purchase requirements on advances ranged from 2 percent to 5 percent. For instance, if a member had a $2 million advance transaction with the FHLBank, it would have to purchase from $40,000 to $100,000 in capital stock.
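The activity-based stock purchase described above is a flat percentage of each transaction. A minimal sketch of the arithmetic (the function name is ours; the 2 to 5 percent range and the $2 million example come from the report):

```python
def activity_based_stock(transaction_amount, purchase_rate):
    """Capital stock a member must buy for a transaction (e.g., an advance),
    computed as the FHLBank's activity-based rate times the dollar amount."""
    return transaction_amount * purchase_rate

# The report's example: a $2 million advance at rates of 2 to 5 percent
# requires between $40,000 and $100,000 in capital stock.
low = activity_based_stock(2_000_000, 0.02)
high = activity_based_stock(2_000_000, 0.05)
```

Because the rate is set per FHLBank, the same advance can carry a stock-purchase cost that differs by a factor of 2.5 across districts.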
While FHLBank and CDFI industry officials we interviewed cited several membership requirements that could pose a challenge for nondepository CDFI applicants (including financial condition, long-term home mortgage loan, mortgage-related asset, and stock purchase requirements), most of the nondepository CDFIs we interviewed were able to meet these requirements or stated that they would be able to meet them. Financial condition requirements. Officials we interviewed from 9 of the 12 nonmember nondepository CDFIs stated that they would be able to meet the financial condition standards, while 2 stated that they would potentially face challenges with the financial condition standards. In addition to interviewing officials from nondepository CDFIs that were nonmembers, we reviewed the applications of the 27 nondepository CDFIs that were members as of September 2014. Seven of the 27 nondepository CDFIs did not meet at least one of the financial condition standards at the time of their application, but made successful rebuttals and became members. Making long-term mortgages. Of the 12 nonmember nondepository CDFIs we interviewed, officials from 1 cited the "makes long-term home mortgage loans" requirement as a challenge for membership. In addition, officials from 1 of the 10 member nondepository CDFIs we interviewed cited this as a challenge, but noted that they received an exemption from the minimum quantitative requirement imposed by the FHLBank. The officials from the remaining 11 nonmember and 9 member CDFIs did not identify this requirement as a challenge. Officials from two FHLBanks stated that CDFIs in general may face challenges meeting this requirement, as some nondepository CDFIs may not make or hold long-term home mortgage loans if they are not involved in mortgage lending. Mortgage-related assets.
Although the mortgage-related asset requirement varies among the FHLBanks, none of the officials from the 12 nonmember nondepository CDFIs we interviewed stated that they would face challenges meeting this requirement. Stock-purchase requirements. Officials from 1 of the 12 nonmember CDFIs we interviewed stated that the amount of membership stock they would be required to purchase was cost prohibitive, while officials from the 10 member CDFIs we interviewed stated that the amount required was not a challenge to membership. Nondepository member CDFIs we interviewed were able to purchase the required amount of membership stock. Officials from one nonmember nondepository CDFI in the FHLBank-Chicago district said that the CDFI was approved for membership, but did not become a member because the stock purchase requirement was too high. FHLBank-Pittsburgh recently amended its capital plan by lowering the membership and activity-based stock purchase calculations, citing benefits to CDFIs. In addition, FHLBank- Chicago recently reduced its minimum membership stock purchase requirement to make it less costly for nondepository CDFIs and others to join. (We discuss these and other changes later in this report.) The rates of nondepository CDFI membership generally were low, ranging from 2.08 percent to 15.38 percent of nondepository CDFIs in each FHLBank district (see fig. 2). As of December 31, 2014, 30 of the 522 nondepository CDFIs were FHLBank members, and 6 of the 12 FHLBanks had membership rates of less than 5 percent for the nondepository CDFIs in their districts. The number of nondepository CDFI members has increased every year since the first joined in 2010. Forty percent (12 of 30) of the current nondepository CDFI members joined the FHLBank System in 2014. As of the end of 2014, all 12 FHLBanks had at least one nondepository CDFI member; 2 approved their first nondepository CDFI member in 2013 and another 3 did so in 2014. 
According to FHFA officials, some nondepository CDFIs may not be good candidates for FHLBank membership. They noted that the majority of nondepository CDFIs make nonhousing loans such as microloans, small business loans, and commercial loans. In addition, FHFA officials stated that many of the nondepository CDFIs engaged in housing-related activities have low asset volumes. Due to the differences between nondepository CDFIs and other FHLBank members discussed earlier, representatives from the FHLBanks stated that nondepository CDFIs have certain risks that depository members do not have. The risks cited included the lack of supervision by a regulator and uncertainty related to the liquidation process in the event of insolvency. As noted previously, the FHLBanks are required by statute and FHFA regulations to develop and implement collateral standards and other policies to mitigate the risk of default on outstanding advances. To address risks associated with nondepository CDFIs, the FHLBanks can place limits on eligible collateral and generally impose collateral requirements on nondepository CDFIs seeking advances that are comparable to those imposed on depository members categorized as higher risk and, in some cases, insurance companies. Some of the CDFIs and FHLBanks we interviewed cited these collateral requirements as a disincentive for nondepository CDFI membership. Although they are allowed by regulation to accept certain types of collateral from all of their members, some FHLBanks have chosen to limit the types of eligible collateral that nondepository CDFIs can pledge. (This is also sometimes the case for other nondepository members such as insurance companies.) FHLBanks can accept FHLBank deposits as collateral. The securities collateral FHLBanks can accept includes U.S. Treasury and agency securities, U.S. agency mortgage-backed securities, and privately issued mortgage-backed securities (including residential and commercial). 
The types of mortgage collateral that FHLBanks can accept include single-family and multifamily mortgage loans; mortgage or other loans issued, insured, or guaranteed by the U.S. government or its agencies; commercial real estate loans; and home equity loans or lines of credit. Nondepository CDFIs are eligible to pledge FHLBank deposits, securities, and mortgage loans as collateral for advances at all 12 FHLBanks. During the course of our work, three FHLBanks—Atlanta, New York, and Pittsburgh—changed their policies to allow mortgage loans as eligible collateral from nondepository CDFIs. Pittsburgh changed its policies in August 2014, New York in September 2014, and Atlanta in December 2014. All the other FHLBanks have had policies that allowed mortgage loans as eligible collateral from nondepository CDFIs since nondepository CDFIs became eligible for membership in 2010. Officials from FHLBanks in Atlanta, New York, and Pittsburgh stated that due to the different risks posed by nondepository CDFIs, they initially took conservative stances on accepting loan collateral. The risks they cited included the lack of a clear resolution mechanism in the case of bankruptcy and the FHLBank not being able to obtain blanket liens on pledged collateral. Within the general collateral categories (such as securities and mortgage loans), each FHLBank can impose specific collateral eligibility requirements, such as the quality of the collateral. For example, for nondepository CDFIs, one FHLBank disallows nonagency mortgage- backed securities, another FHLBank disallows commercial real estate collateral, and five FHLBanks disallow home equity lines of credit or home equity loans. At two FHLBanks, nondepository CDFIs can pledge mortgage loan collateral only if the CDFIs have certain credit ratings. 
The collateral requirements—specifically, the pledge method and haircuts—applicable to nondepository CDFIs seeking advances are comparable to those generally imposed on depository members categorized as higher risk and, in some cases, to those imposed on insurance companies. Based on our review of each FHLBank's policies, all FHLBanks evaluate the creditworthiness and financial condition of their members, including nondepository CDFIs. Factors included in many of the evaluations are capital adequacy, asset quality, management quality, earnings, and liquidity. Additionally, the FHLBanks (with the exception of Topeka) assign credit ratings to their depository members that indicate the creditworthiness and financial condition of these members. Of the 11 FHLBanks that assign credit ratings to depository members, 9 also assign credit ratings to nondepository CDFIs, with 2 (Atlanta and San Francisco) using a separate rating system specific to nondepository CDFIs. The remaining 2 FHLBanks (New York and Indianapolis) do not assign credit ratings to nondepository CDFIs. While the metrics and methodology used to evaluate members differ, policies across FHLBanks generally reflect differential treatment between depository institutions and nondepository CDFIs (and other nondepository institutions such as insurance companies). For example, all FHLBanks require nondepository members to deliver collateral, but generally only depository members with low credit ratings are required to list or deliver collateral. The FHLBanks differed in the extent to which they varied haircuts (discounts) for nondepository CDFIs and depository institutions. For securities collateral, eight FHLBanks imposed the same haircut on nondepository CDFIs as on depository members for all eligible types of securities collateral. In contrast, four imposed higher haircut ranges on nondepository CDFIs.
For loan collateral, six FHLBanks generally applied the same haircuts to nondepository CDFIs and depository institutions. One applied a higher-range haircut for single-family mortgages to nondepository CDFIs than to depository institutions; five FHLBanks applied higher haircut ranges to nondepository CDFIs than to depository institutions; and another FHLBank applied the lower end of the haircut range to nondepository CDFIs. FHLBanks generally varied the haircut based on the types and quality of collateral, credit score or financial condition of the member, and pledge method (for loans). In general, haircuts were higher for collateral with lower ratings or of lower quality. See tables 3 and 4 for the specific haircuts each FHLBank imposed on nondepository CDFIs and depository institutions for securities and loan collateral. In all cases, each FHLBank may change these requirements at its discretion. See appendix II for more information on each FHLBank's credit rating system and collateral requirements for advances, and how they may differ for nondepository CDFIs and depository institutions. Four FHLBanks—Des Moines, New York, Pittsburgh, and San Francisco—had conditions on advance terms and borrowing limits specific to nondepository CDFIs. In general, advance terms and conditions varied widely. For example, FHLBanks offered advances with terms to maturity ranging from overnight to 30 years. FHLBanks may establish an overall credit limit for their borrowers. For example, the overall credit limit for FHLBank-Chicago was 35 percent of a member's total assets. However, the amount a borrower can obtain is also partly dependent upon the amount and value of qualifying collateral available to secure the advance. FHLBanks may impose additional restrictions depending on the financial condition of the borrower, such as restrictions on the type of product, term of advance, and amount of credit available.
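A haircut functions as a discount on pledged collateral: the FHLBank lends only against the collateral's value net of the haircut, so steeper haircuts mean less borrowing capacity per dollar pledged. A minimal sketch of that relationship (the dollar amounts and rates here are hypothetical, not drawn from any FHLBank's actual schedule):

```python
def lendable_value(collateral_value, haircut):
    """Maximum advance supported by pledged collateral after the haircut.

    haircut is the discount fraction; e.g., 0.40 means the FHLBank lends
    against only 60 percent of the collateral's value. Lower-quality
    collateral and higher-risk members generally draw larger haircuts.
    """
    return collateral_value * (1.0 - haircut)

# Hypothetical: $10 million in mortgage loans pledged at a 40 percent haircut
# supports roughly $6 million in advances; at a 20 percent haircut, roughly $8 million.
steep = lendable_value(10_000_000, 0.40)
mild = lendable_value(10_000_000, 0.20)
```

This arithmetic is why officials describing "steep haircuts" treated them as a disincentive: a higher haircut directly shrinks the advance a given collateral pool can secure.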
Examples of specific conditions imposed on nondepository CDFIs by the four FHLBanks include the following:

- FHLBank-Des Moines imposed a maximum amount of borrowing capacity and term available based on member credit ratings. Nondepository CDFIs were subject to a lower borrowing capacity than depository institutions with the same ratings.

- FHLBank-New York limited the maximum advance term to 5 years for nondepository CDFIs.

- FHLBank-Pittsburgh limited the maximum advance term to 2 years for nondepository CDFIs.

- FHLBank-San Francisco had a term limit of 7 years for its nondepository CDFIs.

For more information on each FHLBank's advance terms and borrowing limits for nondepository CDFIs and depository institutions, see appendix III. Officials from most of the nondepository CDFIs we interviewed cited access to low interest-rate advances from the FHLBanks as the primary benefit of membership, and some FHLBank and nondepository CDFI officials cited collateral requirements as challenges or disincentives to obtaining advances. Officials from three FHLBanks stated that the lack of eligible collateral was a disincentive for nondepository CDFIs seeking membership. Officials from 21 (10 members and 11 nonmembers) of the 22 nondepository CDFIs we interviewed cited access to low interest-rate advances from the FHLBanks as the primary benefit of membership. Officials from 5 of the 12 nonmember nondepository CDFIs interviewed said that they would not be interested in membership if they could not obtain advances. Officials from 10 FHLBanks and 12 (6 members and 6 nonmembers) nondepository CDFIs stated that lack of eligible collateral was a challenge to obtaining advances for nondepository CDFIs. The reasons the officials provided for lack of collateral eligibility included not possessing mortgage-related collateral, not having unencumbered assets (those free and clear of liens or claims by other creditors), and not having quality collateral that met FHLBank standards.
For example, officials from FHLBank-Chicago stated that most nondepository CDFIs possessed assets, such as small business loans, that did not qualify based on statute and regulation as eligible collateral. Officials from four FHLBanks and seven nondepository CDFIs (three members and four nonmembers) stated that the requirement to pledge unencumbered assets was a challenge for nondepository CDFIs. Collateral encumbrance may occur when a CDFI is also a loan consortium that makes loans to borrowers on behalf of its members. Quality of collateral also affected collateral eligibility. For instance, officials from FHLBank-Cincinnati provided an example of a nondepository CDFI member whose collateral consisted exclusively of subprime mortgage loans. Due to the FHLBank’s constraints on exposure to subprime residential mortgage loan collateral (no more than 60 percent of borrowing capacity could stem from these loan types), the FHLBank was not able to accept the loans as collateral. Steep haircuts were cited as a disincentive to applying for advances. Officials from 6 (2 members and 4 nonmembers) of the 22 nondepository CDFIs we interviewed cited high haircuts as a disincentive for obtaining advances. For example, officials from a nondepository CDFI member said that their haircuts were very steep and that they likely will not obtain advances again unless the FHLBank eased the requirements. Officials from a nonmember nondepository CDFI in another district stated that the haircut was too restrictive. Officials from all the member nondepository CDFIs we interviewed said that FHLBank membership had not affected their business activities or that they had not considered changing their business activities to better meet the collateral requirements. However, officials from three of the nonmember nondepository CDFIs we interviewed said that they have been taking actions to obtain assets that could be used as eligible collateral. 
One of these nonmember nondepository CDFIs was buying mortgage-backed securities to better meet collateral requirements. Additionally, officials from five FHLBanks said that their nondepository CDFI members had changed the structure of certain loans or repositioned their assets to create eligible collateral for advances. From October 2010 to September 2014, less than half of the nondepository CDFI members obtained advances from the FHLBanks. Six FHLBanks provided 115 advances totaling about $306.7 million to 12 nondepository CDFIs during this period (see fig. 3). However, two FHLBanks provided 57 advances to four nondepository CDFIs that accounted for almost 98 percent of the total advance amount. Of the 115 advances, approximately 36.5 percent had terms of less than 1 year (including advances with overnight terms), 15.7 percent had terms of more than 1 year to less than 5 years, 44.3 percent had terms of 5 years or longer, and 3.5 percent had open terms. FHFA and FHLBanks have made efforts to broaden the participation of nondepository CDFIs in the FHLBank System. According to FHFA officials, FHFA’s final rule implementing the HERA provisions that allow nondepository CDFI membership in the FHLBank System allows for certain flexibilities in meeting membership requirements. FHFA oversight of FHLBanks did not focus on FHLBanks’ membership approval process or advance and collateral practices as it relates to nondepository CDFIs and did not identify any safety and soundness concerns or action plans. FHFA and the FHLBanks have undertaken several efforts to help promote membership of nondepository CDFIs in the FHLBank System. As noted previously, FHFA’s final rule to implement HERA provisions on nondepository CDFI membership in the FHLBank System allows for certain flexibilities in meeting membership requirements. In 2009, FHFA drafted a proposed rule that sought to amend the membership regulations and issued it for public comment. 
The substantive issues raised in the comments on membership focused on the criteria that FHFA proposed for FHLBanks to use in evaluating the financial condition of nondepository CDFIs applying for membership. According to FHFA officials, the CDFI community also was concerned about nondepository CDFIs not meeting basic membership requirements, such as making long-term mortgage loans and carrying mortgage-related assets. FHFA reviewed the comments and issued a final rule in January 2010. If an applicant cannot meet the presumptive financial conditions, the final FHFA regulations allow nondepository CDFIs to submit additional information demonstrating that the applicant is in sufficiently sound condition to obtain membership and advances. The final rule also did not extend to nondepository CDFI applicants the requirement to demonstrate that 10 percent of their total assets are in residential mortgage loans. FHFA oversight of FHLBanks as it relates to nondepository CDFIs did not focus on membership processes due to the low risk posed, and its oversight of collateral practices did not identify areas of concern. FHFA conducts annual examinations of the FHLBanks that cover these topics, among others. According to FHFA officials, FHFA examines FHLBanks’ membership approval processes to ensure that they comply with FHFA’s eligibility requirements and implement a risk-management process that is intended to mitigate the FHLBanks’ exposure to significant risks, especially legal, credit, and operational risk. FHFA reviewed aspects of each FHLBank’s membership process periodically in 2010 through 2013. However, according to FHFA, it did not focus on processes specific to nondepository CDFIs because nondepository CDFIs pose low safety and soundness and credit risks, in aggregate, to FHLBanks due to their low rates of membership and advances. 
According to FHFA officials, FHFA currently reviews each nondepository CDFI’s application for membership and has not objected to any nondepository CDFI application submitted by the FHLBanks. It primarily reviews applications to gather information about the FHLBanks’ membership approval process. In annual examinations of each FHLBank in 2010 through 2013, FHFA reviewed the FHLBanks’ collateral and advance practices for nondepository CDFIs and did not find any safety and soundness issues. FHFA’s advances and collateral examination manual calls for it to evaluate the FHLBanks’ procedures for analyzing and monitoring members, including nondepository CDFIs, and their outstanding advances. The manual also advises that special attention be given to FHLBanks’ collateral practices for CDFIs because nondepository CDFIs have no dedicated regulator. Furthermore, FHFA advises that FHLBanks’ credit risk-management procedures be tailored to address risks unique to each member type. For example, FHLBanks should consider that nondepository CDFIs likely are covered by federal bankruptcy statutes and not by the same receivership laws as insured depository institutions. FHFA and the FHLBanks have undertaken several efforts to help educate nondepository CDFIs about and promote membership in the FHLBank System. According to FHFA officials, FHFA conducted a training session and webinar on the membership rule in February 2009, followed up on questions from CDFIs about the regulations, and tracked the progress of nondepository CDFIs in gaining membership. Officials from FHFA have made themselves available for questions about and problem solving in relation to the rules. According to FHFA and FHLBank officials as well as nondepository CDFIs we interviewed, FHFA has been encouraging FHLBanks to discuss ways in which they could increase nondepository CDFI membership and access to advances in a safe and sound manner. 
For example, at a speech to the FHLBank boards and executive management in early 2014, FHFA encouraged all the FHLBanks to meet collectively to discuss collateral practices that might facilitate advance activity with nondepository CDFIs, and emphasized the importance of the FHLBanks’ understanding of CDFI business models and funding needs. According to FHFA officials, as a result of that speech, the FHLBanks held a conference in August 2014 with the nondepository CDFI community to discuss facilitating membership and better understand the business of nondepository CDFIs. As a follow-up to the conference, FHLBank credit officers held nondepository CDFI credit review training in October 2014. Furthermore, the FHFA Director also met with nondepository CDFI officials and trade groups in July 2014. In addition, all FHLBanks performed their own outreach to the nondepository CDFI community. For example, all the FHLBanks met with FHFA and nondepository CDFI members and nonmembers at the August 2014 conference to better understand nondepository CDFIs. Ten of the FHLBanks we interviewed have initiated discussions with and solicited membership applications from nondepository CDFIs since the conference. Some FHLBanks made changes in response to feedback from nondepository CDFI members. As noted previously, three of the FHLBanks that had restrictive collateral eligibility requirements amended these requirements to make obtaining advances easier for nondepository CDFIs. Two FHLBanks also made changes to their capital stock purchase requirements to allow a nondepository CDFI to be able to meet the stock purchase amount. According to the FHLBank officials, FHFA has been supportive of the changes they made to better accommodate nondepository CDFI membership and access to advances. FHFA officials told us that they have continued to encourage the FHLBanks to facilitate broader nondepository CDFI membership and access to advances. 
We provided a draft of this report to FHFA and the 12 FHLBanks for their review and comment. FHFA and four FHLBanks (Chicago, Cincinnati, Indianapolis, and Topeka) provided technical comments, which we incorporated as appropriate. The other eight FHLBanks did not provide any comments. In its comments, FHLBank-Chicago also stated that our report unfairly compares nondepository CDFIs with depository institutions and that a better comparison would be regulated institutions versus nonregulated or less regulated institutions (because claims would be handled similarly for regulated institutions). Specifically, FHLBank-Chicago noted that an FHLBank likely would go through the federal bankruptcy process to settle claims if a nondepository CDFI with FHLBank credit outstanding failed, whereas a federal or state regulator would facilitate the process to settle claims if a regulated institution such as a bank, credit union, or insurance company with FHLBank credit outstanding failed. However, the purposes of our report explicitly include discussing how nondepository CDFIs differ from other members of the FHLBank System (in particular, depository members) and the membership and collateral requirements for these CDFIs. We understand that risks vary by type of institution and noted several differences—including in supervision and the liquidation of assets—between nondepository CDFIs and other types of FHLBank members in our report. Comparing the collateral requirements for nondepository CDFIs with those for depository institutions enabled us to determine how the FHLBanks address the different risks posed by nondepository CDFIs. 
Moreover, in terms of resolution treatments, there is no uniform approach to settling claims even within the category of “regulated institutions.” For instance, FHFA stated in one of its advisory bulletins that “FHLBanks face risks lending to insurance companies that differ in certain respects with lending to federally-insured depository institutions” and noted that “laws dealing with a failed insured depository institution are well known and uniform across the country, whereas, the laws dealing with the failure of an insurance company are less well known to the FHLBanks and, though similar, may vary somewhat from state to state.” Therefore, we maintain that our comparisons were fair and made no change to the report in response to this comment. In another comment, FHLBank-Chicago stated that the report implies that by loosening collateral requirements (some of which are dictated by law or regulation), more nondepository CDFIs would be eligible or willing to become FHLBank members. It noted that this was not necessarily the case, as a majority of nondepository CDFIs would not qualify for membership because of their lines of business (small business lending, microlending, and commercial lending) and because they have encumbered assets. We believe that these points are already adequately addressed in our report. Specifically, in the report we note that the types of eligible collateral are dictated by regulation. In addition, we state in the report that FHFA officials told us that some nondepository CDFIs may not be good candidates for FHLBank membership because the majority of nondepository CDFIs make nonhousing loans such as microloans, small business loans, and commercial loans. Furthermore, we note that several FHLBanks and nondepository CDFIs we interviewed told us that the requirement to pledge unencumbered assets was a challenge for nondepository CDFIs. 
We undertook these interviews to help understand the level of demand for FHLBank membership and obtain views on any challenges associated with obtaining membership and advances. Therefore, we made no change to the report in response to this comment. In its comments, FHLBank-Indianapolis stated that the report could do a better job of making it clear that (1) FHLBanks accept assets as collateral and develop haircut methodologies to comply with regulations and an expectation of no losses in the event of default and (2) pledging illiquid assets can increase the haircut. In response, we added language in the body of the report that reiterated language in our background section stating that FHLBanks are required by statute and FHFA regulations to develop and implement collateral standards and other policies to mitigate the risk of default on outstanding advances. We also added language to the report noting that the illiquidity of assets can affect haircuts. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees and members, the Director of FHFA, the Council of the FHLBanks, and the 12 FHLBanks. This report will also be available at no charge on our website at http://www.gao.gov. Should you or your staff have questions concerning this report, please contact me at (202) 512-8678 or garciadiazd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. 
The objectives of this report were to discuss (1) how nondepository community development financial institutions (CDFI) differ from other members of the Federal Home Loan Bank (FHLBank) System, in particular depository members; (2) the membership and collateral requirements for nondepository CDFIs and challenges posed by these requirements; and (3) Federal Housing Finance Agency (FHFA) oversight of FHLBanks in relation to nondepository CDFIs and efforts by FHFA and FHLBanks to increase participation of nondepository CDFIs in the FHLBank System. To describe differences between nondepository CDFIs and other members of the FHLBank System, we reviewed relevant sections of the Housing and Economic Recovery Act of 2008 (HERA) and FHFA’s final rule on nondepository CDFI membership in the FHLBank System. In addition, we reviewed other relevant information from the FHLBanks and CDFI industry, such as reports by the Department of the Treasury’s Community Development Financial Institutions Fund (CDFI Fund) and the Opportunity Finance Network. We determined that these studies were methodologically sound and reliable for our purposes. To compare the asset sizes of different types of FHLBank members (nondepository CDFIs, depository institutions, and insurance companies), we analyzed available data on their assets from FHFA’s membership database as of December 31, 2014. For these institution types, we calculated the distribution of their assets (minimum assets, 25th percentile, median assets, 75th percentile, and maximum assets). To assess the reliability of these data, we reviewed information about the system, interviewed knowledgeable officials, and analyzed the data for logical consistency and completeness. We found that these data were sufficiently reliable for the purpose of comparing the asset sizes of different types of FHLBank members. 
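The asset-distribution calculation described above (minimum, 25th percentile, median, 75th percentile, and maximum) can be sketched as follows. The asset figures below are hypothetical placeholders; the actual analysis used FHFA membership data as of December 31, 2014:

```python
# A minimal sketch of the five-number distribution summary described above.
# The sample asset figures (in millions of dollars) are illustrative only.
import statistics

def asset_distribution(assets):
    """Return (min, 25th percentile, median, 75th percentile, max).

    Uses the "inclusive" quantile method, which interpolates between
    observed data points; other conventions would give slightly
    different quartile values for small samples.
    """
    q1, median, q3 = statistics.quantiles(assets, n=4, method="inclusive")
    return min(assets), q1, median, q3, max(assets)

# Hypothetical asset sizes for one member type:
sample = [12, 45, 88, 150, 310, 675, 1200]
print(asset_distribution(sample))  # (12, 66.5, 150.0, 492.5, 1200)
```

Running this once per member type (nondepository CDFIs, depository institutions, and insurance companies) yields the side-by-side distributions the methodology describes.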
To address membership and collateral requirements, we reviewed relevant legislation and regulations, such as the Federal Home Loan Bank Act and FHFA’s final rule on nondepository CDFI membership. We also reviewed documentation—such as nondepository CDFI membership applications and available FHLBank guidance on assessing nondepository CDFIs for membership—from each of the FHLBanks to determine membership requirements and identify any differences among FHLBank policies. Specifically, one GAO analyst reviewed each FHLBank’s requirements for membership and identified differences. For example, in the three areas where FHLBanks had discretion, the analyst determined whether FHLBanks had set a minimum quantitative or qualitative threshold that an applicant needed to meet. A second analyst then verified the accuracy of this information. Nondepository CDFIs are subject to specific financial condition requirements. We requested and received financial data from the CDFI Fund but determined that the dataset did not contain relevant data needed to determine how many nondepository CDFIs could meet these financial condition requirements. To determine the number of nondepository CDFIs that were members from calendar years 2010 through 2014, we analyzed data from FHFA’s membership database as of December 31, 2014. To calculate the membership rate (the percentage of nondepository CDFIs in each district that were members), we used (1) data from FHFA’s membership database on the number of members as of December 31, 2014, and (2) data from the CDFI Fund on the total number of nondepository CDFIs as of December 31, 2014. We assessed the reliability of data from both systems by reviewing any relevant documentation, interviewing knowledgeable officials, and analyzing the data for logical consistency and completeness. We determined that the data were sufficiently reliable for the purposes of assessing rates of membership for nondepository CDFIs. 
To determine each FHLBank’s requirements for obtaining advances and any differences among the FHLBanks, we reviewed relevant documentation such as each FHLBank’s collateral guidelines and product and credit policies. Using these documents, we identified the haircut (discount) for eligible collateral types for depository and nondepository institutions and other collateral requirements, such as the term of advances and collateral pledging methods. Our review of FHLBank documents showed that FHLBanks do not describe their collateral requirements uniformly. Although we took several steps that enabled us to present comparable categories of collateral across the FHLBanks, our analysis did not account for differences in the eligibility criteria for collateral that may be accepted, such as quality of collateral. As a result, the haircuts for different FHLBanks are not comparable. First, we excluded from our analysis the following types of collateral because they were only mentioned in some FHLBanks’ documents: U.S. Treasury separate trading of registered interest and principal securities, agency structured bonds, agency collateralized mortgage obligation accrual bonds, second mortgage-backed securities, student loan asset-backed securities, agricultural real estate loans, land loans, construction loans, student loans, mutual funds, and municipal or state and local securities. Second, because some FHLBanks identified specific haircuts for securities, such as those originating from the Federal Deposit Insurance Corporation, while other FHLBanks listed haircuts for a general category of agency securities, we grouped all the agency securities and provided the range of haircuts. We included in the agency securities category any securities issued or guaranteed by the U.S. 
government, including those originating from the Federal Deposit Insurance Corporation, National Credit Union Administration, Fannie Mae, Freddie Mac, Ginnie Mae, the Federal Home Loan Banks, and the Small Business Administration. Third, because some FHLBanks identified specific haircuts for specific government-guaranteed loan collateral while others did not, we grouped all government-guaranteed loan collateral together, including loans originating from the Farm Service Agency, Department of Agriculture, Small Business Administration, Federal Housing Administration, and Department of Veterans Affairs. Fourth, because haircuts can vary based on the quality of the collateral pledged, we provided the range of haircuts for each type of collateral accepted by each FHLBank. While we were able to review each FHLBank’s collateral policies and procedures, the confidentiality of such information limited what we could publicly disclose in our report. Specifically, because the collateral haircut policies of the FHLBanks generally are considered proprietary information, we were unable to attribute specific policies to individual FHLBanks. Where appropriate, we used randomly assigned numbers when discussing FHLBank collateral policies to prevent disclosure of FHLBank identities. Additionally, we obtained data from each FHLBank on the amount of advances secured by each nondepository CDFI member from October 2010 to September 2014 (the most recent data available at the time of our request). We assessed the reliability of these data by obtaining information from the six FHLBanks that provided advances to nondepository CDFIs on the system they used to store the data and the procedures in place for recording and ensuring the accuracy of the data. We also reviewed the data for logical consistency and completeness. We determined that the data were sufficiently reliable for reporting the amount of advances obtained by nondepository CDFIs. 
We also interviewed officials from the 12 FHLBanks, 3 trade groups, 10 nondepository CDFIs that were members of the FHLBanks, and 12 nondepository CDFIs that were not members to understand the level of demand for FHLBank membership and obtain views on any challenges associated with membership processes and obtaining advances. To develop the purposive, nonrandom sample of 10 nondepository FHLBank member CDFIs to interview, we selected a nondepository CDFI from each of the 10 FHLBanks that had a nondepository CDFI member as of March 31, 2014 (the most recent data available when we began our work and selected members to interview). In addition to geographic diversity, we sought variation in asset size, financial institution type, and FHLBank advance status. We also selected a purposive, nonrandom sample of 12 nondepository CDFIs that were not members of the FHLBank System, one from each of the 12 FHLBank districts. We selected these 12 from a sample of nondepository CDFIs that were identified during our meetings with member CDFIs and CDFI trade groups as being interested in FHLBank membership. In addition to geographic diversity, we sought variation in asset size when selecting nonmembers to interview. We interviewed officials from all 22 nondepository CDFIs by telephone, focusing on the background of the CDFI and its experience with and opinions of the FHLBank membership and advance processes. The views expressed by the nondepository CDFIs in our sample cannot be generalized to the entire population of nondepository CDFIs. To evaluate FHFA’s oversight, we reviewed relevant laws, legislative history, and regulations (including its final rule on nondepository CDFI membership) to identify FHLBanks’ authority to expand membership to nondepository CDFIs and FHFA’s oversight authority. We also reviewed FHFA examination policies related to membership and collateral requirements to obtain advances. 
To determine if membership and advance practices were reviewed and there were any findings, we analyzed each FHLBank’s examination results for fiscal years 2010 through 2013 (the most recent examinations available at the time of our request). We interviewed FHFA and the 12 FHLBanks to further understand examination policies and practices for membership and advances and discuss any FHFA efforts to facilitate broader nondepository CDFI participation in the FHLBank System. We conducted this performance audit from May 2014 to April 2015, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The collateral requirements—specifically the pledge method for loan collateral and haircuts (discounts)—assessed on advances to nondepository community development financial institutions (CDFI) vary from those imposed on depository members. For example, all Federal Home Loan Banks (FHLBanks) require nondepository CDFIs to deliver collateral (a requirement that also would be applied to higher-risk depository institutions), and in some cases, nondepository CDFIs receive higher haircuts than depository institutions. For each FHLBank, we compare the pledge method and haircuts applied to depository institutions and nondepository CDFIs below (see table 5). Most Federal Home Loan Banks (FHLBanks) do not have advance terms and borrowing limits specific to nondepository community development financial institutions (CDFI). However, four FHLBanks (Des Moines, New York, Pittsburgh, and San Francisco) do have specific advance terms and borrowing limits. We summarize the advance terms and borrowing limits for each FHLBank below (see table 6). 
In addition to the contact named above, Paige Smith (Assistant Director), Akiko Ohnuma (Analyst-in-Charge), Farah Angersola, Evelyn Calderon, Pamela Davidson, Kerri Eisenbach, Courtney LaFountain, John McGrail, Marc Molino, Barbara Roesmann, Jim Vitarello, and Weifei Zheng made key contributions to this report.
The Housing and Economic Recovery Act of 2008 (HERA) made nondepository CDFIs eligible for membership in the FHLBank System. The System includes 12 regional FHLBanks that make loans, known as advances, to their members at favorable rates. GAO was asked to review the FHLBanks' implementation of HERA provisions relating to nondepository CDFIs. Among other things, this report discusses (1) challenges posed by membership and collateral requirements for nondepository CDFIs, and (2) FHFA and FHLBank efforts to facilitate broader nondepository CDFI participation in the System. GAO analyzed data on membership rates as of December 2014 and advances obtained as of September 2014; reviewed requirements for gaining membership and obtaining advances; and interviewed FHLBank and FHFA officials and a sample of nondepository CDFIs based on selected criteria, including geography and asset size. Specifically, GAO interviewed 10 nondepository CDFIs that were members (one from each FHLBank district with a nondepository CDFI member when GAO began work) and 12 that were not members (one from each of the 12 districts). GAO makes no recommendations in this report. GAO provided a draft of this report to FHFA and the 12 FHLBanks for comment. FHFA and four FHLBanks provided technical comments that were incorporated into the report as appropriate. Collateral requirements rather than membership requirements discouraged some nondepository community development financial institutions (CDFI)—loan or venture capital funds—from seeking membership in the Federal Home Loan Bank (FHLBank) System. CDFIs are financial institutions that provide credit and financial services to underserved communities. Less than 6 percent of nondepository CDFIs (30 of 522) were members of the System as of December 2014 (see figure). 
Requirements for membership (such as stock purchase amounts) can vary where regulation gives FHLBanks discretion, but nondepository CDFIs GAO interviewed generally stated these requirements did not present a challenge. In addition, most FHLBanks imposed collateral requirements on nondepository CDFIs—such as haircuts (discounts on the value of collateral)—comparable with those for depository members categorized as higher risk. (This was sometimes also the case for other nondepository members such as insurance companies.) FHLBank officials stated nondepository CDFIs have different risks compared with depository members (for example, nondepository CDFIs are not supervised by a prudential federal or state regulator as are other FHLBank members). To address these risks, they imposed more restrictive requirements. Some of the nondepository CDFIs GAO interviewed cited limited availability of eligible collateral and steep haircuts as challenges for obtaining advances and therefore a disincentive to seeking membership. Less than half of the nondepository CDFIs that were members as of September 2014 had borrowed from the FHLBanks; the cumulative advances from October 2010 to September 2014 totaled about $307 million (less than 1 percent of the total advances outstanding as of December 2014). Two FHLBanks made the majority of the advances. The Federal Housing Finance Agency (FHFA), which oversees the System, and FHLBanks have facilitated efforts to broaden nondepository CDFI participation in the System by educating about and promoting membership to nondepository CDFIs. For example, FHFA officials told GAO that they encouraged the FHLBanks to hold a conference to discuss nondepository CDFI membership. Officials from 10 FHLBanks also stated that they had solicited applications from CDFIs. In late 2014, several FHLBanks amended stock purchase and collateral requirements to better accommodate nondepository CDFI membership and access to advances.
The concept of “universal service” has traditionally meant providing residential telephone subscribers with nationwide access to basic telephone services at reasonable rates. The Telecommunications Act of 1996 broadened the scope of universal service to include, among other things, support for schools and libraries. The act instructed the commission to establish a universal service support mechanism to ensure that eligible schools and libraries have affordable access to and use of certain telecommunications services for educational purposes. In addition, Congress authorized FCC to “establish competitively neutral rules to enhance, to the extent technically feasible and economically reasonable, access to advanced telecommunications and information services for all public and nonprofit elementary and secondary school classrooms . . . and libraries. . . .” Based on this direction, and following the recommendations of a Federal-State Joint Board on Universal Service, FCC established the schools and libraries universal service mechanism that is commonly referred to as the E-rate program. The program is funded through statutorily mandated payments by companies that provide interstate telecommunications services. Many of these companies, in turn, pass their contribution costs on to their subscribers through a line item on subscribers’ phone bills. FCC capped funding for the E-rate program at $2.25 billion per year, although funding requests by schools and libraries can greatly exceed the cap. For example, schools and libraries requested more than $4.2 billion in E-rate funding for the 2004 funding year. In 1998, FCC appointed USAC as the program’s permanent administrator, although FCC retains responsibility for overseeing the program’s operations and ensuring compliance with the commission’s rules. 
In response to congressional conference committee direction, FCC has specified that USAC “may not make policy, interpret unclear provisions of the statute or rules, or interpret the intent of Congress.” USAC is responsible for carrying out the program’s day-to-day operations, such as maintaining a Web site that contains program information and application procedures; answering inquiries from schools and libraries; processing and reviewing applications; making funding commitment decisions and issuing funding commitment letters; and collecting, managing, investing, and disbursing E-rate funds. FCC permits—and in fact relies on—USAC to establish administrative procedures that program participants are required to follow as they work through the application and funding process. Under the E-rate program, eligible schools, libraries, and consortia that include eligible schools and libraries may receive discounts for eligible services. Eligible schools and libraries may apply annually to receive E-rate support. The program places schools and libraries into various discount categories, based on indicators of need, so that the school or library pays a percentage of the cost for the service and the E-rate program funds the remainder. E-rate discounts range from 20 percent to 90 percent. USAC reviews all of the applications and related forms and issues funding commitment decision letters. Generally, it is the service provider that seeks reimbursement from USAC for the discounted portion of the service rather than the school or library. FCC established an unusual structure for the E-rate program but has never conducted a comprehensive assessment of which federal requirements, policies, and practices apply to the program, to USAC, or to the Universal Service Fund itself.
FCC recently began to address a few of these issues, concluding that as a permanent indefinite appropriation, the Universal Service Fund is subject to the Antideficiency Act and that USAC’s issuance of commitment letters constitutes obligations for purposes of the act. However, FCC’s conclusions concerning the status of the Universal Service Fund raise further issues relating to the collection, deposit, obligation, and disbursement of those funds—issues that FCC needs to explore and resolve comprehensively rather than in an ad hoc fashion as problems arise. The Telecommunications Act of 1996 neither specified how FCC was to administer universal service to schools and libraries nor prescribed the structure and legal parameters of the universal service mechanisms to be created. To carry out the day-to-day activities of the E-rate program, FCC relied on a structure it had used for other universal service programs in the past—a not-for-profit corporation established at FCC’s direction that would operate under FCC oversight. However, the structure of the E-rate program is unusual in several respects compared with other federal programs: FCC appointed USAC as the permanent administrator of the Universal Service Fund, and FCC’s Chairman has final approval over USAC’s Board of Directors. USAC is responsible for administering the program under FCC orders, rules, and directives. However, USAC is not part of FCC or any other government entity; it is not a government corporation established by Congress; and no contract or memorandum of understanding exists between FCC and USAC for the administration of the E-rate program. Thus, USAC operates and disburses funds under less explicit federal ties than many other federal programs. Questions as to whether the monies in the Universal Service Fund should be treated as federal funds have troubled the program from the start. 
Even though the fund has been listed in the budget of the United States and, since fiscal year 2004, has been subject to an annual apportionment from the Office of Management and Budget (OMB), the monies are maintained outside of Treasury accounts by USAC and some of the monies have been invested. The United States Treasury implements the statutory controls and restrictions involving the proper collection and deposit of appropriated funds, including the financial accounting and reporting of all receipts and disbursements, the security of appropriated funds, and agencies’ responsibilities for those funds. Since the inception of the E-rate program, FCC has struggled with identifying the nature of the Universal Service Fund and the managerial, fiscal, and accountability requirements that apply to the fund. In the past, FCC’s Inspector General (IG) has noted that the commission could not ensure that Universal Service Fund activities were in compliance with all laws and regulations because the issue of which laws and regulations were applicable to the fund was unresolved. During our review, FCC officials told us that the commission has substantially resolved the IG’s concerns through recent orders, including FCC’s 2003 order that USAC begin preparing Universal Service Fund financial statements consistent with generally accepted accounting principles for federal agencies (GovGAAP) and keep the fund in accordance with the United States Government Standard General Ledger. While it is true that these steps and other FCC determinations should provide greater protections for universal service funding, FCC has addressed only a few of the issues that need to be resolved. In fact, staff from the FCC’s IG’s office told us that they do not believe the commission’s GovGAAP order adequately addressed their concerns because the order did not comprehensively detail which fiscal requirements apply to the Universal Service Fund and which do not. 
FCC maintains that it has undertaken a timely and extensive analysis of the significant legal issues associated with the status of the Universal Service Fund and has generally done so on a case-by-case basis. We recognize that FCC has engaged in internal deliberations and external consultations and analysis of a number of statutes. However, we do not believe that this was done in a timely manner or that it is appropriate to do this on a case-by-case basis, which puts FCC and the program in the position of reacting to problems as they occur rather than setting up an organization and internal controls designed to ensure compliance with applicable laws. As you know, Mr. Chairman, a problem with this ad hoc approach was dramatically illustrated with regard to the applicability of the Antideficiency Act to the Universal Service Fund. In October 2003, FCC ordered USAC to prepare financial statements for the Universal Service Fund, as a component of FCC, consistent with GovGAAP, which FCC and USAC had not previously applied to the fund. In February 2004, staff from USAC realized during contractor-provided training on GovGAAP procedures that the commitment letters sent to beneficiaries (notifying them whether their funding is approved and in what amount) might be viewed as “obligations” of appropriated funds. If so viewed, and if FCC also found the Antideficiency Act—which does not allow an agency or program to incur obligations in excess of available budgetary resources—to be applicable to the E-rate program, then USAC would need to dramatically increase the program’s cash-on-hand and lessen the program’s investments to provide budgetary authority sufficient to satisfy the Antideficiency Act. As a result, USAC suspended funding commitments in August 2004 while waiting for a commission decision on how to proceed.
At the end of September 2004—facing the end of the fiscal year— FCC decided that commitment letters were obligations; that the Antideficiency Act did apply to the program; and that USAC would need to immediately liquidate some of its investments to come into compliance with the Antideficiency Act. According to USAC officials, the liquidations cost the fund approximately $4.6 million in immediate losses and could potentially result in millions in foregone annual interest income. In response to these events, in December 2004, Congress passed a bill granting the Universal Service Fund a one-year exemption from the Antideficiency Act. As we explain more fully in our report, Mr. Chairman, we agree with FCC’s determinations that the Universal Service Fund is a permanent appropriation subject to the Antideficiency Act and that its funding commitment decision letters constitute recordable obligations of the Universal Service Fund. However, there are several significant fiscal law issues that remain unresolved. We believe that where FCC has determined that fiscal controls and policies do not apply, the commission should reconsider these determinations in light of the status of universal service monies as federal funds. For example, in view of its determination that the fund constitutes an appropriation, FCC needs to reconsider the applicability of the Miscellaneous Receipts Statute, 31 U.S.C. § 3302, which requires that money received for the use of the United States be deposited in the Treasury unless otherwise authorized by law. FCC also needs to assess the applicability of other fiscal control and accountability statutes (e.g., the Single Audit Act and the Cash Management Improvement Act). Another major issue that remains to be resolved involves the extent to which FCC has delegated some functions for the E-rate program to USAC. 
For example, are the disbursement policies and practices for the E-rate program consistent with statutory and regulatory requirements for the disbursement of public funds? Are some of the functions carried out by USAC, even though they have been characterized as administrative or ministerial, arguably inherently governmental activities that must be performed by government personnel? Resolving these issues in a comprehensive fashion, rather than continuing to rely on reactive, case-by-case determinations, is key to ensuring that FCC establishes the proper foundation of government accountability standards and safeguards for the E-rate program and the Universal Service Fund. We are encouraged that FCC just announced that it has contracted with the National Academy of Public Administration (NAPA) to study and explore alternative models to the current organizational and governance structure of the Universal Service Fund program. We believe this study will go a long way toward addressing the concerns outlined in our report and we look forward to seeing the results of NAPA’s efforts. Although $13 billion in E-rate funding has been committed to beneficiaries during the past 7 years, FCC did not develop useful performance goals and measures to assess the specific impact of these funds on schools’ and libraries’ Internet access and to improve the management of the program, despite our 1998 recommendation to do so. At the time of our current review, FCC staff was considering, but had not yet finalized, new E-rate goals and measures in response to OMB’s concerns about this deficiency in a 2003 OMB assessment of the program. One of the management tasks facing FCC is to establish strategic goals for the E-rate program, as well as annual goals linked to them.
The Telecommunications Act of 1996 did not include specific goals for supporting schools and libraries, but instead used general language directing FCC to establish competitively neutral rules for enhancing access to advanced telecommunications and information services for all public and nonprofit private elementary and secondary school classrooms and libraries. As the agency accountable for the E-rate program, FCC is responsible under the Government Performance and Results Act of 1993 (Results Act) for establishing the program’s long-term strategic goals and annual goals, measuring its own performance in meeting these goals, and reporting publicly on how well it is doing. For fiscal years 2000 through 2002, FCC’s goals focused on achieving certain percentage levels of Internet connectivity during a given fiscal year for schools, public school instructional classrooms, and libraries. However, the data that FCC used to report on its progress was limited to public schools (thereby excluding two other major groups of beneficiaries—private schools and libraries) and did not isolate the impact of E-rate funding from other sources of funding, such as state and local government. This is a significant measurement problem because, over the years, the demand for internal connections funding by applicants has exceeded the E-rate funds available for this purpose by billions of dollars. Unsuccessful applicants had to rely on other sources of support to meet their internal connection needs. Even with these E-rate funding limitations, there has been significant growth in Internet access for public schools since the program issued its first funding commitments in late 1998. At the time, according to data from the Department of Education’s National Center for Educational Statistics (NCES), 89 percent of all public schools and 51 percent of public school instructional classrooms already had Internet access. 
By 2002, 99 percent of public schools and 92 percent of public school instructional classrooms had Internet access. Yet although billions of dollars in E-rate funds have been committed since 1998, adequate program data was not developed to answer a fundamental performance question: How much of the increase since 1998 in public schools’ Internet access has been a result of the E-rate program, as opposed to other sources of federal, state, local, and private funding? Performance goals and measures are used not only to assess a program’s impact but also to develop strategies for resolving mission-critical management problems. However, management-oriented goals have not been a feature of FCC’s performance plans, despite long-standing concerns about the program’s effectiveness in key areas. For example, two such goals—related to assessing how well the program’s competitive bidding process was working and increasing program participation by low-income and rural school districts and rural libraries—were planned but not carried forward. FCC did not include any E-rate goals for fiscal years 2003 and 2004 in its recent annual performance reports. The failure to measure effectively the program’s impact on public and private schools and libraries over the past 7 years undercuts one of the fundamental purposes of the Results Act: to have federal agencies adopt a fact-based, businesslike framework for program management and accountability. The problem is not just a lack of data for accurately characterizing program results in terms of increasing Internet access. Other basic questions about the E-rate program also become more difficult to address, such as the program’s efficiency and cost-effectiveness in supporting the telecommunications needs of schools and libraries.
For example, a review of the program by OMB in 2003 concluded that there was no way to tell whether the program has resulted in the cost-effective deployment and use of advanced telecommunications services for schools and libraries. OMB also noted that there was little oversight to ensure that the program beneficiaries were using the funding appropriately and effectively. In response to these concerns, FCC staff have been working on developing new performance goals and measures for the E-rate program and plan to finalize them and seek OMB approval in fiscal year 2005. FCC testified before Congress in June 2004 that it relies on three chief components in overseeing the E-rate program: rulemaking proceedings, beneficiary audits, and fact-specific adjudicatory decisions (i.e., appeals decisions). We found weaknesses with FCC’s implementation of each of these mechanisms, limiting the effectiveness of FCC’s oversight of the program and the enforcement of program procedures to guard against waste, fraud, and abuse of E-rate funding. As part of its oversight of the E-rate program, FCC is responsible for establishing new rules and policies for the program or making changes to existing rules, as well as providing the detailed guidance that USAC requires to effectively administer the program. FCC carries out this responsibility through its rulemaking process. FCC’s E-rate rulemakings, however, have often been broadly worded and lacking specificity. Thus, USAC has needed to craft the more detailed administrative procedures necessary to implement the rules. However, in crafting administrative procedures, USAC is strictly prohibited under FCC rules from making policy, interpreting unclear provisions of the statute or rules, or interpreting the intent of Congress. We were told by FCC and USAC officials that USAC does not put procedures in place without some level of FCC approval. 
We were also told that this approval is sometimes informal, such as e-mail exchanges or telephone conversations between FCC and USAC staff. This approval can come in more formal ways as well, such as when the commission expressly endorses USAC operating procedures in commission orders or codifies USAC procedures into FCC’s rules. However, two problems have arisen with USAC administrative procedures. First, although USAC is prohibited under FCC rules from making policy, some USAC procedures deal with more than just ministerial details and arguably rise to the level of policy decisions. For example, in June 2004, USAC was able to identify at least a dozen administrative procedures that, if violated by the applicant, would lead to complete or partial denial of the funding request even though there was no precisely corresponding FCC rule. The critical nature of USAC’s administrative procedures is further illustrated by FCC’s repeated codification of them throughout the history of the program. FCC’s codification of USAC procedures—after those procedures have been put in place and applied to program participants—raises concerns about whether these procedures are more than ministerial and are, in fact, policy changes that should be coming from FCC in the first place. Moreover, in its August 2004 order (in a section dealing with the resolution of audit findings), the commission directs USAC to annually “identify any USAC administrative procedures that should be codified in our rules to facilitate program oversight.” This process prompts the question of which entity is really establishing the rules of the E-rate program and raises concerns about the depth of involvement by FCC staff with the management of the program. Second, even though USAC procedures are issued with some degree of FCC approval, enforcement problems could arise when audits uncover violations of USAC procedures by beneficiaries or service providers.
The FCC IG has expressed concern over situations where USAC administrative procedures have not been formally codified because commission staff have stated that, in such situations, there is generally no legal basis to recover funds from applicants that failed to comply with the USAC procedures. In its August 2004 order, the commission attempted to clarify the rules of the program with relation to recovery of funds. However, even under the August 2004 order, the commission did not clearly address the treatment of beneficiaries who violate a USAC administrative procedure that has not been codified. FCC’s use of beneficiary audits as an oversight mechanism has also had weaknesses, although FCC and USAC are now working to address some of these weaknesses. Since 2000, there have been 122 beneficiary audits conducted by outside firms, 57 by USAC staff, and 14 by the FCC IG (2 of which were performed under agreement with the Inspector General of the Department of the Interior). Beneficiary audits are the most robust mechanism available to the commission in the oversight of the E-rate program, yet FCC generally has been slow to respond to audit findings and has not made full use of the audit findings as a means to understand and resolve problems within the program. First, audit findings can indicate that a beneficiary or service provider has violated existing E-rate program rules. In these cases, USAC or FCC can seek recovery of E-rate funds, if justified. In the FCC IG’s May 2004 Semiannual Report, however, the IG observes that audit findings are not being addressed in a timely manner and that, as a result, timely action is not being taken to recover inappropriately disbursed funds. 
The IG notes that in some cases the delay is caused by USAC and in other cases occurs because USAC is not receiving timely guidance from the commission (USAC must seek guidance from the commission when an audit finding is not a clear violation of an FCC rule or when policy questions are raised). Regardless, the recovery of inappropriately disbursed funds is important to the integrity of the program and needs to occur in a timely fashion. Second, under GAO’s Standards for Internal Control in the Federal Government, agencies are responsible for promptly reviewing and evaluating findings from audits, including taking action to correct a deficiency or taking advantage of the opportunity for improvement. Thus, if an audit shows a problem but no actual rule violation, FCC should be examining why the problem arose and determining if a rule change is needed to address the problem (or perhaps simply addressing the problem through a clarification to applicant instructions or forms). FCC has been slow, however, to use audit findings to make programmatic changes. For example, several important audit findings from the 1998 program year were only recently resolved by an FCC rulemaking in August 2004. In its August 2004 order, the commission concluded that a standardized, uniform process for resolving audit findings was necessary, and directed USAC to submit to FCC a proposal for resolving audit findings. FCC also instructed USAC to specify deadlines in its proposal “to ensure audit findings are resolved in a timely manner.” USAC submitted its Proposed Audit Resolution Plan to FCC on October 28, 2004. The plan memorializes much of the current audit process and provides deadlines for the various stages of the audit process. FCC released the proposed audit plan for public comment in December 2004.
In addition to the Proposed Audit Resolution Plan, the commission instructed USAC to submit a report to FCC on a semiannual basis summarizing the status of all outstanding audit findings. The commission also stated that it expects USAC to identify for commission consideration on at least an annual basis all audit findings raising management concerns that are not addressed by existing FCC rules. Lastly, the commission took the unusual step of providing a limited delegation to the Wireline Competition Bureau (the bureau within FCC with the greatest share of the responsibility for managing the E-rate program) to address audit findings and to act on requests for waiver of rules warranting recovery of funds. These actions could help ensure, on a prospective basis, that audit findings are more thoroughly and quickly addressed. However, much still depends on timely action being taken by FCC, particularly if audit findings suggest the need for a rulemaking. In addition to problems with responding to audit findings, the audits conducted to date have been of limited use because neither FCC nor USAC have conducted an audit effort using a statistical approach that would allow them to project the audit results to all E-rate beneficiaries. Thus, at present, no one involved with the E-rate program has a basis for making a definitive statement about the amount of waste, fraud, and abuse in the program. Of the various groups of beneficiary audits conducted to date, all were of insufficient size and design to analyze the amount of fraud or waste in the program or the number of times that any particular problem might be occurring programwide. At the time we concluded our review, FCC and USAC were in the process of soliciting and reviewing responses to a Request for Proposal for audit services to conduct additional beneficiary audits. 
Under FCC’s rules, program participants can seek review of USAC’s decisions, although FCC’s appeals process for the E-rate program has been slow in some cases. Because appeals decisions are used as precedent, this slowness adds uncertainty to the program and affects beneficiaries. FCC rules state that FCC is to decide appeals within 90 days, although FCC can extend this period. At the time of our review there was a substantial appeals backlog at FCC (i.e., appeals pending for longer than 90 days). Out of 1,865 appeals to FCC from 1998 through the end of 2004, approximately 527 appeals remain undecided, of which 458 (25 percent of all appeals) are backlog appeals. We were told by FCC officials that some of the backlog is due to staffing issues. FCC officials said they do not have enough staff to handle appeals in a timely manner. FCC officials also noted that there has been frequent staff turnover within the E-rate program, which adds some delay to appeals decisions because new staff necessarily take time to learn about the program and the issues. Additionally, we were told that another factor contributing to the backlog is that the appeals have become more complicated as the program has matured. Lastly, some appeals may be tied up if the issue is currently in the rulemaking process. The appeals backlog is of particular concern given that the E-rate program is a technology program. An applicant who appeals a funding denial and works through the process to achieve a reversal and funding two years later might have ultimately won funding for outdated technology. FCC officials told us that they are working to resolve all backlogged E-rate appeals by the end of calendar year 2005. In summary, Mr. Chairman, we remain concerned that FCC has not done enough to proactively manage and provide a framework of government accountability for the multibillion-dollar E-rate program.
Lack of clarity about what accountability standards apply to the program causes confusion among program participants and can lead to situations where funding commitments are interrupted pending decisions about applicable law, such as happened with the Antideficiency Act in the fall of 2004. Ineffective performance goals and measures make it difficult to assess the program’s effectiveness and chart its future course. Weaknesses in oversight and enforcement can lead to misuse of E-rate funding by program participants that, in turn, deprives other schools and libraries whose requests for support were denied due to funding limitations. To address these management and oversight problems identified in our review of the E-rate program, our report recommends that the Chairman of FCC direct commission staff to (1) conduct and document a comprehensive assessment to determine whether all necessary government accountability requirements, policies, and practices have been applied and are fully in place to protect the E-rate program and universal service funding; (2) establish meaningful performance goals and measures for the E-rate program; and (3) develop a strategy for reducing the E-rate program’s appeals backlog, including ensuring that adequate staffing resources are devoted to E-rate appeals. We conducted our work from December 2003 through December 2004 in accordance with generally accepted government auditing standards. We interviewed officials from FCC’s Wireline Competition Bureau, Enforcement Bureau, Office of General Counsel, Office of Managing Director, Office of Strategic Planning and Policy Analysis, and Office of Inspector General. We also interviewed officials from USAC. In addition, we interviewed officials from OMB and the Department of Education regarding performance goals and measures. OMB had conducted its own assessment of the E-rate program in 2003, which we also discussed with OMB officials. 
We reviewed and analyzed FCC, USAC, and OMB documents related to the management and oversight of the E-rate program. The information we gathered was sufficiently reliable for the purposes of our review. See our full report for a more detailed explanation of our scope and methodology. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions that you or other Members of the Subcommittee may have. For further information about this testimony, please contact me at (202) 512-2834. Edda Emmanuelli-Perez, John Finedore, Faye Morrison, and Mindi Weisenbloom also made key contributions to this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Since 1998, the Federal Communications Commission's (FCC) E-rate program has committed more than $13 billion to help schools and libraries acquire Internet and telecommunications services. Recently, allegations of fraud, waste, and abuse by some E-rate program participants have come to light. As steward of the program, FCC must ensure that participants use E-rate funds appropriately and that there is managerial and financial accountability surrounding the funds. This testimony is based on GAO's February 2005 report GAO-05-151, which reviewed (1) the effect of the current structure of the E-rate program on FCC's management of the program, (2) FCC's development and use of E-rate performance goals and measures, and (3) the effectiveness of FCC's program oversight mechanisms. FCC established the E-rate program using an organizational structure unusual to the government without conducting a comprehensive assessment to determine which federal requirements, policies, and practices apply to it. The E-rate program is administered by a private, not-for-profit corporation with no contract or memorandum of understanding with FCC, and program funds are maintained outside of the U.S. Treasury, raising issues related to the collection, deposit, obligation, and disbursement of the funding. While FCC recently concluded that the Universal Service Fund constitutes an appropriation and is subject to the Antideficiency Act, this raises further issues concerning the applicability of other fiscal control and accountability statutes. These issues need to be explored and resolved comprehensively to ensure that appropriate governmental accountability standards are fully in place to help protect the program and the fund from fraud, waste, and abuse. FCC has not developed useful performance goals and measures for assessing and managing the E-rate program.
The goals established for fiscal years 2000 through 2002 focused on the percentage of public schools connected to the Internet, but the data used to measure performance did not isolate the impact of E-rate funding from other sources of funding, such as state and local government. A key unanswered question, therefore, is the extent to which increases in connectivity can be attributed to E-rate. In addition, goals for improving E-rate program management have not been a feature of FCC's performance plans. In its 2003 assessment of the program, OMB noted that FCC discontinued E-rate performance measures after fiscal year 2002 and concluded that there was no way to tell whether the program has resulted in the cost-effective deployment and use of advanced telecommunications services for schools and libraries. In response to OMB's concerns, FCC is currently working on developing new E-rate goals. FCC's oversight mechanisms contain weaknesses that limit FCC's management of the program and its ability to understand the scope of any fraud, waste, and abuse within the program. According to FCC officials, oversight of the program is primarily handled through agency rulemaking procedures, beneficiary audits, and appeals decisions. FCC's rulemakings have often lacked specificity and led to a distinction between FCC's rules and the procedures put in place by the program administrator--a distinction that has affected the recovery of funds for program violations. While audits of E-rate beneficiaries have been conducted, FCC has been slow to respond to audit findings and make full use of them to strengthen the program. In addition, the small number of audits completed to date do not provide a basis for accurately assessing the level of fraud, waste, and abuse occurring in the program, although the program administrator is working to address this issue. 
According to FCC officials, there is also a substantial backlog of E-rate appeals due in part to a shortage of staff and staff turnover. Because appeal decisions establish precedent, this slowness adds uncertainty to the program.
Between fiscal years 1998 and 2002, HUD administered a total of 21 technical assistance programs, most of which are associated with programs in its offices of Community Planning and Development and Public and Indian Housing. The other three offices that administer technical assistance programs are the offices of Housing, Fair Housing and Equal Opportunity, and Healthy Homes and Lead Hazard Control. Table 1 lists the 21 technical assistance programs, by program office, and their budgets. As shown in Figure 1, from fiscal year 1998 through fiscal year 2002, the annual funding for all of HUD’s technical assistance programs ranged from $128 million to $201 million. These sums accounted for less than 1 percent of HUD’s overall budget, which averaged about $28 billion in each of those years. Technical assistance funds fluctuated each year because the funds for specific technical assistance programs increased or decreased or because technical assistance programs were introduced or discontinued in any given year. For example, technical assistance funding increased by 43 percent from fiscal year 1998 to fiscal year 1999. During this time, the technical assistance funds (1) increased from $9 million to $17 million for the Office of Troubled Agency Recovery, (2) were initiated in 1999 with $11 million for Resident Opportunities and Self-Sufficiency, and (3) increased from $18 million to $25 million for section 4 capacity building under the Community Development Block Grant program. From fiscal year 2001 to fiscal year 2002 (estimated), technical assistance funding fell by about 10 percent, primarily because the Lead-Based Paint Hazard Reduction funds were reduced from $22 million to $5 million, the HOME funds were reduced from $22 million to $12 million, the HOPE VI funds were reduced from $10 million to $6.3 million, and the Drug Elimination Grant Program and its technical assistance funds were abolished.
Figure 2 illustrates the breakdown of the cumulative technical assistance funding from fiscal year 1998 through fiscal year 2002 by program office. Not surprisingly, the two offices that administer the largest number of programs have the largest share of the overall technical assistance budget. While the overriding purpose of technical assistance is to improve the ability of program participants to administer HUD’s programs more effectively, each HUD program office determines its own approach and administers technical assistance according to its program needs. Table 2 describes the purpose of the technical assistance as defined by the five HUD program offices. HUD provides appropriated funds both for its primary programs and for related technical assistance programs. It distributes the program funds to program participants such as state and local governments and other participating organizations, and it awards the technical assistance funds to providers, which use the money to deliver technical assistance to recipients. Figure 3 illustrates this process. The recipients of HUD’s technical assistance are generally those entities or organizations that administer HUD’s programs. They also vary by program and include state and local governments, public and Indian housing agencies, tenants of federally subsidized housing, and property owners receiving federal housing subsidies. The providers of technical assistance can be HUD officials but typically are entities or organizations that receive funding from HUD to deliver such assistance. Providers, which also vary by program, include community-based, for-profit, and nonprofit organizations; public and Indian housing agencies; housing finance agencies; and resident service organizations. We visited with technical assistance providers in selected locations across the country to observe the various methods used by each of the five program offices to deliver technical assistance to recipients.
In the following examples, each case details the recipients, providers, and purpose of the technical assistance provided. The recipients of the Office of Community Planning and Development’s technical assistance are local nonprofit organizations, state and local governments, and other organizations participating in and receiving funds through HUD’s community development programs. The providers of these technical assistance programs are for-profit and nonprofit organizations and government agencies that have demonstrated expertise in providing the guidance and training that program participants can use. For 2 days, we observed a technical assistance provider for the HOME program work with two community housing development organizations in Arkansas. The purpose of the technical assistance was to help the organizations plan for and improve their procedures for developing low-income rural housing. Over the 2 days, the technical assistance provider evaluated the housing built by the community development organizations with HOME program funds and advised them on HUD-mandated procedures for counseling prospective low-income home buyers. The recipients of technical assistance provided through the Office of Public and Indian Housing’s Resident Opportunities and Self-Sufficiency Program’s capacity building funds are associations of public housing residents that HUD has determined lack the capacity to administer welfare-to-work programs or conduct management activities. The providers of the technical assistance are resident and other nonprofit organizations. We observed a 1-day conference conducted by a Massachusetts statewide public housing tenant organization in conjunction with several other organizations. The training was designed to increase the knowledge and build the capacity of public housing agencies, their residents, and state and local officials involved in planning and rulemaking. 
Topics included income recertification, methods of influencing housing legislation, public housing safety and security, and private-market housing initiatives. A Boston HUD employee served as a panel member during one of the training sessions. The recipients of the Office of Fair Housing and Equal Opportunity’s technical assistance include state and local fair housing enforcement agencies, public and private nonprofit fair housing agencies, and other groups that are working to prevent and eliminate discriminatory housing practices. According to an official from the Office of Fair Housing and Equal Opportunity, providers of technical assistance are HUD staff and qualified, established fair housing enforcement agencies. We observed a Fair Housing employee in HUD’s San Francisco regional office provide technical assistance training to 10 employees of California’s Department of Fair Employment and Housing. The objective was to help the state agency process fair housing complaints more effectively, and the topics included tips on investigating fair housing complaints, theories of discrimination, and case conciliation and evidence. The recipients of technical assistance provided through the Office of Housing’s Outreach and Technical Assistance Grants are tenants living in federally subsidized properties affected by mortgage restructuring through the Mark-to-Market program. The providers of technical assistance are small or large community-based organizations that focus on improving tenants’ ability to understand the restructuring of their Section 8 property. In Columbus, Ohio, we observed a meeting between the potential new owners of a HUD property scheduled to undergo financial restructuring and two organizations representing the tenants who live there. The purpose of the meeting, coordinated by a technical assistance provider, was to give tenants a role in the restructuring process and to keep them apprised of potential changes to their building.
Topics discussed included rent stabilization, building renovations, security systems, and modifications for handicapped accessibility. The recipients of technical assistance provided through the Office of Healthy Homes and Lead Hazard Control’s Technical Studies Programs include state, local, and tribal governments; private property owners; and individuals who are maintenance and renovation workers. The providers of technical assistance include academic and nonprofit organizations, state and local governments, and federally recognized Indian tribes. We observed a technical assistance provider conduct mandatory classroom training for about 50 owners and workers of federally subsidized properties at a Philadelphia housing authority maintenance facility. The recipients hoped to become certified to remove lead-based paint hazards from their properties by learning safe work practices at the training. The course covered such topics as lead exposure and maintenance work, lead safety, and quality assurance. HUD selects technical assistance providers both competitively and noncompetitively. Seventeen of the 21 technical assistance programs used a competitive selection process. Because Congress specifies the organizations to provide the technical assistance under three of Community Planning and Development’s Block Grant Programs, HUD distributes the funds for those programs noncompetitively. The fourth noncompetitive program, the Fair Housing Assistance program, is noncompetitive because the funds are distributed through a formula grant to all eligible state and local fair housing enforcement agencies. The process for obtaining an award also varies by funding instrument. HUD has a set policy explaining the procedures and protocols for using the various funding instruments (contracts, grants, and cooperative agreements). When HUD selects technical assistance providers competitively, it awards funding through contracts, grant agreements, and cooperative agreements. 
HUD refers to all three award mechanisms as funding instruments. A contract is used when the principal purpose of the award is the acquisition by purchase, lease, or barter of property or services for the direct benefit of the government. According to the Director of the Office of Departmental Grants Management and Oversight, contracts are the award instrument that gives HUD the most control because HUD simply directs the contractor to do a specific task. For example, a program official in the Office of Native American Programs told us that her office retains decision-making authority by issuing contracts that enable her to control the technical assistance providers’ use of funds and outreach to recipients. A grant agreement is used when the principal purpose of the relationship between the awardee and HUD is the transfer of money or property for a public purpose and substantial federal involvement is not anticipated. A cooperative agreement serves a purpose similar to a grant agreement’s but is generally used when the awarding agency anticipates the need for close federal involvement over the life of the award. The cooperative agreement stipulates the nature, character, and extent of the anticipated involvement. A HUD official told us that a cooperative agreement generally gives HUD less control than a contract, but more control than a grant agreement. HUD’s Office of Departmental Grants Management and Oversight provides basic guidelines on when to use a contract, grant, or cooperative agreement. According to HUD, a program office, when selecting the appropriate funding instrument to be used, should first look to the program’s authorizing legislation for authority to enter into a contract or other type of arrangement. Noncompetitive awards are specified by statute or based on a formula.
Specifically, Congress appropriates technical assistance funds noncompetitively for the Local Initiative Support Corporation, the Enterprise Foundation, Habitat for Humanity, Youthbuild USA, and the Housing Assistance Council under the Community Development Block Grant (CDBG) program, administered by HUD’s Office of Community Planning and Development. Congress also appropriates noncompetitive funding for National American Indian Housing Council technical assistance programs, administered by the Office of Public and Indian Housing. In addition, HUD’s Office of Fair Housing and Equal Opportunity uses a formula to distribute Fair Housing Assistance technical assistance funds. These noncompetitive technical assistance programs totaled $50.1 million in fiscal year 2001, about 25 percent of the technical assistance funding for that year, and about $54.5 million, or 30 percent, of the fiscal year 2002 technical assistance funding. Prospective technical assistance providers respond either to a HUD request for a proposal for a contract or to a Notice of Funding Availability (NOFA) for a grant or cooperative agreement. In practice, HUD has issued the funding notices for the majority of its grants and cooperative agreements, including its technical assistance funding, in a single notice called the SuperNOFA (Super Notice of Funding Availability). Applicants submit contract proposals or funding applications to HUD staff who make recommendations to each program office’s selecting officials. These officials then make the final selections and announce the awards. Contract proposals are managed through HUD headquarters or designated contracting offices, while applications for grants or cooperative agreements for some technical assistance programs are submitted to both headquarters and the field office in which the applicant is seeking to provide services. Any award, regardless of the type of funding instrument, has a fixed performance period.
The contract request for proposal or NOFA will stipulate the proposed period of performance and indicate whether additional funding can be provided beyond the period of performance without further competition. The five offices that administer technical assistance have basic oversight procedures in place. Such procedures usually include monitoring the technical assistance provider’s performance by reviewing payment requests and financial reports, and providing a written evaluation of the technical assistance provider’s performance. Most program offices require technical assistance providers to submit quarterly, annual, or close-out reports, or a combination of these reports, on the status of their technical assistance programs, which are to be reviewed by HUD program staff. Headquarters or field office staff may be directly responsible for oversight, depending on which office administers the technical assistance, though headquarters offices are ultimately responsible for ensuring that appropriate oversight is conducted. HUD does not offer any central guidance on, or require its program offices to directly measure, the impact or outcomes of the technical assistance programs they administer. The Government Performance and Results Act of 1993 (GPRA) requires that program officials develop performance measures and track performance relative to the goals in their strategic and annual plans. However, according to the Director of HUD’s Office of Departmental Operations and Coordination, this requirement does not apply to the related technical assistance programs. In his view, if the technical assistance supports the program and the program is doing well, then the technical assistance is having a positive impact. However, GPRA emphasizes the importance of establishing objective and quantifiable measures at each organizational level that can be linked to the overall agency program goals. 
Without specific measures on the impact of its technical assistance, HUD cannot demonstrate the incremental value of the assistance. The Director of the Office of Departmental Grants Management and Oversight told us that HUD is not planning any initiatives to coordinate how program offices are measuring the impact of their technical assistance programs. An official from the Massachusetts State Office of Community Planning and Development told us that without this guidance, it is unclear how the impact of these services should be measured. We found a wide range of HUD processes for measuring the impact of technical assistance, ranging from CPD’s section 4 capacity building organizations, which document detailed evaluations of their accomplishments; to CPD’s Rural Housing and Economic Development program, which collects annual outcome data; to Public and Indian Housing’s Resident Opportunity Self Sufficiency Program, which has no established process and measures performance on a grant-by-grant basis. While some program officials have said that it is difficult or not even possible to measure the impact of technical assistance, other program offices have impact measures in place. A Public and Indian Housing (PIH) field official from the Office of Native American Programs told us that he has seen nationwide training courses that he believes are inefficient and expensive. While he believes that local one-on-one training would be more productive, he does not believe he could measure whether attendees are retaining the information received or whether one-on-one training would be more effective. By contrast, a PIH official said that the office conducts evaluations after the technical assistance for drug elimination is provided and then follows up with another evaluation in 6 months to measure recipients’ retention of information.
We also spoke with a technical assistance provider who administers multiple questionnaires to measure recipients’ retention of material taught at homeless training programs. Similarly, Chicago CPD staff reported that they measure the success of technical assistance programs aimed at teaching local groups how to apply for federal grants by the number of grantees that submit proper paperwork.
This testimony discusses the results of GAO's review of the Department of Housing and Urban Development's (HUD) technical assistance and capacity-building programs. Technical assistance programs can be defined as training designed to improve the performance or management of program recipients, such as one-on-one training for housing authority staff on procurement regulations. Capacity building can be generally defined as funding to strengthen the capacity or capability of program recipients or providers--typically housing or community development organizations--thereby building the institutional knowledge within those organizations. The overall goal of both technical assistance and capacity building is to enhance the delivery of HUD's housing and community development programs. HUD administers 21 technical assistance programs through five program offices. From fiscal year 1998 through fiscal year 2002, the annual funding for HUD technical assistance ranged between $128 million and $201 million, accounting for less than 1 percent of HUD's overall budget each year. Although the general purpose of HUD's technical assistance is to help program participants carry out HUD program goals, each program office designs technical assistance specifically related to its programs. Recipients could be states and units of local government, public or Indian housing agencies, private and nonprofit organizations, or individuals. Providers could be HUD officials or, more commonly, state or local governments, for-profit and nonprofit organizations, or public housing agencies. HUD awards funding for 17 of the 21 technical assistance programs competitively. The funding for the remaining programs is awarded noncompetitively. HUD uses three types of funding instruments and determines which type to use on the basis of its relationship with the awardee and the level of federal involvement anticipated.
All five HUD program offices perform basic oversight of the technical assistance they administer, such as visually observing the technical assistance or reviewing reports submitted by the providers to ensure that the technical assistance was provided. In addition, some program offices have impact measures in place, but HUD does not centrally measure the impact or outcomes of technical assistance and does not offer any central guidance on how the program offices should measure its impact.
The District’s prekindergarten through grade 12 school system is composed of 128 public schools with enrollment for the 2008-2009 school year around 45,200. Historically, DCPS has had several problems that have interfered with the education of its students. One primary problem was the dysfunction of the central office. For example, textbooks were not delivered on time or at all, parents complained about the lack of responsiveness of the central office, and teachers were not always paid on time. In addition, data systems were obsolete and inundated with errors, making it difficult to access basic information, such as the number of students enrolled at a school and student attendance rates. Such problems persisted in the D.C. public school system for several years despite numerous efforts to address them. In 1989, a report by the D.C. Committee on Public Education noted declining achievement levels as students moved through grades, the poor condition of the school system’s physical facilities, and the lack of accountability among D.C. agencies for the schools. Recent reports have continued to cite these problems. In 2004, the Council of the Great City Schools reviewed the D.C. school system and cited the continued failure to improve student achievement. Efforts to improve the District’s schools often included new leadership to head the troubled school system. Over the last 20 years, DCPS has employed more than seven superintendents with an average tenure of 2.9 years. Such frequent changes in leadership may have further complicated efforts to improve student achievement, as each leader may have brought a different cadre of initiatives and goals that were not fully developed or implemented amid the constant changes in leadership. In 2006, an analysis of the school system’s reform efforts by a consulting firm found no progress in student achievement and recommended a change in governance to improve student achievement and system-wide accountability.
In response to the problems facing the District’s public school system, the D.C. Council (the legislative branch of the D.C. government) approved the 2007 Reform Act, which significantly altered the governance of the D.C. public schools. The Reform Act transferred the day-to-day management of the public schools from the Board of Education to the Mayor and placed DCPS under the Mayor’s office as a cabinet-level agency. Prior to the Reform Act, the head of D.C. public schools was selected by and reported to the Board of Education. The Reform Act also moved the state functions into a new state superintendent’s office, established a separate facilities office, and created the D.C. Department of Education headed by the Deputy Mayor for Education. The Deputy Mayor’s Office and the state superintendent’s office are also cabinet-level offices in the D.C. government structure. Although the District of Columbia is not a state, its Office of the State Superintendent of Education serves as the District’s state education agency. Prior to the Reform Act, state functions and local functions were conducted in one office, which led to problems with oversight and monitoring. Further, the District was and continues to be on the U.S. Department of Education’s (Education) high-risk list for its management of federal education grants. The Reform Act addressed such issues by clearly separating the two entities. Along with managing, distributing, and monitoring the use of federal funds across DCPS and the public charter schools, the office of the state superintendent has a significant policy role. For example, the state superintendent’s office works collaboratively with the State Board of Education to set standards of what students should learn in all the District’s public schools.
In addition, in carrying out NCLBA, the state superintendent’s office is responsible for the state-wide assessment, or standardized test, that measures students’ progress in attaining proficiency and sets annual proficiency targets. The state superintendent’s office also delineates requirements for teacher licensure and, within the guidelines provided by NCLBA, determines the District’s definition of “highly qualified teachers.” In addition to these policy functions, the state superintendent’s office also provides support to D.C. public schools and the public charter schools. For example, the office can offer training and technical assistance on a variety of topics, such as the appropriate use and tracking of federal education funds. In January 2002, Congress passed NCLBA which requires states to focus on increased expectations for academic performance and accountability. Under NCLBA, states are required to establish performance goals and hold schools that receive federal funds under Title I of NCLBA accountable for student performance by determining whether or not they have made adequate yearly progress (AYP). The failure to make AYP, or meet academic targets, for 2 or more consecutive years leads to specific actions that schools must take to improve student academic achievement. These actions, such as developing a school improvement plan or extending the school day, are more intensive the longer the school fails to meet academic targets. After 5 or more consecutive years of failing to meet academic targets, a school must make plans to restructure its governance and implement those plans the subsequent year. 
NCLBA specifies five options for restructuring schools: reopening as a charter school, replacing all or most of the school staff relevant to the failure to make AYP, contracting with another organization to run the school, turning the operation of the school over to the state, or undertaking another action that would result in restructuring the school’s governance. NCLBA also establishes a federal requirement for teacher quality. It requires that teachers across the nation be “highly qualified” in every core subject they teach by the end of the 2005-2006 school year. In general, NCLBA requires that teachers have a bachelor’s degree, have state certification, and demonstrate subject area knowledge for every core subject they teach. States also have flexibility to set the requirements that teachers need to meet to demonstrate that they are highly qualified. In March 2008, the state superintendent’s office and the D.C. State Board of Education revised the District’s highly qualified teacher definition to better align it with NCLBA’s definition and allow more teachers to be considered highly qualified. Officials from the state superintendent’s office contend that the District’s previous highly qualified definition was more stringent than federal standards and disqualified good teachers from joining the D.C. public school system. The Recovery Act was enacted in February 2009 to promote economic recovery, make investments, and minimize and avoid reductions in state and local government services. About $100 billion of the $787 billion funds included in the Recovery Act are targeted to support education at the state and local level. Some of the Recovery Act funds support existing programs, such as Title I of the Elementary and Secondary Education Act, as amended by NCLBA, and parts of the Individuals with Disabilities Education Act. 
In addition, the new State Fiscal Stabilization Fund provides funds to restore state support for elementary and secondary education, public higher education, and early childhood education programs and services. The District will receive an estimated $148 million of Recovery Act funds to support its education programs. The current teacher compensation system used by most school districts in the United States dates back to the 1920s and pays teachers based on their level of education and years of experience. However, many school districts have begun to experiment with alternative methods of compensation that reward teachers on certain elements of performance, such as improving student achievement, filling hard-to-staff positions, and taking on additional responsibilities. Some school districts offer bonuses for all staff or all teachers at schools who have met certain criteria (usually including an increase in student achievement). Other school districts offer differentiated pay to teachers based on characteristics other than education and years of experience. For example, the Denver Public School District has implemented a teacher compensation plan that allows multiple pathways to compensation bonuses. Bonuses can be based on professional evaluations using a standards-based system, progress toward objectives as agreed upon by teachers and their principal, and growth in student achievement on the Colorado Student Assessment Program. Teachers may receive additional incentives for filling hard-to-staff positions. The Denver plan is funded through a tax levy, federal grants, and private funding. National teachers’ unions approve of some types of differentiated or incentive pay. 
Specifically, the American Federation of Teachers, which is the parent union of the Washington Teachers’ Union, has taken the position that teacher compensation plans could include financial incentives to teachers who acquire additional knowledge and skills or agree to teach in low-performing and hard-to-staff schools. In addition, the American Federation of Teachers supports incentive pay for school-wide improvement. During the first 2 years of its reform efforts, DCPS implemented several classroom-based initiatives to improve students’ basic skills in core subjects and implemented a new staffing model designed to give all students access to art, music, and physical education classes. In addition, as required by NCLBA, DCPS restructured 22 schools before the fall of 2008, after the schools failed to meet academic targets for 6 consecutive years. Restructuring will be ongoing as the vast majority of DCPS schools are in some form of school improvement status under NCLBA. In addition, DCPS and the state superintendent’s office are planning and developing new ways to use data to monitor student achievement and school performance. DCPS is refocusing or revising its approach to many of these initiatives as it continues to implement them. During the first 2 years of reform, DCPS quickly implemented various initiatives intended to improve student achievement. For example, to improve students’ basic skills and standardized test scores in reading and math, DCPS introduced targeted interventions for students struggling in math and reading and provided additional instruction and practice to improve students’ responses to open-ended questions, including test questions. DCPS also introduced Saturday classes primarily targeted to students in grades 3 through 12 who were on the cusp of meeting academic targets on standardized tests. It also introduced initiatives designed to address student motivation and behavior. 
For example, DCPS piloted the Capital Gains program with the specific goals of improving student engagement, and ultimately student learning, by offering financial incentives to students for attendance, academic performance, and other positive behaviors. Table 1 provides a list of DCPS’s major initiatives to improve student outcomes, as well as descriptions and the status of these initiatives. Recently, the Chancellor acknowledged that DCPS, in its effort to remedy the range of issues that plagued the District’s public schools, may have launched too many initiatives at once. The Chancellor noted that some schools may have lacked the capacity to implement so many programs effectively. In particular, some schools were undergoing significant organizational changes that may have affected their ability to implement these new academic initiatives. To support such schools, DCPS is considering offering a choice of programs for schools and allowing the principals to determine which programs best suit their schools’ needs and capacity. DCPS does not yet know how successful these programs have been in improving student achievement. While DCPS students achieved gains on the 2008 state-wide test, increasing between 8 and 11 percentage points in math and reading for both elementary and secondary levels, it is unclear whether these gains can be attributed to the current reform efforts or to prior efforts. While DCPS officials told us that it is generally difficult to isolate and quantify the impact of any single program on student achievement, they were able to review an analysis of reading scores conducted by the vendor of one of its early reading programs. The vendor’s analysis showed that on some tests DCPS students who participated in the reading program generally scored higher than those who did not. 
Further, DCPS officials told us they plan to analyze, in late summer of 2009, student outcomes, including state-wide test scores, to assess the effectiveness of various interventions. In addition, DCPS officials told us the success of the math and reading initiatives depended in part on how well teachers implemented them in the classroom. They also noted that there were varying levels of teacher quality and knowledge of effective teaching practices, and that it was difficult to determine the extent to which teachers implemented the programs effectively. While DCPS had not defined “effective” teaching prior to the rollout of the above initiatives, officials told us that moving forward, they will focus on practicing effective teaching, as opposed to implementing various disparate programs. DCPS is developing a framework that is intended to help teachers understand its priorities, including what students are expected to learn for each subject, how to prepare lessons, and which effective teaching methods to use. According to DCPS officials, this framework will be aligned to teacher evaluations. DCPS plans to implement this framework by the beginning of the 2009-2010 school year. In an effort to ensure that all students would have access to certain subjects and supports, DCPS changed the way it allocated teachers across its schools for the 2008-2009 school year. This new staffing model was intended to provide all schools with a core of teachers, including art, music, and physical education teachers, as well as social workers. It also was intended to provide all schools with reading coaches who work with teachers to improve reading instruction. Prior to this change, DCPS allocated funding to schools using a weighted student formula, which distributed funds to schools on a per pupil basis, so that the greater the enrollment of a school, the greater the amount allocated to that school. 
Principals then chose how to staff the school based on the amount of funding available, staffing requirements, and their perception of the school’s needs. Consequently, some schools—especially smaller schools—did not have the student enrollment to support programs, such as music and art, and other schools that had the funds to support those programs opted not to do so. While the new staffing model ensures a core staff at all schools regardless of enrollment, DCPS allowed principals to request changes based on their school’s needs. However, DCPS lacked a transparent process for making changes to the staffing allocation. In particular, DCPS did not establish or communicate clear guidance or criteria on how such requests would be treated. Further, DCPS granted or denied requests for changes to the original staffing allocation on a school-by-school basis, and it is unclear whether similar requests were treated in a consistent manner. A more transparent process, one that made public its rationale for decisions, would have helped assure stakeholders, including the D.C. Council, that changes to staffing allocations were made consistently and fairly. The D.C. Council and several community groups have criticized the process for its lack of transparency and questioned the fairness of the decisions made. For example, one independent analysis concluded that some schools received less per pupil funding than others with similar student populations. In addition, DCPS officials told us that in some cases, the changes to the original staffing model resulted in schools being granted allocations beyond their budgeted amounts. DCPS revamped its approach for the staffing model for the 2009-2010 school year to address some of these challenges. For example, it established guidance about what changes it will allow principals to make to the staffing model and disseminated this guidance to school leaders at the beginning of the budgeting process. 
According to DCPS, the new guidance is expected to reduce the number of changes that principals request later in the process. During the summer of 2008, DCPS closed 23 schools primarily due to low student enrollment. Students from the closed schools, about 5,000 students according to DCPS, enrolled in 1 of 26 schools, referred to as receiving schools. DCPS updated facilities at these receiving schools to accommodate the influx of students from the newly closed schools. In addition, to assist these students and schools with the transition that this reorganization created, DCPS offered a more comprehensive version of its staffing model. In addition to the core staff of the standard staffing model, DCPS allocated additional staff, such as school psychologists and math coaches, to the receiving schools. During the consolidation effort, DCPS also created several prekindergarten through grade 8 schools in some cases where elementary schools were underenrolled. In addition, according to DCPS, these prekindergarten through grade 8 schools were intended to create a smoother transition to middle school and reduce the number of elementary schools with different grade levels preparing students for the same middle or junior high school. By closing the 23 underenrolled schools, DCPS estimates it was able to redirect $15 million from administrative and facility costs to support these additional staff. The eight principals we interviewed at receiving schools provided mixed reports about the adequacy of their staffing allocations. Three principals reported having adequate staff, and two others cited only minor issues; the remaining three cited issues such as teacher skill levels, teacher vacancies, and inadequate training to accommodate an influx of special education students. 
In addition, as required by NCLBA, DCPS restructured 22 of its lowest-performing schools for the 2008-2009 school year after the schools failed to meet academic targets for 6 consecutive years. NCLBA specifies five options for restructuring schools, including replacing selected staff or contracting with another organization or company to run the school (table 2 lists the various NCLBA options and the options DCPS selected for the 2008-2009 school year). At 18 of the 22 schools in restructuring, DCPS replaced the school staff—principals, teachers, and/or administrative support staff—who were deemed relevant to the failure to meet academic targets. For the remaining schools in restructuring, DCPS elected to contract with other organizations or undertake other actions, such as adding more intensive school-level services to support students and families. Restructuring underperforming schools will likely be an ongoing initiative for DCPS, as 89 of its 118 schools are in some form of school improvement status. (See fig. 2 for more details on DCPS’s school improvement status.) DCPS revamped its process for determining the most appropriate restructuring option for the 13 schools that will be restructured in the 2009-2010 school year. DCPS officials told us that, prior to implementing the first round of restructuring (i.e., for the 2008-2009 school year), there were insufficient school visits and inadequate training and guidance for the teams assigned to evaluate which restructuring option best suited a given school. For example, the initial process called for review teams to visit each school once, which, according to DCPS officials, did not allow the teams to obtain sufficient evidence to evaluate the schools’ condition. DCPS has addressed these issues by requiring two visits to each school, offering more training, and revising the form used to evaluate each school’s condition for the next round of restructuring. 
In addition, DCPS officials told us they cannot continue to rely on replacing teachers and principals as the primary restructuring option because DCPS cannot terminate the teachers, and moving these teachers to other schools may undermine the District’s reform efforts. DCPS did not assess its capacity for replacing staff at schools restructured in the 2008-2009 school year. According to DCPS, nearly half of the 160 teachers who were removed from these schools had to be placed at 38 other DCPS schools. For the 2009-2010 school year, DCPS has decided to replace select staff at 6 of the 13 schools that will be restructured. (For more details, see the section on teacher and principal quality later in this report.) DCPS reported it has ongoing and planned initiatives to expand data access to principals and teachers, in part to monitor student and school performance. In particular, DCPS reported it made improvements to its primary student data system so central office users can better monitor school performance. For example, DCPS officials reported that by February 2009 they had consolidated several student data systems, including the system containing standardized test scores, into the primary student data system with the intent of improving data accuracy and consistency. They also told us they added software to the primary student data system that enabled central office employees to develop monthly reports of schools’ performance data, such as attendance and test scores. DCPS plans to eventually use these monthly reports to enable school leaders to better monitor student progress, and plans to develop an internal Web site that compiles various student and school information in one place for key stakeholders, including central office staff and principals. 
DCPS has not yet announced when the project will be completed. See table 3 for more details about key DCPS data initiatives and their status. The state superintendent’s office also is developing a longitudinal database, called the Statewide Longitudinal Education Data Warehouse (SLED), that is intended to allow DCPS and other stakeholders to access a broad array of information, including standardized test scores of students and information on teachers. SLED is intended to allow the District to track student registration and movement among DCPS’s schools and the public charter schools more accurately, as well as expand the District’s ability to monitor student achievement and growth over time. Officials in the state superintendent’s office told us they revised the project schedule to allow more time to assist the charter schools with updating their data systems. In February 2009, the initial release of student data provided a student identification number and information on student eligibility for free or reduced-price lunches and other student demographics for all students attending DCPS’s schools and the public charter schools. The state superintendent’s office plans for SLED to enable DCPS to link student and teacher data by February 2010. (See table 4 for more details about the status of key SLED deliverables.) This link is to provide DCPS with data on the classes students enrolled in, the teachers who taught the classes, any academic interventions students received, students’ grades and test scores, and student demographics. DCPS is attempting to improve the quality of its teacher and principal workforce by hiring new teachers and principals and by providing professional development. After the 2007-2008 school year, about one-fifth of the teachers and one-third of the principals resigned, retired, or were terminated from DCPS. 
However, DCPS officials told us that the 2007-2008 and 2008-2009 teacher evaluation process did not allow them to assess whether the teacher workforce improved between these 2 school years. In addition, DCPS introduced professional development initiatives for teachers and principals, but late decisions about the program for teachers led to inconsistent implementation. DCPS focused on a workforce replacement strategy to strengthen teacher and principal quality. DCPS maintains that the quality of teachers is the single greatest determinant of improved student achievement, and a growing body of research has shown that teacher quality is a significant factor in improving student academic performance. Yet it is often difficult to remove teachers for performance issues beyond their initial, or probationary, years in a given school system. For example, in the 2006-2007 school year, only 1 teacher was removed from DCPS for poor performance out of more than 4,000 teachers. Representatives from the Washington Teachers’ Union agreed that there were several poor-performing teachers in DCPS, but stated that the 2-year probationary period is the appropriate time to identify and dismiss poor teachers at will. DCPS began implementing its teacher replacement strategy near the end of the 2007-2008 school year. Specifically, about one-fifth of the teachers and one-third of the principals resigned, retired, or were terminated from the school system at the end of the 2007-2008 school year. DCPS terminated about 350 teachers, approximately 100 of whom were released for underperformance at the end of their probationary period, when tenure decisions were made. The remaining 250 teachers were terminated because they did not meet specified time frames to become highly qualified under NCLBA. An additional 400 teachers accepted financial incentives offered by DCPS to resign or retire in the spring of 2008. 
A DCPS official told us there is anecdotal evidence suggesting DCPS lost some quality teachers through the contract buyouts, but officials noted that DCPS did not have measures in place to deter effective teachers from accepting the buyouts. In addition, DCPS did not renew the contracts of 42 principals, citing their failure to improve student achievement on standardized tests and to adequately implement school-wide programs. To replace the teachers and principals who left the system, DCPS launched a nationwide recruitment effort for the 2008-2009 school year. DCPS hired 566 teachers and 46 principals for the 2008-2009 school year. Of the 566 teachers, 395 were hired from traditional backgrounds or other school systems and 171 came from nontraditional paths such as the D.C. Teaching Fellows program and Teach for America. (See fig. 3 for more details about the flow of teachers into and out of DCPS between the 2007-2008 and 2008-2009 school years.) However, DCPS did not have a new teacher contract in place due to ongoing negotiations with the Washington Teachers’ Union, and officials told us this may have hindered their efforts to attract top-quality teachers. The Chancellor has stated that she wants to recruit and retain quality teachers by offering merit pay, which would reward teachers with higher salaries based, in part, on their students’ scores on standardized state tests. Under the plan, which has been in negotiation with the Washington Teachers’ Union since November 2007, teachers could voluntarily relinquish job protections in exchange for base salaries and bonuses totaling over $100,000 per school year. This plan relies on over $200 million in contributions from private foundations to fund the teacher contract, including salary increases and professional development. According to the Chancellor, private foundations continue to pledge their support, even with the current economic downturn. 
DCPS officials told us the higher annual salaries and bonuses would be sustainable with public funds if private funding is not available when the 5-year contract expires. In addition, an official told us DCPS does not have an adequate means to assess whether its teacher workforce improved between the 2007-2008 and 2008-2009 school years because the current teacher evaluation system is not an effective way to assess teacher performance. Under this evaluation system, principals evaluate teachers’ subject matter knowledge, classroom management skills, and adherence to academic standards, among other elements. However, this system does not measure teachers’ impact on student achievement, which, according to DCPS, is a key factor in evaluating teacher effectiveness. In addition, according to DCPS, teacher evaluations conducted in prior years did not adequately distinguish excellent from poor performance—almost all teachers received satisfactory ratings. As a result, DCPS officials told us they cannot determine the quality of the 566 new teachers relative to the 817 teachers who left the system. The current teacher evaluation system remains the primary mechanism for identifying teachers considered ineffective. During the 2008-2009 school year, principals used the evaluation system to place 147 tenured teachers deemed underperforming on 90-day improvement plans. At the end of 90 school days, principals decide whether to retain or terminate these teachers. In prior years, DCPS did not use the 90-day process to this extent. DCPS plans to revise its teacher evaluation process to more directly link teacher performance to student achievement. The proposed system includes a value-added component that would measure teachers, in part, on their ability to improve students’ standardized test scores over the course of a school year. This value-added measure would only apply to about 20 percent of the teacher workforce, since not all grades and subjects are tested. 
DCPS plans to use a less formal student achievement measure for teachers in nontested grades and subjects in the short term, but is working to increase the number of teachers for whom student achievement growth data are available. In addition, DCPS’s proposed evaluation system would add classroom observations by third-party observers, called master teachers, who would be knowledgeable about teaching the relevant subject matter and grade level, to supplement school administrators’ observations of teachers. To solicit input on the proposed evaluation system, the Chancellor held a series of sessions in spring 2009 with teachers, teacher coaches, and other school staff, and engaged the Washington Teachers’ Union. DCPS officials told us that the feedback was generally positive and that teachers found the proposed evaluation system to be fair, transparent, and an improvement over the current evaluation. However, some teachers were concerned about using students’ test scores as part of the evaluation. For the 2007-2008 school year, DCPS revised the principal evaluation system, which holds principals accountable for improving students’ standardized test scores and achieving other standards. DCPS will be able to use this evaluation system to determine whether principals performed better during the 2008-2009 school year than in 2007-2008. In addition to the workforce replacement strategy, DCPS changed the way in which it develops its teacher workforce. DCPS began placing teacher coaches in schools to help teachers increase student achievement in their own schools. Previously, DCPS’s teacher training was not systematic or aligned with the school district’s goals. For the 2008-2009 school year, DCPS hired about 150 teacher coaches to improve teachers’ skills in delivering reading and math instruction and boost student test scores. 
DCPS officials told us their decision to implement school-based teacher coaches was based on research demonstrating gains in student achievement as a result of teacher coaches collaborating with teachers to improve instruction. For the 2008-2009 school year, teacher coaches focused on helping new teachers and teachers with students in grades 3 through 10 in reading and math instruction. For example, teacher coaches, at the direction of principals, assisted teachers with interpreting student test scores, planning lessons, and using their classroom time constructively. DCPS is planning for teacher coaches to work with teachers in all grades and subjects for the 2009-2010 school year. Late hiring of teacher coaches, however, affected the implementation of the professional development plan for the 2008-2009 school year. DCPS officials told us they made the decision to hire teacher coaches after their review of school restructuring plans in June 2008. DCPS officials told us that, as a result of this late decision, they were unable to recruit a sufficient number of qualified staff to fill these positions. Specifically, qualified teacher coach applicants had accepted jobs elsewhere, since many school systems recruit staff from February through April. DCPS intended to staff about 170 teacher coaching positions; however, as DCPS began the 2008-2009 school year, about 20 percent of the coaching positions remained open (19 reading coach vacancies and 16 math coach vacancies). As of late January 2009, there were 157 teacher coaches working on-site in the District’s public schools, with 14 total vacancies. Each vacancy represents a school without the full support (either a reading coach or both a reading coach and a math coach) that DCPS wanted to provide. As a result, the ratio of teachers to coaches was higher than it would have been had the positions been filled. 
In addition, according to DCPS officials and Washington Teachers’ Union officials we interviewed, teacher coaches were often unclear on their responsibilities and how to work with teachers, and received some conflicting guidance from principals. For example, these officials told us that some principals did not assign teacher coaches to their intended position. At the beginning of the school year, some principals assigned coaches to cover classes for absent teachers or to evaluate teachers—a practice not allowed under union rules—meaning the coaches were not able to work with teachers. DCPS is also seeking to improve the quality of principals through the Principals Academy developed for the 2008-2009 school year. Consistent with DCPS’s belief that principals should be their schools’ instructional leaders, the academy’s goals include improving principals’ leadership skills, helping them interpret student test scores, and providing advice on how to use this information to improve their schools. The Principals Academy convenes monthly and also includes differentiated professional development workshops based on principals’ individual needs. The state superintendent plan is a “state-level” strategic plan that covers the District’s public schools (and public charter schools). This plan and DCPS’s strategic plan each contain elements GAO has identified as key to an effective plan, such as aligning short-term objectives to long-term goals in order to delineate how to attain those goals. While DCPS has recently increased efforts to involve stakeholders such as parents and the D.C. Council in key initiatives, past stakeholder involvement was inconsistent. DCPS has not yet developed a method for ensuring more consistent stakeholder involvement. The state superintendent’s office and the State Board of Education collaboratively developed the District’s state-level, 5-year strategic plan, and released it in October 2008. 
This state-level plan spans early childhood and kindergarten through grade 12 education (including public charter schools). The plan was developed with stakeholder involvement throughout the process. Officials from the state superintendent’s office told us that, while drafting the plan, they involved District officials and stakeholders representing the early childhood education, business, and higher education communities, among others. In particular, they told us they involved DCPS and the D.C. Deputy Mayor of Education’s Office in discussions of the plan. In addition, in September 2008, the state superintendent’s office held one public forum to solicit stakeholder input on the draft of the document, and accepted comments on the draft on its Web site. The office released a revised version of the plan within a month of the public forum. Stakeholder involvement in formulating strategic plans allows relevant stakeholders to share their views and concerns. In addition, it affords stakeholders a way to understand the rationale for certain decisions. Ultimately, stakeholder involvement can result in increasing stakeholder support, or ownership, of the strategic plan. The state-level plan details the state-level strategy for improving education in the District and delineates accountability measures for DCPS and the public charter schools. In addition, the state-level plan states the mission, vision, and goals of the agency. It includes three broad, long-term goals: to have all children ready for school, all schools ready to prepare students for success, and all District residents ready to be successful in the 21st century economy. Overall, the plan includes many key elements of an effective strategic plan, such as objectives that delineate how the state superintendent’s office intends to attain each of its goals. The short-term objectives are supported by various strategies, objective measures, and performance targets. 
For example, one objective under the goal of having the District’s schools ready to prepare students for success is to ensure that all students receive rigorous instruction. This objective is broken down into objective measures, such as the percentage of elementary students scoring proficient or above on the state test. Further, the plan specifies annual performance targets for this objective for the years 2008 to 2013. See table 5 for more details on the elements of the state-level strategic plan. DCPS released the draft of its 5-year strategic plan in late October 2008. In contrast to the state-level plan, which includes the public charter schools, the DCPS plan is specific to prekindergarten through grade 12 education at its 128 schools. DCPS officials told us they based the draft on the Master Education Plan, which the prior DCPS administration developed with stakeholder involvement, and that they sought additional stakeholder input through a series of town hall meetings. After releasing the draft, DCPS held three public forums in the following 3 weeks, at which attendees provided DCPS officials with feedback on the draft strategic plan. In May 2009, DCPS released the revised draft, which incorporated stakeholder feedback. The DCPS 5-year strategic plan outlines the organization’s vision and goals, and includes many elements of an effective strategic plan. For example, the plan explains how DCPS’s six broad goals are interrelated and how they support the vision. (Table 5 lists the six DCPS goals.) In addition, the DCPS plan describes the condition of DCPS prior to the reform effort, the progress made to date, and the steps needed to achieve the long-term goals. However, the DCPS plan does not systematically delineate measurable outcomes with clear time frames and does not always identify key external factors that could increase the risk that an initiative may fail. 
For example, several objectives are aimed at improving teacher quality; however, the plan lacks specific targets for measuring the expected magnitude of such an improvement. Without such targets, it will be difficult for the public to evaluate DCPS’s progress toward improving its teacher workforce. In addition, while the strategic plan discusses increased performance-based pay for teachers, it does not specify the cost or explicitly mention the reliance on outside funding streams to achieve the increases. Yet the reliance on outside funding for the initial 5 years is a risk that is not within DCPS’s control. Table 6 contains some key elements of the state-level and DCPS’s strategic plans. Officials from the D.C. Deputy Mayor of Education’s office told us that, as part of its coordinating role, the office ensured that the DCPS and state-level strategic plans were aligned. However, the office had no documentation showing its efforts to coordinate these plans, such as an alignment study. We found that the two plans were aligned in terms of long-term goals. For example, DCPS’s goals could support the state-level goal of having all schools ready. However, we could not evaluate whether more detailed, objective measures and performance targets were aligned because the DCPS strategic plan did not always include specific objective measures and performance targets. DCPS officials have several planned and ongoing efforts to involve stakeholders in planning, implementing, and evaluating various initiatives. Stakeholder involvement can be instrumental in these areas because stakeholders can bring different knowledge, points of view, and experiences to planning and implementing reform efforts. DCPS officials told us they have a variety of approaches to involve stakeholders, including parents, students, and community groups, as well as institutional stakeholders such as the D.C. Council. 
For example, DCPS officials told us they reach out to parents, students, and the public by holding monthly community forums, meeting with a group of high school student leaders and a parent advisory group, responding to e-mail, and conducting annual parent and student surveys to gauge the school system’s performance. DCPS introduced monthly community forums in July 2008. These forums were generally informational sessions on topics chosen by DCPS officials, and were followed by questions from the audience. In some cases, such as the three forums focused on the strategic plan, DCPS officials facilitated discussions to elicit feedback. DCPS officials told us their efforts to involve students included a student leadership group that met to discuss student concerns and that officials credited with changes to the school lunch program as well as substantial changes to the discipline policy. DCPS also involved other stakeholders, such as parent organizations and the Washington Teachers’ Union, in its process of changing the discipline policy. In addition, DCPS officials cited the Chancellor’s response to e-mail communications as a form of stakeholder involvement. While such communications may have provided stakeholders with a means of connecting to the Chancellor, e-mail communications are generally not public and do not lead to public debate or discourse. In spring 2008, DCPS also conducted parent and student surveys to assess stakeholder satisfaction with DCPS schools. While DCPS officials told us they have completed the analysis of the parent survey, they have not yet released the results. Further, DCPS did not receive the student survey data until February 2009 due to complications with a vendor who was paid to collect these data. As a result of the delays, DCPS officials told us they have been unable to use student survey responses to inform decisions relevant to the 2008-2009 school year. 
However, officials said they will be able to use the information as a baseline for future surveys. Nevertheless, such activities do not ensure systematic stakeholder input in planning, implementing, and monitoring key initiatives. During our review, DCPS officials told us that stakeholder involvement was important to their reform efforts and that DCPS was taking steps to increase stakeholder involvement. However, in some cases, according to two DCPS officials, DCPS did not have a planning process in place to ensure systematic stakeholder involvement, and we found that DCPS implemented some key initiatives with limited stakeholder involvement. For example, key stakeholders, including D.C. Council members and parent groups, told us they were not given the opportunity to provide input to inform DCPS’s initial proposals regarding school closures and consolidations, although DCPS did hold numerous meetings after the initial proposal, before finalizing decisions. Similarly, stakeholders told us DCPS did not include them in deliberations and decisions about the establishment of prekindergarten to grade 8 models at some schools. Representatives from one community organization told us that some parents had concerns about the structure and academic setting at the prekindergarten to grade 8 schools, but did not have a venue to express those concerns before decisions about grade configurations were made. In addition, DCPS did not seek input from key stakeholders during the planning and early implementation of the new staffing model that placed art, music, and physical education teachers at schools and that fundamentally changed the way funding is allocated throughout DCPS. DCPS officials told us that they had not planned for the number of changes that were requested by principals. In particular, they told us that the vast majority of school principals requested changes to their initial staffing allocations. 
Stakeholders did not have a timely opportunity to raise concerns about the potential risks in implementing the staffing model, such as the uneven distribution of resources across schools and overspending at some schools. Stakeholders also said they were not given sufficient time to review the budget for the 2008-2009 school year or to understand the changes in the budget made after the school year began. DCPS officials told us the budget planning process for the 2010-2011 school year involved stakeholders extensively. In particular, DCPS invited the public to a preliminary budget meeting and also provided training on the budget process to some key stakeholders, such as school principals and community members. Lack of stakeholder involvement in such key decisions led stakeholders, including the D.C. Council and parent groups, to voice concerns that DCPS was not operating in a transparent manner or obtaining input from stakeholders with experience relevant to the District’s education system. Further, these stakeholders have questioned whether the impact of reform efforts will be compromised because of restricted stakeholder involvement. Stakeholders from other urban school districts we visited told us a lack of stakeholder involvement leads to less transparency, as key decisions are made without public knowledge or discourse. In addition, the lack of stakeholder involvement can result in an erosion of support for ongoing reform efforts and in poor decisions. For example, officials in Chicago and Boston said public stakeholder involvement was critical to community support for various initiatives, such as decisions on which schools to close. Officials and stakeholders in New York cited a lack of stakeholder involvement in decisions that were eventually reversed or revised. For example, changes made to school bus routes without consulting parents meant several route changes were later reversed because they proved to be unworkable. 
DCPS and the state superintendent’s office have taken steps to improve accountability and performance of their offices. For example, both offices have started implementation of new individual employee performance management systems. While DCPS has taken steps to improve accountability and link its individual performance management system to organizational goals, it has not completed this process or used the results of surveys to improve central office operations. To increase accountability of its central office, DCPS developed an accountability system and an individual performance management system for central office departments and employees. The central office, which is responsible for providing academic and nonacademic supports to DCPS, had operated without such accountability systems prior to the recent reform efforts. For example, previously, performance evaluations were not conducted for most DCPS staff. As a result, central office employees were not held accountable for the quality of services they provided to support schools. To improve accountability for central office departments, DCPS developed departmental scorecards, as a part of its performance management system, to identify and assess performance expectations for each department. For example, the scorecard for the Office of Data and Accountability includes measures such as the number of users of the primary student data system. According to a DCPS official, these scorecards are discussed at weekly accountability meetings with the Chancellor to hold senior-level managers accountable for meeting performance expectations. For example, at the accountability meeting we attended, DCPS officials from the Office of Data and Accountability used scorecards to discuss their progress with collecting attendance data and setting up processes to strengthen the collection of these data. 
According to DCPS officials, some departmental leaders have established similar accountability meetings with their staff, although these are not required. In January 2008, DCPS implemented a new performance management system for employees. Performance management systems for employees are generally used to set individual expectations, rate and reward individual performance, and plan work. DCPS developed its new performance management system in an effort to improve support services to the schools by improving the accountability and performance of central office employees. In particular, in past school years, teachers complained about not getting paid on time and beginning the school year with inadequate supplies. DCPS’s performance management system was put in place, in part, to improve these functions in the central office. While DCPS developed and instituted a new performance management system, it did not fully align individual performance expectations and evaluations to organizational goals, which GAO has identified as a key practice of effective individual performance management systems. For example, while DCPS took important steps in developing and implementing its system, such as training department managers to set expectations and give feedback to employees, DCPS has not yet established a uniform policy for setting expectations. Further, DCPS has not yet instituted a system to track how and when such expectations are set. Instead, individual managers established processes specific to their office or department and, as a result, DCPS could not ensure that individual performance expectations were aligned to organizational goals as outlined in the DCPS 5-year strategic plan or in its annual performance plans. Without such alignment, employees may not be familiar with the overall organizational goals and their daily activities may not reflect these goals. 
An explicit alignment of daily activities with broader desired results helps individuals connect their daily activities and organizational goals and encourages individuals to focus on their roles and responsibilities to help achieve the broader goals. In addition, as we previously reported, DCPS developed individual performance evaluations in December 2007 as a part of its performance management system in order to assess central office employees’ performance. Such individual performance evaluations are used to rate central office employees on several core competencies twice a year. For example, employees are rated on how well they demonstrate a commitment to providing high-quality and timely customer service to both external and internal customers of District schools. Prior to our March 2008 testimony, DCPS officials told us that they intended to align the performance management system with organizational goals by January 2009, and DCPS has taken some steps to improve alignment. For example, DCPS officials told us they had better aligned their departmental scorecards to their 2009 annual performance plan. However, DCPS has not yet explicitly linked employee performance evaluations to the agency’s overall goals. DCPS officials told us they plan to link the individual performance evaluations with organizational goals in the summer of 2009 to ensure greater accountability in supporting schools. The state superintendent’s office also implemented a new performance management system, effective October 2008, to hold its employees accountable and improve the office’s performance. The office is converting to a single electronic management system to track and evaluate employee performance. This new system, scheduled to be fully operational by December 2009, will replace the two separate systems that had operated on different cycles. 
According to an official from this office, the new system is uniform, user friendly, and allows for an easier transfer of performance information from manager to employee. In addition, this system links individual employee evaluations to overall performance goals and the office’s strategic plan. Under this new evaluation system, each employee is given a position description, which includes responsibilities and duties linked to the overall goals, mission, and vision of the state superintendent’s office. Individual and agency expectations are defined in an annual performance meeting with the employee. The office is currently training supervisory employees on how to use the system before its full implementation in December 2009. In November 2007, DCPS conducted a survey of employees within District schools, including teachers and principals, to gauge satisfaction with District services, including central office services during the 2007-2008 school year. Personnel at the schools are key stakeholders in improving central office functions, and their feedback is important to help DCPS ensure resources are targeted to the highest priorities. The American Institutes for Research partnered with DCPS to administer the online survey of teachers, principals, aides, clerks, counselors, project directors/coordinators, related service providers, and other staff. They were asked to provide feedback on numerous topics, including the work environment, facilities and maintenance, professional development, and leadership, as well as central office services. With regard to central office services, the survey’s questions were focused on personnel services, budget and procurement services, district departments and support services, food and nutrition services, and technology and data. Of those staff that completed the survey, more were satisfied with their schools, such as their work environment and fellow staff members, than with the support system provided by the central office. 
For example, they were least satisfied with the central office’s ability to provide goods and services in a timely manner, compute paychecks accurately, and allot budgeted funds when needed. In addition, staff who completed the survey were least satisfied with the facilities office’s responsiveness to requests for school repairs, saying repairs were not completed in a timely manner. DCPS officials told us the results of the survey were shared internally with different central office departments in 2008, and focus groups were formed within a month of the release of the survey results to develop specific action plans to address identified issues. However, DCPS officials were unable to provide us with specific examples of improvements made in central office operations as a result of the survey. Three of the eight principals we met with regarding the school consolidation process stated that they could not always access budgeted funds when needed. In addition, four of the eight principals noted that school repairs were not made in a timely manner. One principal told us his payroll was often inaccurate, and some teachers were not always paid on time. DCPS officials told us another staff survey will be administered in spring 2009. The challenge of reforming DCPS is daunting. NCLBA requires 100 percent proficiency by 2014, and the District’s students scored significantly lower than the District’s own proficiency targets for 2008 and below students in most other urban districts. In the past, support for reform efforts has waned as student achievement did not improve, as buildings deteriorated, and as new superintendents were ushered in every few years to address these problems. 
The need for rapid reform and results is acute, and the District’s Mayor and his education team have taken bold steps—such as implementing various classroom-based initiatives, reorganizing schools, and replacing teachers and principals—to improve the learning environment of the District’s students and ultimately increase student achievement. However, DCPS lacks certain planning processes, such as processes for communicating information to stakeholders in a timely manner and incorporating stakeholder feedback at key junctures, that would allow for greater transparency. In addition, DCPS did not gauge its internal capacity prior to implementing certain key initiatives, which, if addressed in the future, could help ensure the sustainability of initiatives. Without these planning processes, an organization risks having to revamp initiatives, leading to delays and compromising the implementation of timely, critical work. While having these planning processes in place will not eliminate all implementation issues, it will help to identify and mitigate risks associated with implementing bold initiatives and identify needed changes in the early stages of an initiative. Furthermore, a lack of these planning processes can result in decisions that are made on an ad hoc basis, with resources unevenly distributed, as was the case with the District’s new staffing model. Ultimately, the lack of such processes while planning and implementing initiatives has impeded the success of some of DCPS’s initiatives and could impede the District’s continued success and progress in reforming its school system. Stakeholder consultation in planning and implementation efforts can help create a basic understanding of the competing demands that confront most agencies and the limited resources available to them. Stakeholders can then share their expertise, experience, and views on how these demands and resources can be balanced. 
Continuing to operate without a more formal mechanism—other than community forums or e-mails—for stakeholder involvement could diminish support for the reform efforts, undermine their sustainability, and ultimately compromise the potential gains in student achievement. As more initiatives are developed, the need to balance the expediency of the reform efforts with measures to increase sustainability, such as stakeholder involvement, is critical. In addition, since the Reform Act, the District has taken several steps to improve central office operations, such as providing more accountability at the departmental level and implementing a new individual performance management system. However, DCPS has not taken steps to align its performance management system, including its individual performance evaluations, to its organizational goals, which could result in a disparity between employees’ daily activities and services needed to support schools. By ensuring that employees are familiar with the organizational goals and that their daily activities reflect these goals, DCPS could improve central office accountability and support to schools. To help ensure the transparency, success, and sustainability of the District’s transformation of its public school system, we recommend that the Mayor direct DCPS to establish planning processes that include mechanisms to evaluate its internal capacity and communicate information to stakeholders and, when appropriate, incorporate their views. To strengthen the new individual performance management system and ensure greater accountability of central office employees in their role supporting schools, we recommend that the Mayor direct DCPS to link individual performance evaluations to the agency’s overall goals. We provided a draft of this report to DCPS, the Deputy Mayor of Education, and to the Office of the State Superintendent of Education for review and comment. 
These offices provided written comments on a draft of this report, which are reproduced in appendix I. They also provided technical comments, which we incorporated when appropriate. All three entities concurred with our recommendations. However, they expressed concern with the way in which we evaluated their reform efforts and the overall tone of the draft report. Specifically, District officials stated that we did not measure DCPS’s progress in terms of the condition of the school system prior to the reform efforts, but instead measured progress in terms of whether the ultimate goals of the reform efforts had been met. We disagree. We did not measure DCPS’s progress against “ultimate goals.” As is now reflected in the paragraph describing our approach to this study, we measured the progress of ongoing reform efforts by comparing DCPS’s progress to its own time frames for implementing various initiatives. In conducting our review, we spoke with numerous DCPS officials and repeatedly asked for documents and time frames in order to objectively gauge the District’s progress. In some cases, DCPS officials did not provide us with such documentation; however, we made a concerted effort to accurately identify current initiatives and related time frames. In addition, we measured completed initiatives against recognized standards. For example, we determined whether or not the DCPS and the state-level strategic plans contained elements that GAO has identified as key to an effective plan. In addition, we described the conditions that existed prior to the reform efforts in order to provide context to the steps DCPS has taken. For example, we noted that prior to the reform efforts, DCPS’s teacher training was not systematic or aligned with the school district’s goals and that DCPS is now offering on-site professional development to improve teacher skills. 
We also cited the lack of individual performance evaluations for central office employees prior to the reform efforts and the steps DCPS has since taken to improve in this area. Furthermore, we made every effort to provide balance and objectivity in our findings. For example, some stakeholders, such as parent groups, union representatives, and the D.C. Council, told us that DCPS made key decisions without their involvement. We revisited this issue with DCPS officials and described several of their efforts to improve stakeholder involvement in the initial draft of our report. We visited four urban school districts with mayoral governance and conducted in-depth interviews to help us better understand the magnitude of the challenges that officials encountered while trying to reform their school systems. We also spoke with superintendents and officials from mayors’ offices in these districts about the key lessons they learned as they reformed their school systems, including the risks associated with not having systematic stakeholder involvement. Finally, the District’s education offices stated in their response that we characterized the state superintendent’s efforts as positive and those of DCPS more negatively. While drafting this report, we intentionally avoided any comparison between DCPS and the state superintendent’s office, as their tasks and challenges are dissimilar. After reviewing our draft, DCPS provided us with more information and documentation regarding efforts to involve stakeholders in the development of the October 2008 draft of the DCPS strategic plan and steps taken to introduce alignment of accountability measures to organizational goals. We made changes to our report to reflect the updated information. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the D.C. 
Mayor’s Office, relevant congressional committees, and other interested parties. Copies will also be made available upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7215 or ashbyc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II. In addition to the contact named above, Elizabeth Morrison, Assistant Director; Nagla’a El-Hodiri, Analyst-in-Charge; Sheranda Campbell; Jeff Miller; and Vernette Shaw made significant contributions to this report in all aspects of the work. Susan Aschoff, Mark Bird, Timothy Case, Bryon Gordon, Jeffrey Heit, Janice Latimer, Jean McSween, Sandy Silzer, and Sarah Veale provided analytical assistance. Doreen Feldman and Sheila McCoy provided legal support, and Lise Levie and Kimberly Siegal verified our findings. District of Columbia Public Schools: While Early Reform Efforts Tackle Critical Management Issues, a District-wide Strategic Education Plan Would Help Guide Long-Term Efforts. GAO-08-549T. Washington, D.C.: March 14, 2008. District of Columbia Opportunity Scholarship Program: Additional Policies and Procedures Would Improve Internal Controls and Program Operations. GAO-08-9. Washington, D.C.: November 1, 2007. No Child Left Behind Act: Education Should Clarify Guidance and Address Potential Compliance Issues for Schools in Corrective Action and Restructuring Status. GAO-07-1035. Washington, D.C.: September 5, 2007. Charter Schools: Oversight Practices in the District of Columbia. GAO-05-490. Washington, D.C.: May 19, 2005. Results-Oriented Cultures: Implementing Steps to Assist Mergers and Organizational Transformation. GAO-03-669. Washington, D.C.: July 2, 2003. 
Results-Oriented Cultures: Creating a Clear Linkage between Individual Performance and Organizational Success. GAO-03-488. Washington, D.C.: March 14, 2003. Agencies’ Strategic Plans Under GPRA: Key Questions to Facilitate Congressional Review (Version 1). GAO/GGD-10.1.16. Washington, D.C.: May 1997. Executive Guide: Effectively Implementing the Government Performance and Results Act. GAO/GGD-96-118. Washington, D.C.: June 1996.
In response to long-standing problems with student achievement and the management of the District of Columbia (D.C. or the District) public school system, the D.C. Council approved the Public Education Reform Amendment Act of 2007. This act made major changes to the governance of the D.C. public school system, giving the Mayor authority over public schools. This report follows a GAO testimony in March 2008 and focuses on the primary reform approaches the District has taken. This report examines the steps the District took to: (1) address student academic achievement; (2) strengthen the quality of teachers and principals; (3) develop long-term plans and involve stakeholders; and (4) improve accountability and performance of the D.C. public schools (DCPS) and the state superintendent's central offices. GAO reviewed documentation on District initiatives, and interviewed District education officials as well as representatives from the teachers' union, community organizations, and research institutions. GAO also conducted visits to four urban school districts with mayoral governance. Early efforts to improve student achievement at DCPS have focused on improving student performance, closing underutilized and reorganizing underperforming schools, and creating and enhancing data systems. During the first 2 years of its reform efforts, DCPS implemented many initiatives to improve overall student performance, such as classroom-based initiatives to improve basic skills of students. In addition, under the No Child Left Behind Act, DCPS restructured 22 schools before the fall of 2008, after the schools failed to meet academic targets for 6 consecutive years. Finally, DCPS and the state superintendent's office are developing new ways to monitor student achievement and school performance. Specifically, a longitudinal database is being developed that is intended to allow DCPS and other key users to access a broad array of data, including student test scores. 
DCPS is modifying its approach to many of these initiatives such as focusing on effective teaching as opposed to implementing disparate programs. DCPS has focused on improving the quality of its workforce by replacing teachers and principals and by providing professional development, but it has encountered challenges in effectively implementing these changes. After the 2007-2008 school year, about one-fifth of the teachers and one-third of the principals resigned, retired, or were terminated from DCPS. However, because DCPS did not have an effective way to evaluate teacher performance, officials are uncertain if the new staff improved the quality of its workforce. DCPS is currently working on a new teacher evaluation system. In addition, DCPS introduced professional development initiatives for teachers and principals. For example, it began placing teacher coaches at schools to support teachers at their work sites. However, late decisions to hire these teacher coaches led to inconsistent implementation of this initiative during the 2008-2009 school year. The state superintendent's office and DCPS each developed their 5-year strategic plans and involved stakeholders in developing these plans. The state superintendent plan and the DCPS draft strategic plan each contain many elements of effective plans, such as aligning short-term objectives to long-term goals. DCPS has recently increased its efforts to involve stakeholders in various initiatives; however, it has not always involved stakeholders in key decisions and initiatives. DCPS and the state superintendent's office have taken steps to improve accountability and performance. For example, both offices have started implementation of new individual employee performance management systems. However, while DCPS has taken some additional steps to improve accountability, it has not yet linked its employee expectations and performance evaluations to organizational goals to improve central office operations.
The National Flood Insurance Act of 1968 established NFIP as an alternative to providing direct assistance after floods. NFIP, which provides government-guaranteed flood insurance to homeowners and businesses, was intended to reduce the federal government’s escalating costs for repairing flood damage after disasters. FEMA, which is within the Department of Homeland Security (DHS), is responsible for the oversight and management of NFIP. Since NFIP’s inception, Congress has enacted several pieces of legislation to strengthen the program. The Flood Disaster Protection Act of 1973 made flood insurance mandatory for owners of properties in vulnerable areas who had mortgages from federally regulated lenders and provided additional incentives for communities to join the program. The National Flood Insurance Reform Act of 1994 strengthened the mandatory purchase requirements for owners of properties located in special flood hazard areas (SFHA) with mortgages from federally regulated lenders. Finally, the Bunning-Bereuter-Blumenauer Flood Insurance Reform Act of 2004 authorized grant programs to mitigate properties that experienced repetitive flooding losses. Owners of these repetitive loss properties who do not mitigate may face higher premiums. To participate in NFIP, communities agree to enforce regulations for land use and new construction in high-risk flood zones and to adopt and enforce state and community floodplain management regulations to reduce future flood damage. Currently, more than 20,000 communities participate in NFIP. NFIP has mapped flood risks across the country, assigning flood zone designations based on risk levels, and these designations are a factor in determining premium rates. NFIP offers two types of flood insurance premiums: subsidized and full risk. The National Flood Insurance Act of 1968 authorizes NFIP to offer subsidized premiums to owners of certain properties. 
These subsidized premium rates, which represent about 40 percent to 45 percent of the cost of covering the full risk of flood damage to the properties, apply to about 22 percent of all NFIP policies. To help reduce or eliminate the long-term risk of flood damage to buildings and other structures insured by NFIP, FEMA has used a variety of mitigation efforts, such as elevation, relocation, and demolition. Despite these efforts, the inventories of repetitive loss properties—generally, as defined by FEMA, those that have had two or more flood insurance claims payments of $1,000 or more over 10 years—and policies with subsidized premium rates have continued to grow. In response to the magnitude and severity of the losses from the 2005 hurricanes, Congress increased NFIP’s borrowing authority from Treasury to about $20.8 billion. We have previously identified four public policy goals for evaluating the federal role in providing natural catastrophe insurance: charging premium rates that fully reflect actual risks, limiting costs to taxpayers before and after a disaster, encouraging broad participation in natural catastrophe insurance, and encouraging private markets to provide natural catastrophe insurance. Taking action to achieve these goals would benefit both NFIP and the taxpayers who fund the program but would require trade-offs. I will discuss the key areas that need to be addressed, actions that can be taken to help achieve these goals, and the trade-offs that would be required. As I have noted, NFIP currently does not charge all program participants rates that reflect the full risk of flooding to their properties. First, the act requires FEMA to charge many policyholders less than full-risk rates to encourage program participation. While the percentage of subsidized properties was expected to decline as new construction replaced subsidized properties, today nearly one out of four NFIP policies is based on a subsidized rate. 
Second, FEMA may “grandfather” properties that are already in the program when new flood maps place them in higher-risk zones, allowing some property owners to pay premium rates that apply to the previous lower-risk zone. FEMA officials told us they made the decision to allow grandfathering because of external pressure to reduce the effects of rate increases, and considerations of equity, ease of administration, and the goals of promoting floodplain management. Similarly, FEMA recently introduced a new rating option called the Preferred Risk Policy Eligibility Extension that in effect equals a temporary grandfathering of premium rates. While these policies typically would have to be converted to more expensive policies when they were renewed after a new flood map came into effect, FEMA has extended eligibility for these lower rates. Finally, we have also raised questions about whether NFIP’s full-risk rates reflect actual flood risks. Because many premium rates charged by NFIP do not reflect the full risk of loss, the program is less likely to be able to pay claims in years with catastrophic losses, as occurred in 2005, and may need to borrow from Treasury to pay claims in those years. Increasing premium rates to fully reflect the risk of loss—including the risk of catastrophic loss—would generally require reducing or eliminating subsidized and grandfathered rates and offers several advantages. Specifically, increasing rates could: result in premium rates that more fully reflected the actual risk of loss; decrease costs for taxpayers by reducing costs associated with postdisaster borrowing to pay claims; and encourage private market participation, because the rates would more closely approximate those that would be charged by private insurers. However, eliminating subsidized and grandfathered rates and increasing rates overall would increase costs to some homeowners, who might then cancel their flood policies or elect not to buy them at all. 
According to FEMA, subsidized premium rates are generally 40 percent to 45 percent of rates that would reflect the full risk of loss. For example, the projected average annual subsidized premium was $1,121 as of October 2010, discounted from the $2,500 to $2,800 that FEMA said would be required to cover the full risk of loss. In a 2009 report, we also analyzed the possibility of creating a catastrophic loss fund within NFIP (one way to help pay for catastrophic losses). Our analysis found that in order to create a fund equal to 1 percent of NFIP’s total exposure by 2020, the average subsidized premium—which typically applies to properties in one of the highest-risk zones—would need to increase from $840 to around $2,696, while the average full-risk premium would increase from around $358 to $1,149. Such steep increases could reduce participation, either because homeowners could no longer afford their policies or simply deemed them too costly, and increase taxpayer costs for postdisaster assistance to property owners who no longer had flood insurance. However, a variety of actions could be taken to mitigate these disadvantages. For example, subsidized rates could be phased out over time or not transferred with the property when it is sold. Moreover, as we noted in our past work, targeted assistance could be offered to those most in need to help them pay increased NFIP premiums. This assistance could take several forms, including direct assistance through NFIP, tax credits, or grants. In addition, to the extent that those who might forgo coverage were actually required to purchase it, additional actions could be taken to better ensure that they purchased policies. According to the RAND Corporation, in SFHAs, where property owners with loans from federally insured or regulated lenders are required to purchase flood insurance, as few as 50 percent of the properties had flood insurance in 2006. 
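The subsidy arithmetic cited above can be sketched in a few lines. The figures are taken from this testimony, but the calculation itself is an illustrative simplification (dividing a subsidized premium by the subsidy fraction to recover the implied full-risk premium), not FEMA's actual rate-setting methodology:

```python
# Illustrative sketch only, not FEMA's rating method: if a subsidized
# premium covers a given fraction of the full risk of loss, the implied
# full-risk premium is the subsidized premium divided by that fraction.

def implied_full_risk_premium(subsidized_premium, subsidy_fraction):
    return subsidized_premium / subsidy_fraction

# Per the testimony, subsidized rates are roughly 40 to 45 percent of
# full-risk rates, and the average subsidized premium was $1,121 as of
# October 2010.
low = implied_full_risk_premium(1121, 0.45)
high = implied_full_risk_premium(1121, 0.40)

# The result is consistent with the $2,500 to $2,800 full-risk range
# FEMA cited.
print(f"Implied full-risk premium: ${low:,.0f} to ${high:,.0f}")
```

The same one-line relationship explains why a roughly 40 to 45 percent subsidy fraction and a $1,121 average premium imply the $2,500 to $2,800 full-risk range quoted above.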
In order to reduce expenses to taxpayers that can result when NFIP borrows from Treasury, NFIP needs to be able to generate enough in premiums to pay its claims, even in years with catastrophic losses—a goal that is closely tied to that of eliminating subsidies and other reduced rates. Since the program’s inception, NFIP premiums have come close to covering claims in average loss years but not in years of catastrophic flooding, particularly 2005. Unlike private insurance companies, NFIP does not purchase reinsurance to cover catastrophic losses. As a result, NFIP has funded such losses after the fact by borrowing from Treasury. As we have seen, such borrowing exposes taxpayers to the risk of loss. NFIP still owes approximately $17.8 billion of the amount it borrowed from Treasury for losses incurred during the 2005 hurricane season. The high cost of servicing this debt means it may never be repaid, could in fact increase, and will continue to affect the program’s solvency and be a burden to taxpayers. Another way to limit costs to taxpayers is to decrease the risk of losses by undertaking mitigation efforts that could reduce the extent of damage from flooding. FEMA has taken steps to help homeowners and communities mitigate properties by making improvements designed to reduce flood damage—for example, elevation, relocation, and demolition. As we have reported, from fiscal year 1997 through fiscal year 2007, nearly 30,000 properties were mitigated using FEMA funds. Increasing mitigation efforts could further reduce flood damage to properties and communities, helping to put NFIP on a firmer financial footing and reducing taxpayers’ exposure. FEMA has made particular efforts to address the issue of repetitive loss properties through mitigation. These properties account for just 1 percent of NFIP’s insured properties but are responsible for 25 percent to 30 percent of claims. 
Despite FEMA’s efforts, the number of repetitive loss properties increased from 76,202 in 1997 to 132,100 in March 2011, or by about 73 percent. FEMA also has some authority to raise premium rates for property owners who refuse mitigation offers in connection with the Severe Repetitive Loss Pilot Grant Program. In these situations, FEMA can initially increase premiums to up to 150 percent of their current amount and may raise them again (by up to the same amount) on properties that incur a claim of more than $1,500. However, FEMA cannot increase premiums on property owners who pay the full-risk rate but refuse a mitigation offer, and in no case can rate increases exceed the full-risk rate for the structure. In addition, FEMA is not allowed to discontinue coverage for those who refuse mitigation offers. As a result, FEMA is limited in its ability to compel owners of repetitive loss properties to undertake flood mitigation efforts. Mitigation offers significant advantages. As I have noted, mitigated properties are less likely to be at high risk for flood damage, making it easier for NFIP to charge them full-risk rates that cover actual losses. Allowing NFIP to deny coverage to owners of repetitive loss properties who refused to undertake mitigation efforts could further reduce costs to the program and ultimately to taxpayers. One disadvantage of increased mitigation efforts is that they can impose up-front costs on the homeowners and communities required to undertake them and could raise taxpayers’ costs if the federal government elected to provide additional mitigation assistance. Those costs could increase still further if property owners who were dropped from the program for refusing to mitigate later received federal postdisaster assistance. These trade-offs are not insignificant, although certain actions could be taken to reduce them. 
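A minimal sketch of the rate-increase authority described above (the function and variable names are ours; the 150 percent factor, the $1,500 claim trigger, and the full-risk-rate cap come from the text):

```python
# Illustrative sketch of FEMA's premium-increase authority under the
# Severe Repetitive Loss Pilot Grant Program, as described in the text.
# Rules sketched: each increase may bring the premium up to 150 percent
# of its current amount, and the premium may never exceed the full-risk
# rate for the structure. Names are ours, not FEMA's.

INCREASE_FACTOR = 1.5    # "up to 150 percent of their current amount"
CLAIM_TRIGGER = 1_500    # a subsequent increase requires a claim > $1,500

def increased_premium(current: float, full_risk_rate: float) -> float:
    """Apply one permitted increase, capped at the full-risk rate."""
    return min(current * INCREASE_FACTOR, full_risk_rate)

# A property paying $1,000 against a $2,000 full-risk rate:
first = increased_premium(1_000.0, 2_000.0)    # initial increase: 1500.0
# After a later claim exceeding CLAIM_TRIGGER, a second increase applies,
# but it is capped at the full-risk rate:
second = increased_premium(first, 2_000.0)     # 1500 * 1.5 = 2250, capped at 2000.0
print(first, second)
```

The cap is what limits FEMA here: once a premium reaches the full-risk rate, no further increase is available, which is why the text notes FEMA cannot raise rates on owners already paying the full-risk rate.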
For example, federal assistance such as low-cost loans, grants, or tax credits could be provided to help property owners pay for the up-front costs of mitigation efforts. Any reform efforts could explore ways to improve mitigation efforts to help ensure maximum effectiveness. For example, FEMA has three separate flood mitigation programs. Having multiple programs may not be the most cost-efficient and effective way to promote mitigation and may unnecessarily complicate mitigation efforts. Increasing participation in NFIP, and thus the size of the risk pool, would help ensure that losses from flood damage did not become the responsibility of the taxpayer. Participation rates have been estimated to be as low as 50 percent in SFHAs, where property owners with loans from federally insured and regulated lenders are required to purchase flood insurance, and participation in lower-risk areas is significantly lower. For example, participation rates outside of SFHAs have been found to be as low as 1 percent, leaving significant room to increase participation. Expanding participation in NFIP would have a number of advantages. As a growing number of participants shared the risks of flooding, premium rates could be lower than they would be with fewer participants. Currently, NFIP must take all applicants for flood insurance, unlike private insurers, and thus is limited in its ability to manage its risk exposure. To the extent that properties added to the program were in geographic areas where participation had historically been low and in low- and medium-risk areas, the increased diversity could lower rates as the overall risk to the program decreased. Further, increased program participation could reduce taxpayer costs by reducing the number of property owners who might draw on federally funded postdisaster assistance. However, efforts to expand participation in NFIP would have to be carefully implemented, for several reasons. 
First, as we have noted, NFIP cannot reject applicants on the basis of risk. As a result, if participation increased only in SFHAs, the program could see its concentration of high-risk properties grow significantly and face the prospect of more severe losses. Second, a similar scenario could emerge if mandatory purchase requirements were expanded and newly covered properties were in communities that did not participate in NFIP and thus did not meet standards—such as building codes—that could reduce flood losses. As a result, some of the newly enrolled properties might be eligible for subsidized premium rates or, because of restrictions on how much FEMA can charge in premiums, might not pay rates that covered the actual risk of flooding. Finally, FEMA has historically attempted to encourage participation by charging lower rates; however, doing so results in rates that do not fully reflect the risks of flooding and exposes taxpayers to increased risk. Moderating the challenges associated with expanding participation could take a variety of forms. Newly added properties could be required to pay full-risk rates, and low-income property owners could be offered some type of assistance to help them pay their premiums. Outreach efforts would need to include areas with low and moderate flood risks to help ensure that the risk pool remained diversified. For example, FEMA’s goals for NFIP include increasing penetration in low-risk flood zones, among homeowners without federally related mortgages in all zones, and in geographic areas with repetitive losses and low penetration rates. Currently, the private market provides only a limited amount of flood insurance coverage. In 2009, we reported that while aggregate information was not available on the precise size of the private flood insurance market, it was considered relatively small. The 2006 RAND study estimated that 180,000 to 260,000 insurance policies for both primary and gap coverage were in effect. 
We also reported that private flood insurance policies are generally purchased in conjunction with NFIP policies, with the NFIP policy covering the deductible on the private policy. Finally, we reported that NFIP premiums were generally less expensive than premiums for private flood insurance for similar coverage. For example, one insurer told us that for a specified amount of coverage for flood damage to a structure, an NFIP policy might be as low as $500, while a private policy might be as high as $900. Similar coverage for flood damage to contents might be $350 for an NFIP policy but around $600 for a private policy. Given the limited nature of private sector participation, encouraging private market participation could transfer some of the federal government’s risk exposure to the private markets and away from taxpayers. However, identifying ways to achieve that end has generally been elusive. In 2007, we evaluated the trade-offs of having a mandatory all-perils policy that would include flood risks. Such a policy would, for example, alleviate uncertainty about the types of natural events homeowners insurance covered, such as the uncertainty that emerged following Hurricane Katrina. However, at the time the industry was generally opposed to an all-perils policy because of the large potential losses a mandatory policy would entail. Increased private market participation is also not without potential disadvantages. First, if the private markets provided coverage for only the lowest-risk properties currently in NFIP, the percentage of high-risk properties in the program would increase. This scenario could result in higher rates as the amount needed to cover the full risk of flooding increased. Without higher rates, however, the federal government would face further exposure to loss. Second, private insurers, who are able to charge according to risk, would likely charge higher rates than NFIP has been charging unless they received support from the federal government. 
As we have seen, such increases could create affordability concerns for low-income policyholders. Strategies to help mitigate these disadvantages could include requiring private market coverage for all property owners—not just those in high-risk areas—and, as described earlier, providing targeted assistance to help low-income property owners pay for their flood coverage. In addition, Congress could provide options to private insurers to help lower the cost of such coverage, including tax incentives or federal reinsurance. As Congress weighs NFIP’s various financial challenges in its efforts to reform the program, it must also consider a number of operational and management issues that may limit efforts to meet program goals and impair NFIP’s stability. For the past 35 years, we have highlighted challenges with NFIP and its administration and operations. For example, most recently we have identified a number of issues impairing the program’s effectiveness in areas that include the reasonableness of payments to Write-Your-Own (WYO) insurers, the adequacy of financial controls over the WYO program, and the adequacy of oversight of non-WYO contractors. In our report, which reviews FEMA’s management of NFIP, we addressed, among other things, (1) the extent to which FEMA’s management practices affect the agency’s ability to meet NFIP’s mission and (2) lessons to be learned from the cancellation of FEMA’s most recent attempt to modernize NFIP’s flood insurance policy and claims processing system. We found that FEMA faces significant management challenges in areas that affect its administration of NFIP. First, FEMA has not finalized strategic guidance and direction for NFIP and therefore lacks goals and objectives for the program and the necessary starting point for developing performance measures that would assess the program’s effectiveness. 
Second, FEMA faces a number of human capital challenges related to turnover, hiring, and tracking the many contractors that play a key role in NFIP. Further, FEMA lacks a plan that would help ensure consistent day-to-day operations when it deploys staff to respond to federal disasters. Third, collaboration between program and support offices that contribute to administering NFIP has at times been ineffective, leading to challenges in effectively carrying out some key functions, including information technology, acquisition, and financial management. Finally, FEMA does not have a comprehensive set of processes and systems to guide its operations. Specifically, it lacks an updated records management policy, an electronic document management system, procedures to effectively manage unliquidated obligations, and fully developed and implemented documentation of its business processes. FEMA has begun taking steps to improve its acquisition management and document some of its business processes, but the results of its efforts remain to be seen. Unless it takes further steps to address these management challenges, FEMA will be limited in its ability to manage NFIP’s operations or better ensure program effectiveness. In our report we made eight recommendations addressing these issues. DHS agreed with these recommendations, and FEMA has begun taking steps to address some of them. For example, FEMA has begun developing a strategy for the administration of its mitigation and insurance programs, conducting a workforce assessment, holding outreach sessions between program and support offices to improve collaboration, and developing training and certification programs for acquisition management. We also found that the canceled development of the Next Generation Flood Insurance Management System (NextGen), FEMA’s latest attempt to modernize NFIP’s insurance policy and claims management system, illustrated weaknesses in NFIP’s acquisition management activities. 
Despite investing roughly 7 years and $40 million, FEMA ultimately canceled the effort in November 2009 because it failed to meet user expectations, forcing the agency to continue relying on a 30-year-old system that does not fully support NFIP’s mission needs and is costly to maintain and operate. A number of acquisition management weaknesses led to NextGen’s failure and cancellation. Specifically, business and functional requirements were not sufficiently defined; system users did not actively participate in determining the requirements for the development of system prototypes or in pilot testing activities; test planning and project risks were not adequately managed; and project management office staffing was limited. As FEMA begins a new effort to modernize the existing legacy system, it plans to apply lessons learned from its NextGen experience. While FEMA has begun implementing some changes to its acquisition management practices, it remains to be seen whether they will help FEMA avoid some of the problems that led to NextGen’s failure. Unless it develops appropriate acquisition processes and applies lessons learned from the NextGen failure, FEMA will be unable to develop an effective policy and claims processing system for NFIP. DHS agreed with our recommendations that DHS provide regular oversight of FEMA’s next attempt to modernize the system and help ensure FEMA applies lessons learned. Congressional action is needed to increase the financial stability of NFIP and limit taxpayer exposure. GAO previously identified four public policy goals that can provide a framework for crafting or evaluating proposals to reform NFIP. First, any congressional reform effort should include measures for charging premium rates that accurately reflect the risk of loss, including catastrophic losses. 
Meeting this goal would require changing the law governing NFIP to reduce or eliminate subsidized rates, limits on annual rate increases, and grandfathered or other rates that do not fully reflect the risk of loss. In taking such a step, Congress may choose to provide assistance to certain property owners, and should consider providing appropriate authorization and funding of such incentives to ensure transparency. Second, because of the potentially high costs of individual and community mitigation efforts, which can reduce the frequency and extent of flood damage, Congress may need to provide funding or access to funds for such efforts and consider ways to improve the efficiency of existing mitigation programs. Moreover, if Congress wished to allow NFIP to deny coverage to owners of properties with repetitive losses who refuse mitigation efforts, it would need to give FEMA appropriate authority. Third, Congress could encourage FEMA to continue to increase participation in the program by expanding targeted outreach efforts and limiting postdisaster assistance to those individuals who choose not to mitigate in moderate- and high-risk areas. And finally, to address the goal of encouraging private sector participation, Congress could encourage FEMA to explore private sector alternatives to providing flood insurance or for sharing insurance risks, provided such efforts do not increase taxpayers’ exposure. For its part, FEMA needs to take action to address a number of fundamental operational and managerial issues that also threaten the stability of NFIP and have contributed to its remaining on GAO’s high-risk list. These include improving its strategic planning, human capital planning, intra-agency collaboration, records management, acquisition management, and information technology. While FEMA continues to make some progress in some areas, fully addressing these issues is vital to its long-term operational efficiency and financial stability. 
Chairman Johnson and Ranking Member Shelby, this concludes my prepared statement. I would be pleased to respond to any of the questions you or other members of the Committee may have at this time.

Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. For further information about this testimony, please contact Orice Williams Brown at (202) 512-8678 or williamso@gao.gov. This statement was prepared under the direction of Patrick Ward. Key contributors were Christopher Forys, Nima Patel Edwards, Emily Chalmers, and Tania Calhoun.

FEMA: Action Needed to Improve Administration of the National Flood Insurance Program. GAO-11-297. Washington, D.C.: June 9, 2011.
Flood Insurance: Public Policy Goals Provide A Framework for Reform. GAO-11-429T. Washington, D.C.: March 11, 2011.
FEMA Flood Maps: Some Standards and Processes in Place to Promote Map Accuracy and Outreach, but Opportunities Exist to Address Implementation Challenges. GAO-11-17. Washington, D.C.: December 2, 2010.
National Flood Insurance Program: Continued Actions Needed to Address Financial and Operational Issues. GAO-10-1063T. Washington, D.C.: September 22, 2010.
National Flood Insurance Program: Continued Actions Needed to Address Financial and Operational Issues. GAO-10-631T. Washington, D.C.: April 21, 2010.
Financial Management: Improvements Needed in National Flood Insurance Program’s Financial Controls and Oversight. GAO-10-66. Washington, D.C.: December 22, 2009.
Flood Insurance: Opportunities Exist to Improve Oversight of the WYO Program. GAO-09-455. Washington, D.C.: August 21, 2009.
Information on Proposed Changes to the National Flood Insurance Program. GAO-09-420R. Washington, D.C.: February 27, 2009.
High-Risk Series: An Update. GAO-09-271. Washington, D.C.: January 2009.
Flood Insurance: Options for Addressing the Financial Impact of Subsidized Premium Rates on the National Flood Insurance Program. GAO-09-20. Washington, D.C.: November 14, 2008.
Flood Insurance: FEMA’s Rate-Setting Process Warrants Attention. GAO-09-12. Washington, D.C.: October 31, 2008.
National Flood Insurance Program: Financial Challenges Underscore Need for Improved Oversight of Mitigation Programs and Key Contracts. GAO-08-437. Washington, D.C.: June 16, 2008.
Natural Catastrophe Insurance: Analysis of a Proposed Combined Federal Flood and Wind Insurance Program. GAO-08-504. Washington, D.C.: April 25, 2008.
National Flood Insurance Program: Greater Transparency and Oversight of Wind and Flood Damage Determinations Are Needed. GAO-08-28. Washington, D.C.: December 28, 2007.
Natural Disasters: Public Policy Options for Changing the Federal Role in Natural Catastrophe Insurance. GAO-08-7. Washington, D.C.: November 26, 2007.
Federal Emergency Management Agency: Ongoing Challenges Facing the National Flood Insurance Program. GAO-08-118T. Washington, D.C.: October 2, 2007.
National Flood Insurance Program: FEMA’s Management and Oversight of Payments for Insurance Company Services Should Be Improved. GAO-07-1078. Washington, D.C.: September 5, 2007.
National Flood Insurance Program: Preliminary Views on FEMA’s Ability to Ensure Accurate Payments on Hurricane-Damaged Properties. GAO-07-991T. Washington, D.C.: June 12, 2007.
Coastal Barrier Resources System: Status of Development That Has Occurred and Financial Assistance Provided by Federal Agencies. GAO-07-356. Washington, D.C.: March 19, 2007.
Budget Issues: FEMA Needs Adequate Data, Plans, and Systems to Effectively Manage Resources for Day-to-Day Operations. GAO-07-139. Washington, D.C.: January 19, 2007.
National Flood Insurance Program: New Processes Aided Hurricane Katrina Claims Handling, but FEMA’s Oversight Should Be Improved. GAO-07-169. Washington, D.C.: December 15, 2006.
GAO’s High-Risk Program. GAO-06-497T. Washington, D.C.: March 15, 2006.
Federal Emergency Management Agency: Challenges for the National Flood Insurance Program. GAO-06-335T. Washington, D.C.: January 25, 2006.
Federal Emergency Management Agency: Improvements Needed to Enhance Oversight and Management of the National Flood Insurance Program. GAO-06-119. Washington, D.C.: October 18, 2005.
Determining Performance and Accountability Challenges and High Risks. GAO-01-159SP. Washington, D.C.: November 2000.
Standards for Internal Control in the Federal Government. GAO/AIMD-00-21.3.1. Washington, D.C.: November 1999.
Budget Issues: Budgeting for Federal Insurance Programs. GAO/T-AIMD-98-147. Washington, D.C.: April 23, 1998.
Budget Issues: Budgeting for Federal Insurance Programs. GAO/AIMD-97-16. Washington, D.C.: September 30, 1997.
National Flood Insurance Program: Major Changes Needed If It Is To Operate Without A Federal Subsidy. GAO/RCED-83-53. Washington, D.C.: January 3, 1983.
Formidable Administrative Problems Challenge Achieving National Flood Insurance Program Objectives. RED-76-94. Washington, D.C.: April 22, 1976.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The National Flood Insurance Program (NFIP) has been on GAO's high-risk list since 2006, when the program had to borrow from the U.S. Treasury to cover losses from the 2005 hurricanes. The outstanding debt is $17.8 billion as of June 2011. This sizeable debt, combined with operational and management challenges that GAO has identified at the Federal Emergency Management Agency (FEMA), which administers NFIP, has kept the program on the high-risk list. NFIP's need to borrow to cover claims in years of catastrophic flooding has raised concerns about the program's long-term financial solvency. This testimony (1) discusses ways to place NFIP on a sounder financial footing in light of public policy goals for federal involvement in natural catastrophe insurance and (2) highlights operational and management challenges at FEMA that affect the program. In preparing this statement, GAO relied on its past work on NFIP, including a June 2011 report on FEMA's management of NFIP, which focused on its planning, policies, processes, and systems. The management review included areas such as strategic and human capital planning, acquisition management, and intra-agency collaboration. Congressional action is needed to increase the financial stability of NFIP and limit taxpayer exposure. GAO previously identified four public policy goals that can provide a framework for crafting or evaluating proposals to reform NFIP. These goals are: (1) charging premium rates that fully reflect risks, (2) limiting costs to taxpayers before and after a disaster, (3) encouraging broad participation in the program, and (4) encouraging private markets to provide flood insurance. Successfully reforming NFIP would require trade-offs among these often competing goals. For example, nearly one in four policyholders does not pay full-risk rates, and many pay a lower subsidized or "grandfathered" rate. 
Reducing or eliminating less than full-risk rates would decrease costs to taxpayers but substantially increase costs for many policyholders, some of whom might leave the program, potentially increasing postdisaster federal assistance. However, these trade-offs could be mitigated by providing assistance only to those who need it, limiting postdisaster assistance for flooding, and phasing in premium rates that fully reflect risks. Increasing mitigation efforts to reduce the probability and severity of flood damage would also reduce flood claims in the long term but would have significant up-front costs that might require federal assistance. One way to address this trade-off would be to better ensure that current mitigation programs are effective and efficient. Encouraging broad participation in the program could be achieved by expanding mandatory purchase requirements or increasing targeted outreach to help diversify the risk pool. Such efforts could help keep rates relatively low and reduce NFIP's exposure but would have to be effectively managed to help ensure that outreach efforts are broadly based. Encouraging private markets is the most difficult challenge because virtually no private market for flood insurance exists for most residential and commercial properties. FEMA's ongoing efforts to explore alternative structures may provide ideas that could be evaluated and considered. Several operational and management issues also limit FEMA's progress in addressing NFIP's challenges, and continued action by FEMA will be needed to help ensure the stability of the program. For example, in numerous past reports, GAO identified weaknesses in areas that include financial controls and oversight of private insurers and contractors, and made many recommendations to address them. 
While FEMA has made progress in addressing some areas, GAO's June 2011 report identified a number of management challenges facing the program, including strategic and human capital planning, records management, collaboration among offices, and financial and acquisition management. In that report, GAO also made a number of recommendations to address these challenges. FEMA agreed with the recommendations and discussed the steps being taken to address some of them. GAO has made numerous recommendations aimed at improving financial controls, oversight of private insurers and contractors, and FEMA's management of NFIP. DHS generally agreed with these recommendations.
The Medicare Part D benefit is provided through private organizations that offer one or more drug plans with different levels of premiums, deductibles, and cost sharing. Plan sponsors must offer the standard Part D benefit established under MMA or an actuarially equivalent benefit. The standard benefit includes an annual deductible, coverage up to a level of spending, a coverage gap—the period when beneficiaries pay all of the costs of their drugs—and catastrophic coverage above a specified out-of-pocket limit. Sponsors may also offer enhanced benefit plans that provide a lower deductible and coverage in the coverage gap in exchange for higher premiums. Certain low-income beneficiaries are eligible for subsidies to defray most of their out-of-pocket costs. Part D sponsors offer drug coverage either through stand-alone PDPs for those in traditional fee-for-service Medicare, or through Medicare Advantage prescription drug (MA-PD) plans for beneficiaries enrolled in Medicare’s managed care program. As of September 2007, CMS had contracts with 101 PDPs and 461 MA-PDs. The majority of Part D enrollees, about 71 percent, are in PDPs. PDP enrollment across contracts varies widely, ranging from fewer than 20 enrollees to more than 3.3 million enrollees, and is highly concentrated—the four largest contracts accounted for about 53 percent of total PDP enrollment in September 2007. For the drugs included on their formularies, Part D sponsors decide which drugs will have utilization management restrictions and which type of restriction they will apply. Utilization management restrictions may include prior authorization, quantity limits, and step therapy requirements. Sponsors may apply utilization management restrictions to prevent the overuse of expensive medications by requiring that lower-tier drugs be tried first. 
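The standard-benefit phases described above can be sketched as a simple piecewise calculation of what a beneficiary pays at a given level of total drug spending. The dollar thresholds and cost-sharing fractions below are illustrative placeholders, not figures from this statement (the actual parameters are set under MMA and updated annually):

```python
# Illustrative sketch of the standard Part D benefit phases described in
# the text: deductible, initial coverage, coverage gap (beneficiary pays
# all costs), and catastrophic coverage above an out-of-pocket limit.
# All parameter values are hypothetical placeholders, NOT program figures.

def beneficiary_out_of_pocket(total_drug_spend: float,
                              deductible: float = 300.0,
                              initial_limit: float = 2_800.0,
                              coinsurance: float = 0.25,
                              oop_threshold: float = 4_500.0,
                              catastrophic_share: float = 0.05) -> float:
    """Return what the beneficiary pays for a given total drug spend."""
    paid = min(total_drug_spend, deductible)           # phase 1: deductible
    if total_drug_spend <= deductible:
        return paid
    # Phase 2: initial coverage, beneficiary pays a coinsurance share.
    paid += coinsurance * (min(total_drug_spend, initial_limit) - deductible)
    if total_drug_spend <= initial_limit:
        return paid
    # Phase 3: coverage gap, beneficiary pays 100 percent of costs until
    # out-of-pocket spending reaches the threshold.
    gap_needed = max(oop_threshold - paid, 0.0)
    gap_spend = min(total_drug_spend - initial_limit, gap_needed)
    paid += gap_spend
    # Phase 4: catastrophic coverage above the out-of-pocket limit.
    if total_drug_spend - initial_limit > gap_needed:
        paid += catastrophic_share * (total_drug_spend - initial_limit - gap_needed)
    return paid

print(beneficiary_out_of_pocket(1_000))   # deductible plus 25% of the remainder
```

The key structural point the sketch captures is that the gap ends when the beneficiary's cumulative out-of-pocket spending, not total drug spending, reaches the threshold.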
The restrictions may also serve to ensure that proper dosages are dispensed, to protect against adverse drug interactions, and to control the use of medications with potential for abuse. Each sponsor has discretion to decide under which circumstances it will apply utilization restrictions. Research conducted for The Kaiser Family Foundation has shown that sponsors’ use of formularies and utilization management restrictions varies significantly. The study reported that the 2007 formularies of the 10 largest PDPs differed in their coverage of a sample of commonly used drugs and their use of utilization management restrictions on those drugs. Four PDPs included on their formulary all of the 152 sampled drugs commonly used by Medicare beneficiaries. Among the remaining 6 PDPs, 1 covered between 90 and 100 percent, and 5 covered between 70 and 80 percent of the sampled drugs. The authors also found that the 10 PDPs placed prior authorization requirements on between 3 and 14 of the 152 sampled drugs. While 3 of the 10 PDPs did not have a step therapy requirement on any of the 152 drugs, 2 PDPs had the requirement on 8 of the drugs. The number of the 152 sampled drugs with quantity limits ranged from 3 to 62. Beneficiaries can use the coverage determination and appeals processes to challenge a utilization management restriction on a drug on the sponsor’s formulary or to request coverage for a Part D drug that is not on the sponsor’s formulary. Table 1 describes types of requests. Study sponsors have designed their coverage determination processes to allow for prompt decision making within CMS-required time frames. They obtain patient information needed to make their decisions using drug- specific coverage determination request forms and enter this information into a computer for analysis of whether coverage criteria have been met. When coverage requests cannot be approved by technical staff, they are decided by clinical staff. 
Sponsors apply drug-specific coverage criteria that incorporate the requirements established by MMA and CMS as well as factors that they have discretion to apply, such as evidence of trial and failure of lower-cost drugs. In the sample of coverage determination case files we reviewed at the seven study sponsors, coverage of the requested drug was approved in approximately two-thirds of the cases. The sponsors we studied developed coverage determination processes designed to produce decisions within the CMS-required time frames—72 hours for standard requests and 24 hours for expedited requests. To collect the patient information needed to make coverage determination decisions, study sponsors generally rely on drug-specific request forms. These forms typically ask a series of questions based on the sponsor’s established coverage criteria for a given drug. Prescribing physicians are asked to use these forms to submit clinical information about a beneficiary that generally includes the diagnosis associated with the requested drug, and may include the beneficiary’s other medical conditions and drug history. For instance, to process a coverage determination request for the osteoporosis drug Forteo, a sponsor may ask whether the beneficiary has a diagnosis of osteoporosis, has multiple risk factors for fractures, and has tried and failed other specific osteoporosis therapies. Some study sponsors had dozens of different forms for drugs in different classes, with a varying number of questions. For example, one sponsor asked 5 questions for the sleep medications Ambien and Lunesta and 23 questions for the injectable drug Pegasys, used to treat hepatitis. If a physician makes a coverage determination request over the phone, sponsor staff have on-line access to the drug-specific questions they need to ask. 
With the information submitted by the prescribing physician, study sponsors use computer algorithms—a series of questions with yes/no answers—to make expeditious, consistent decisions. Technical staff, such as pharmacy technicians or call center representatives, enter the patient information into the computer system. The algorithms are used to assess the information to determine whether the beneficiary meets the sponsor’s coverage criteria for the specific drug in question. This process generates rapid, consistent decisions if sponsors receive sufficient information from prescribing physicians. When the technical staff cannot approve the drug, coverage determination requests are forwarded for a decision by clinical staff with more expertise, such as staff pharmacists. One sponsor reported that, on average, a standard coverage determination involving prior authorization takes about 40 minutes after the prescribing physician provides the needed information. However, the pressure to make a coverage determination within the CMS-mandated time frames increases the likelihood that sponsors will deny requests when complete information is not at hand or cannot be obtained quickly. Two study sponsors told us that if they were not successful in getting information they requested, they made decisions based on the information they had at the time. For example, if physicians are asked to provide a patient’s medical records as part of their request but do not provide that information quickly, the sponsor may deny the request in order to meet the required time frame. Among the coverage determination case files we reviewed at the study sponsors, the sponsor requested additional information from the physician in about 13 percent of the cases and about 30 percent of the denials were for lack of requested medical information. 
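The yes/no algorithmic screening sponsors described can be illustrated with a minimal sketch. The drug name, questions, and criteria below are hypothetical illustrations, not any sponsor's actual coverage rules:

```python
# Minimal sketch of a drug-specific yes/no screening algorithm.
# All names and criteria here are hypothetical, not any sponsor's rules.

# Each criterion is a (question, required_answer) pair; all must match
# for the request entered by technical staff to be auto-approved.
CRITERIA = {
    "example_osteoporosis_drug": [
        ("has_osteoporosis_diagnosis", True),
        ("tried_and_failed_alternatives", True),
    ],
}

def screen_request(drug, answers):
    """Approve if every criterion is met; otherwise refer the request
    to clinical staff (e.g., a staff pharmacist) for review."""
    for question, required in CRITERIA[drug]:
        if answers.get(question) != required:
            return "refer_to_clinical_staff"
    return "approve"

# A pharmacy technician enters the prescriber's answers:
print(screen_request("example_osteoporosis_drug",
                     {"has_osteoporosis_diagnosis": True,
                      "tried_and_failed_alternatives": True}))  # approve
```

Note that, as the report describes, a "no" answer does not deny the request outright; it routes the case to clinical staff for judgment.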
One sponsor noted that there would probably be fewer denials at the coverage determination stage if sponsors had more time to acquire needed information. Sponsors apply a range of coverage criteria to evaluate requests for drugs with restrictions. Their criteria are used, in part, to determine whether a requested drug can be covered under Part D program rules set by MMA or CMS. Sponsors consider a number of factors in reviewing a request, including the following: Should the drug be covered under another part of the Medicare program? There are an estimated 6,000 unique drug products that potentially could be covered under either Part B or Part D of the Medicare program. Which part of the Medicare program is the appropriate payer depends on factors such as the patient’s diagnosis, when the beneficiary is taking the drug, or the setting in which the drug is being administered. For instance, immunosuppressive drugs suppress the body’s immune response and are used to treat autoimmune diseases—diseases in which the body attacks its own tissues—and to prevent rejection of a transplanted organ. Immunosuppressives are covered by Part B when the physician prescribes them after a Medicare-covered organ transplant and by Part D for all other outpatient uses. Is the requested drug in a Part D-excluded drug class? Although sponsors generally cannot cover drugs in 1 of 10 statutorily excluded drug categories, beneficiaries or prescribing physicians may request a coverage determination for a drug that is in an excluded drug category. For such coverage determinations, the physician must show that the drug is prescribed for a purpose that is not excluded under the law or that it has been mistakenly classified by the sponsor as excluded. For instance, medications for coughs and colds are generally excluded from Part D. 
However, CMS has issued guidance to plan sponsors that cough and cold medications are eligible to meet the definition of a Part D drug in clinically relevant situations. For example, if a physician prescribes a cough suppressant to a beneficiary because the beneficiary has osteoporosis and may break a bone if the cough is not controlled, then the cough suppressant would be considered a Part D-covered drug. Is the requested drug medically necessary? Part D sponsors must approve coverage when the requested drug at the requested dosage is medically necessary. In order to show medical necessity, the prescribing physician must provide a statement that the requested drug is medically necessary because (1) all of the covered Part D drugs on the sponsor’s formulary for treatment of the same condition would not be as effective for the beneficiary, would have adverse effects for the beneficiary, or both; (2) the prescription drug alternatives on the formulary have been ineffective in the past, are likely to be ineffective, or are likely to cause an adverse reaction for the beneficiary; or (3) the number of doses available under a quantity limit for a requested drug has been ineffective or is likely to be ineffective. In addition, sponsors are required to approve a tiering exception if they agree with the prescribing physician’s statement that treatment of the beneficiary’s condition using the preferred alternative drug would not be as effective for the beneficiary as the requested drug, would have adverse effects for the beneficiary, or both. Is the requested drug being prescribed for a medically accepted indication? Under Medicare Part D, a drug is considered to be prescribed for a medically accepted indication if the drug is FDA-approved for that use. Any off-label use—one not approved by FDA—is considered medically accepted only if it is supported by a citation in one of the three designated drug reference guides. 
Beneficiary advocates have argued that the coverage restrictions on those off-label drug uses not listed in the designated drug reference guides cause beneficiaries to be denied coverage for needed drugs, some of which beneficiaries had been previously taking successfully. For instance, a beneficiary without cancer may have a condition which causes severe pain. After trying several medications, the beneficiary may have less pain with the use of Actiq, a medication approved only for breakthrough pain in cancer patients. Under Part D, the beneficiary would be denied coverage for the drug, even if the beneficiary’s physician stated that the medication was medically necessary, because the drug was not prescribed for a medically accepted indication, and this use is not listed in one of the three drug reference guides. Beyond ensuring compliance with MMA and CMS coverage rules, sponsors have discretion to develop their own drug-specific coverage criteria. Sponsors in our study also considered the following factors. Has the beneficiary tried and failed on a generic or preferred alternative drug? To reduce costs, sponsors may require beneficiaries to try and fail on generic or preferred alternative drugs before approving coverage for higher-cost drugs. Sponsors told us, and CMS has affirmed, that beneficiaries generally can switch to a therapeutically equivalent drug without disruption to their care. Therefore, even if a beneficiary has been stable on a particular drug for a period of time, sponsors may still require the beneficiary to switch to a generic or preferred alternative drug. Has the physician conducted specific tests to confirm the beneficiary’s diagnosis or condition? Study sponsors sometimes also ask for information from specified tests or studies that document a patient’s diagnosis or condition. 
For instance, one sponsor told us that it requires genotype tests for hepatitis drugs because the length of time a patient should be on the drug is determined by the genotype. Is the beneficiary already stable on the requested drug? Sponsors may consider whether the beneficiary is stable on the requested drug when deciding whether to approve or reapprove coverage. Does the beneficiary have other medical conditions or take other medications that may contraindicate the use of the requested drug? For instance, one sponsor’s criteria for the drug Actiq—used to treat breakthrough cancer pain—stipulated that the enrollee must not have severe asthma or chronic obstructive pulmonary disease, which are contraindications to Actiq. This same sponsor’s criteria for the antidepressant Emsam noted that the medication should not be approved if the enrollee is taking other types of antidepressants, such as monoamine oxidase inhibitors or tricyclic antidepressants. The duration of the approval period depends on the drug requested and on plan policies. In general, sponsors told us they approve coverage of a requested drug for either the duration of the year or a 12-month period. Some sponsors also approve requests for as long as the beneficiary remains enrolled in the plan in cases where the drug treats an illness that can last for the duration of a person’s life (such as multiple sclerosis). All sponsors said that certain drugs, such as those with a specified length of treatment for safety reasons, may be approved for shorter time periods. For example, some injectable drugs are approved for 24 weeks. If coverage criteria are not met, study sponsors’ denial letters generally included the reason for the decision. For instance, denial notices may state that the requested drug was not covered because the preferred alternative drug must be tried first. 
Some, but not all, sponsors that we visited sent notification letters to prescribing physicians that identified which preferred drug should be tried. The IRE told us that some sponsor denials are vague. For instance, sponsors may not do a good job of explaining which specific requirements have not been met. Study sponsors approved about 67 percent of the coverage determination requests among the October 2006 requests that we reviewed. Approval rates varied among sponsors, ranging from 57 percent to 76 percent. We also found that coverage determinations in MA-PD plans were more likely to be approved than coverage determinations in PDPs; the approval rate for MA-PD plans was 72 percent, compared to 63 percent for PDPs. Sponsors in our study approved standard requests more often than expedited requests. The approval rates for standard and expedited requests were 67 percent and 53 percent, respectively. We found that nearly all requests for coverage determinations were made by physicians on behalf of their patients. Approximately 94 percent of the coverage determinations in our case file review were requested by a physician or a physician’s office staff. At the coverage determination stage, we also found that only a small proportion of requests were expedited. Of the coverage determination case files we reviewed, just 4 percent of the requests were expedited. We found that the most commonly requested drug class and category combinations were, in order of decreasing frequency, (1) blood modifier agent/hematopoietic, (2) endocrine-metabolic agent/antidiabetic, (3) central nervous system agent/analgesic, (4) dermatological agent/antifungal, (5) gastrointestinal agent/antiulcer, (6) anti-infective agent/antifungal, and (7) musculoskeletal agent/antirheumatic. These seven drug class and category combinations accounted for about half of the requested drugs in the 421 cases we reviewed. 
At the individual drug level, the five most requested drugs—collectively accounting for about one-quarter of our sampled coverage determination requests—were Procrit, Lamisil, Byetta, Celebrex, and Omeprazole. The appeals process allows for individuals not involved in the previous case review to make better-informed decisions by considering additional supporting evidence. In making redeterminations—the first level of appeal—sponsor staff evaluate any corrected or augmented evidence to see if coverage criteria have been met. In conducting reconsiderations— the second level of appeal—IRE officials consider the information the sponsor reviewed, along with any additional support that may be available. In many cases, appeals result in new interpretations of whether the requested drug should be covered. CMS appeals data show that, from July 2006 through December 2006, the median approval rate across all Part D sponsors was 40 percent; from July 2006 through June 2007, appeals to the IRE received full or partial approval in 28 percent of cases. We found that, for some standard appeals, missing AOR documentation contributed to delays in study sponsor redetermination decisions and dismissals of IRE reconsideration cases. Some study sponsors have developed “workarounds” to eliminate the need for a completed AOR form. Appeals processes at both the study sponsors’ level and the IRE typically involve (1) reviewing more information than was available for the previous decision level and (2) different decision makers. In conducting redeterminations—the first level of appeal—sponsors typically receive corrected or augmented patient information that was not submitted within the allotted time frame for the coverage determination. 
For example, prescribing physicians may not have identified the beneficiary’s conditions with sufficient specificity or included a complete drug use history when making the coverage determination request; for redeterminations, physicians often provide new information on the reason for the requested drug and a list of drugs the beneficiary had previously tried but were found to be ineffective or not well tolerated. Physicians may forward laboratory test results or chart notes that sponsors had requested previously. In addition, our reviews of sponsors’ redetermination case files showed that physicians revise the statements they had provided originally to address issues raised in the sponsors’ coverage denial letters. To determine whether the sponsor’s drug-specific coverage criteria have been met, study sponsor staff reassess the submitted information, along with any additional support not previously considered. For redeterminations that involve requests for off-label uses of drugs, study sponsors said they make an effort to look for citations in one of the three Part D-designated drug reference guides to see if one of them supports use of the drug for the indication for which it was prescribed. In reviewing requests for dosage limit exceptions, in addition to considering a beneficiary’s medical record, study sponsors may also examine medical research literature for evidence not included in the reference guides. In addition, sponsors may discuss a case directly with the prescribing physician. We found that study sponsors contacted prescribing physicians to obtain additional information in 31 percent of the redetermination case files we reviewed. CMS requires that redetermination decisions be made by individuals not previously involved in reviewing the drug request. Study sponsors’ redetermination decision staff making clinical decisions consist largely of pharmacists or staff medical directors. 
If the staff pharmacist does not approve the request, a medical director makes the final decision. CMS additionally requires that decisions concerning the medical necessity of the requested drug be made by a physician with expertise in the field of medicine appropriate to the condition being treated. Some of the study sponsors contract with external physicians or utilization review companies for this function. Along with the information in the sponsor case file, IRE staff review any new supporting information they receive or solicit from the prescribing physician as well as relevant medical literature. In making a reconsideration decision—the second level of appeal—the IRE is likely to have more information than did the sponsor at the first level of appeal. It not only has information from the sponsor’s case file, but also information in the physician’s letter or beneficiary correspondence that may be submitted with the reconsideration request. In addition, IRE staff told us that they contact the physician or beneficiary to obtain specific details about the beneficiary’s health or to clarify the information submitted, such as adverse effects the beneficiary has experienced or contraindications to the preferred formulary drugs. During its review, the IRE may also perform additional research in the drug reference guides on the reason the physician is prescribing a particular drug or dosage. For instance, IRE staff may be successful in researching the Part D-designated drug reference guides for a specific off-label drug use that a sponsor had not identified. As Medicare’s independent external appeals contractor, the IRE employs medical professionals subject to conflict-of-interest prohibitions, which bar them from having certain relationships with any health insurance utilization review company, provider network, or drug supply company. The IRE staff conducting most reconsiderations are predominantly physicians credentialed in various medical specialties. 
For example, according to IRE officials, appeals cases involving opioids are handled by pain management specialists because these cases need a specialty review. IRE officials also said that, when necessary, the IRE contracts with external specialists to review cases. Consideration of new evidence during the appeals process often leads to decisions that reverse the sponsors’ decisions. At the first level of appeal, CMS appeals data show that, from July 2006 through December 2006, the median approval rate across all Part D sponsors was 40 percent. Across Part D sponsors, approval rates ranged from 0 percent to 100 percent for all appeals during that period. PDP sponsors were somewhat more likely to approve coverage; the median rate of approvals for PDPs was about 45 percent, compared to about 38 percent for MA-PDs. At the second level of appeal, IRE appeals data show full or partial coverage approvals of the requested drug in about a quarter of the 11,679 reconsideration cases decided from July 2006 through June 2007. IRE data for this period show that the IRE either fully or partially approved coverage in 28 percent of appeals and denied coverage in 36 percent of appeals. A significant proportion of IRE cases, 34 percent, were dismissed for various reasons, such as the lack of AOR documentation. (See fig. 1.) The 11,679 cases reviewed by the IRE addressed a variety of issues. From July 2006 through June 2007, about one-third of IRE cases concerned a drug utilization restriction, such as a prior authorization requirement or quantity limit. Another 33 percent of IRE cases were requests for a drug not covered under Part D, such as a drug in one of the 10 Part D-excluded categories. Twenty-eight percent of cases were requests for Part D drugs not on the sponsor’s formulary. The remaining 5 percent of IRE cases involved issues such as requests to pay a lower cost-sharing level and reimbursement for drugs provided outside of the sponsor’s pharmacy network. 
IRE approval rates for Part D appeals were highest for disputes involving drug utilization restrictions and lowest for cases involving Part D-excluded drugs. The IRE fully or partially approved coverage in 39 percent of the appeals concerning a drug utilization restriction, 30 percent of appeals involving nonformulary drugs, and 18 percent of appeals for coverage of a drug that sponsors denied as an excluded drug under Part D. (See fig. 2.) As part of the decision process, the IRE determines whether the sponsor has met its obligation for coverage under the Part D rules. IRE staff told us that during the first year of the program, some sponsors denied requests because they did not fully consider the beneficiary’s overriding medical need for the requested drug, as CMS requires. In contrast, at the IRE, the beneficiary’s medical condition is the determining factor when the sponsor’s coverage criteria cannot be met. For example, in one case, a sponsor denied a physician’s request for the drug Celebrex—a drug used to treat arthritis and other conditions—because the physician did not provide documentation of the beneficiary’s trial and failure of the sponsor’s formulary medications—Naproxen, Ibuprofen, or Ketoprofen. In this case, the sponsor did not cover the requested drug because its step therapy requirement had not been met. However, in reviewing the case, the IRE applied medical necessity criteria because the prescribing physician stated that use of the sponsor’s preferred formulary alternatives was contraindicated for treatment of the patient’s condition. As a result, the IRE overturned the sponsor’s decision, stating that an exception to the sponsor’s step therapy requirement was warranted and that the sponsor should provide coverage of the drug until the end of the plan year. 
At our study sponsors and at the IRE, we found evidence that decisions on standard appeals submitted by prescribing physicians—redeterminations and reconsiderations—had been delayed and sometimes dismissed due to missing AOR forms. Without written authorization from the beneficiary, sponsors and the IRE may begin collecting relevant documentation to support a physician-submitted standard request, but they cannot complete their review. Also, the time frame for making the decision does not begin until the completed AOR form is received. According to most study sponsors and the IRE, if they do not receive the signed AOR form within a reasonable amount of time—which ranges from about a week to about a month after receiving the request—they deny or dismiss the request. Of the cases we reviewed at the study sponsors, missing AOR forms generated processing delays in 7 percent of cases. These delays were typically about 14 days, but could stretch to 67 days. At the IRE, missing AOR forms caused dismissals of about 9 percent of appeals, which is about one in every five reconsideration cases that were dismissed. Data on the prevalence of delays in processing redetermination requests attributable to missing AOR forms mask the fact that some sponsors in our study have developed “workarounds” to eliminate the need for a completed AOR form. For example, one sponsor told us it treats all physician appeals as expedited, regardless of the priority level indicated by the physician. Our review of a sample of sponsors’ case files showed that 26 percent of redetermination requests were classified as expedited compared to 4 percent of the coverage determination case files we reviewed. Although expediting requests precludes the need for an AOR form, one sponsor stated that because these requests may not be truly urgent, it may not be in the beneficiary’s best interest for the appeal to be rushed. 
Expedited appeals allow less time—72 hours versus 7 days—for reviewers to consider the evidence at hand or to request additional information, which might affect the outcome of the appeal. For the case files we reviewed, the denial rate for expedited redeterminations was 73 percent compared with a denial rate of 67 percent for standard redeterminations. In another workaround, sponsors obviate the need to obtain two signatures—the beneficiary’s to appoint the physician to act as a representative and the physician’s to accept the appointment—by arranging for the redetermination request to be made by the beneficiary. For example, one sponsor reported contacting beneficiaries to ask whether they want to initiate the redetermination instead of their physicians, who had contacted the sponsor first. Our case file reviews showed that beneficiaries made requests in about 36 percent of redetermination cases compared to 2 percent of coverage determination cases. This approach was designed to identify those beneficiaries who wish to initiate an appeal rather than having their physician appeal on their behalf, thus reducing the need for the AOR paperwork. Most sponsors in our study and IRE officials reported that the requirement that prescribing physicians be formally appointed beneficiary representatives with a signed AOR form in order to initiate standard appeals is an administrative impediment. The only actions prescribing physicians without explicit authorization cannot take are initiating the appeal, opening discussions with a sponsor or the IRE about an ongoing appeal requested by the beneficiary, or receiving notices of adverse standard redeterminations or reconsiderations. In practical terms, prescribing physicians’ involvement in a standard appeal does not differ significantly whether they are appointed representatives or not. 
CMS has improved its efforts to inform beneficiaries about sponsors’ performance, but its oversight of sponsors is hindered by poorly defined reporting requirements. CMS publicly reports information on two performance metrics: the rate at which sponsors met required time frames for decision making and the rate at which the IRE concurs with sponsors’ redetermination decisions. In November 2007, for one of these metrics, CMS modified the way it informs beneficiaries by grading sponsors’ performance against absolute benchmarks, rather than relative rankings as it had done previously. To oversee sponsors’ processes, CMS requires that sponsors report data on several coverage determinations and appeals measures; however, the agency provided minimal guidance on the information to be included in each coverage determination measure. As a result, our study sponsors have reported data differently to CMS, hindering the agency’s ability to monitor sponsors’ activities adequately. In its audits of PDP sponsors, CMS found that most of the sponsors it audited were noncompliant with many of the coverage determination and appeals requirements. Using quarterly IRE data, CMS has developed two performance metrics to gauge how well sponsors’ coverage determination and appeals processes are operating. CMS calculates metrics on (1) the rate at which sponsors met required time frames for coverage determinations and redeterminations, as measured by the number of cases, per 10,000 beneficiaries, automatically forwarded to the IRE because of delays in sponsors’ decision making; and (2) the rate at which the IRE concurs with sponsors’ redetermination decisions, as measured by the percentage of cases in which the IRE upheld, or agreed with, sponsors’ coverage denials. CMS officials told us that the agency selected these two performance metrics, in part, because beneficiaries could interpret their meaning easily. 
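As a rough illustration of the two performance metrics just described, the arithmetic can be sketched as follows. The report does not spell out CMS's exact formulas, so the function definitions and the input figures below are assumptions:

```python
# Illustrative computations of the two CMS performance metrics.
# The exact CMS formulas are not given in the report; these are sketches
# with made-up input figures.

def auto_forward_rate(cases_auto_forwarded, enrollment):
    """Cases automatically forwarded to the IRE per 10,000 beneficiaries,
    a proxy for how often a sponsor missed required time frames."""
    return cases_auto_forwarded / enrollment * 10_000

def ire_concurrence_rate(upheld_denials, ire_decisions):
    """Percentage of appealed denials that the IRE upheld (agreed with)."""
    return upheld_denials / ire_decisions * 100

# A hypothetical sponsor with 250,000 enrollees and 12 auto-forwarded cases:
print(auto_forward_rate(12, 250_000))   # 0.48 per 10,000 beneficiaries
# The IRE upheld 45 of 100 of the sponsor's appealed denials:
print(ire_concurrence_rate(45, 100))    # 45.0 percent
```

On both measures, higher auto-forward rates and lower concurrence rates flag a sponsor as problematic, which is how the account managers use them.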
CMS includes the two metrics in information made available to the public on the Medicare Prescription Drug Plan Finder—a Web site designed to help beneficiaries compare drug plans. CMS account managers—staff responsible for overseeing sponsors’ performance—review sponsors’ scores on these performance metrics to monitor how well their coverage determination and appeals processes are operating. Sponsors with the highest rates of cases forwarded automatically to the IRE and the lowest percentages of cases in which the IRE agreed with their decisions are viewed as problematic. When a sponsor is identified as an outlier, the assigned account manager contacts the sponsor to discuss its coverage determination and appeal procedures and works with the sponsor to identify ways to improve its performance, such as conducting additional training sessions. Both the IRE and the sponsors in our study noted certain limitations in the data underlying each of these metrics. The number of automatically forwarded cases used for the timeliness metric may understate sponsors’ timeliness. According to IRE officials, some sponsors have forwarded cases to the IRE believing they had exceeded the required decision time frames when they had not. According to the officials, these sponsors automatically forwarded cases when they had not yet received a signed AOR form or a physician statement to support a coverage request. In such cases, the required time frames have not yet expired and the IRE returns the case to the sponsor for processing. Because these sponsors automatically forwarded cases to the IRE inappropriately, their rates of missed time frames are higher than they should be. Another limitation is that the performance metric on the IRE’s concurrence with sponsors’ decisions can be misleading. In discussing this measure with the sponsors in our study, one sponsor commented that a low rate of IRE agreement with their decisions implies, unfairly, that the sponsor’s decisions were flawed. 
Sponsors contend that the IRE often receives additional supporting evidence that results in an overturn, as we found by interviewing IRE officials. They state that had they received the same information within their time frame for processing the case, they might have approved the request. In their view, a low percentage of cases in which the IRE agrees with the sponsor’s decisions does not necessarily mean that the sponsor was not performing well. However, a CMS official asserted that sponsors are responsible for collecting all the information needed to adjudicate a request in the time allotted and are accountable if they do not obtain the same information available to the IRE. CMS uses these performance metrics to inform beneficiaries of sponsors’ performance and to encourage poor-performing sponsors to do better. In an effort to improve the information shared with beneficiaries for the 2008 open enrollment period, the agency changed the manner in which it calculates and displays these metrics—using a star designation system. For the 2007 open enrollment period, CMS used 2006 data from the IRE to rank order sponsors’ rates, classify sponsors into groups based on sponsors’ relative performance, and assign a star designation to each group. For example, CMS chose to assign three stars, indicating very good performance, to 90 percent of sponsors for each metric. The next 5 percent of sponsors were assigned two stars, indicating acceptable performance, while the remaining sponsors were given one star, indicating poor performance. By setting the star designations using relative comparisons rather than defined benchmarks for different levels of performance, CMS implied that those sponsors receiving the most stars had superior performance while those with fewer stars were not meeting a CMS-set standard. 
The clustering of 90 percent of sponsors in the three-star designation could have been misinterpreted by beneficiaries as identifying those sponsors with superior performance when, in fact, by definition, 90 percent of sponsors received three stars. Moreover, the performance of sponsors in the top category varied significantly. For example, among the 26 PDP sponsors receiving three stars, the percentage of cases where the IRE concurred with sponsors’ redetermination decisions ranged from 39 to 75 percent. At the same time, the remaining categories were quite compressed. A relatively small difference in rates could have placed a sponsor in the lowest category rather than the highest category. CMS designated an IRE concurrence rate of 39 percent as very good performance, a 36 percent rate as acceptable performance, and a 34 percent rate as poor performance. Recognizing the value of comparing sponsor performance against absolute standards (benchmarks), CMS changed its star designation system in time for the 2008 open enrollment period. For the performance metric on IRE concurrence, the agency now assigns sponsors to one of five star categories using fixed benchmarks rather than a percentile ranking. Table 2 shows how sponsors are assigned to different performance categories for the metric on IRE concurrence. For example, under the new designation system, only those sponsors with IRE concurrence rates better than 95 percent receive five stars, indicating excellent performance. Also, stars are only displayed for sponsors that have at least five appeals cases reviewed by the IRE. For the 2008 open enrollment period, CMS expanded its star designation system for the timeliness metric from three stars to five stars. Although it retained the relative ranking approach, CMS more evenly distributed the sponsors across the star categories. 
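Benchmark-based star assignment for the IRE concurrence metric can be sketched as follows. Only the five-star cutoff (rates better than 95 percent) and the minimum of five IRE-reviewed cases come from the text; the lower thresholds are hypothetical placeholders standing in for the Table 2 benchmarks:

```python
# Sketch of fixed-benchmark star assignment for the IRE concurrence
# metric. Only the >95% five-star cutoff and the five-case minimum are
# from the report; the other thresholds are hypothetical placeholders.

HYPOTHETICAL_CUTOFFS = [(95, 5), (85, 4), (70, 3), (50, 2), (0, 1)]

def assign_stars(concurrence_rate, ire_cases):
    """Return a star rating, or None when the sponsor has fewer than
    five appeals cases reviewed by the IRE (no stars displayed)."""
    if ire_cases < 5:
        return None
    for threshold, stars in HYPOTHETICAL_CUTOFFS:
        if concurrence_rate > threshold:
            return stars
    return 1  # rates at or below every threshold get the lowest rating

print(assign_stars(96.0, 20))  # 5 (excellent: concurrence above 95%)
print(assign_stars(60.0, 3))   # None (fewer than five IRE cases)
```

Unlike the earlier percentile ranking, fixed cutoffs mean a sponsor's rating depends only on its own rate, not on how other sponsors performed that period.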
For example, whereas previously CMS assigned the top 90 percent of sponsors—those with the lowest rates of cases forwarded to the IRE because of missed time frames—the highest rating, the agency now assigns the highest rating to the top 15 percent of sponsors. Previously, CMS assigned 5 percent of sponsors the lowest rating, but now it assigns the lowest rating to 15 percent of the sponsors. The remaining sponsors are distributed more evenly across the two-, three-, and four-star designations. CMS continues to include among the top-performing sponsors those with no cases forwarded to the IRE due to missed time frames. In our examination of 2006 publicly reported performance data, we found that, among the 60 PDP sponsors receiving three stars for making timely decisions, 21 did not forward any cases to the IRE because of missed time frames. CMS’s oversight of sponsors’ coverage determination and appeals processes includes both monitoring and auditing. In monitoring the coverage determination processes, CMS reviews quarterly data reported by sponsors. The coverage determination measures selected for reporting capture information about the extent to which beneficiaries use the coverage determination process and the outcomes of that process. An agency official involved in selecting the measures to be reported noted that CMS sought to minimize the administrative burden on sponsors by selecting measures for which data were likely to be readily available. For 2006, the first year of the Part D program, CMS required sponsors to submit data on the following types of coverage determination cases: the number of requests and the number of approvals for formulary drugs requiring prior authorizations; the number of requests and the number of approvals for formulary exceptions, such as for nonformulary drugs; and the number of requests and the number of approvals for tiering exceptions. 
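Stepping back to the star designations discussed above, CMS's two approaches amount to two small rating rules: a fixed-benchmark lookup for the IRE concurrence metric and a percentile ranking for the timeliness metric. The Python sketch below is illustrative only: the five-star concurrence cutoff (better than 95 percent), the five-case display minimum, and the 15 percent top and bottom bands come from the report, while the lower concurrence benchmarks and the even three-way split of the middle timeliness band are assumptions.

```python
def concurrence_stars(rate, cases_reviewed):
    """Fixed-benchmark designation for the IRE concurrence metric.

    Only the five-star cutoff (better than 95 percent) and the
    five-case display minimum come from the report; the lower
    cutoffs are hypothetical placeholders."""
    if cases_reviewed < 5:
        return None  # too few IRE-reviewed cases; no stars displayed
    for cutoff, stars in [(0.95, 5), (0.85, 4), (0.75, 3), (0.65, 2)]:
        if rate > cutoff:
            return stars
    return 1


def timeliness_stars(forward_rates):
    """Relative (percentile) designation for the timeliness metric.

    Lower rates of cases forwarded to the IRE for missed time frames
    are better. The top 15 percent of sponsors get five stars and the
    bottom 15 percent one star; the report only says the rest are
    spread more evenly, so an even three-way split of the middle band
    into four, three, and two stars is assumed here."""
    n = len(forward_rates)
    order = sorted(range(n), key=lambda i: forward_rates[i])  # best first
    band = max(1, round(0.15 * n))
    middle = max(1, n - 2 * band)
    ratings = {}
    for rank, idx in enumerate(order):
        if rank < band:
            ratings[idx] = 5
        elif rank >= n - band:
            ratings[idx] = 1
        else:
            ratings[idx] = 4 - (rank - band) * 3 // middle  # 4, 3, then 2
    return ratings
```

Under these assumptions, a sponsor with a 96 percent concurrence rate and ten IRE-reviewed cases would receive five stars, while one with only three reviewed cases would display no stars at all.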
CMS used the submitted coverage determination data to calculate an overall request rate and an overall approval rate. In its analysis of the 2006 sponsor-reported data, CMS identified sponsors with relatively high overall rates of coverage requests and low overall rates of approvals. The agency wrote to these sponsors requesting that they confirm whether their submitted data were accurate and not the result of clerical errors. We found that our study sponsors submitted information differently to CMS because the agency provided limited guidance on the information to be included in each coverage determination measure. CMS defined the coverage determination measures sponsors are required to report too broadly, thus allowing each sponsor to use its existing data categorizations for each of the measures. After examining data reported for the third and fourth quarters of 2006, and following up with our study sponsors, we found substantial discrepancies in how sponsors reported these overall data for requests and approvals, as the following illustrate. While four of our seven sponsors said their measure of formulary drug requests requiring prior authorizations included requests for quantity limit exceptions, three sponsors included only a portion or none of these types of cases. For example, one sponsor told us that it omitted 6,032 requests for quantity limit exceptions in reporting the formulary drug request measure in the fourth quarter of 2006. These cases accounted for about 22 percent of the sponsor’s total coverage determination requests during that period. Another sponsor did not include 4,608 requests involving quantity limit exceptions in reporting the formulary drug request measure. These cases accounted for about 25 percent of all its coverage determination requests in the fourth quarter of 2006. Some, but not all, study sponsors included other types of cases in the requests and approvals for formulary drug measures. 
For example, three of our seven study sponsors included cases disputing coverage under Part B or Part D in their formulary drug measures, and four study sponsors included requests for drugs excluded from coverage under Part D. One of our seven study sponsors stated that, while it included all prior authorization requests in the formulary drug request measure, it included all requests for step therapy and quantity limits in the nonformulary drug request measure, based on a definition for nonformulary drugs in the Medicare Part D manual. In contrast, another sponsor in our study reported in the nonformulary drug category requests for drugs that it inadvertently did not include when designing its open formulary. We identified two sponsors that double counted the number of requested and approved tiering exceptions by reporting them in two different measures. For example, one of our study sponsors included 13,986 requests for tiering exceptions in its count of prior authorization requests for formulary drugs reported to CMS. The inclusion of these tiering exceptions in the number of requests for formulary drugs increased the requests for formulary drugs reported by about 43 percent. For the 2007 contract year, CMS made a number of modifications to its reporting requirements. CMS instructed sponsors to begin reporting data on the number of requests and approvals for quantity limit exceptions measures and renamed the other measures to better convey the types of coverage determinations to include in their reporting. CMS also instructed sponsors to exclude cases related to Part B versus Part D coverage from their data submissions. However, because CMS has yet to address categorization issues, such as whether the measures should be mutually exclusive, sponsors’ data reporting may remain inconsistent. Until data reliability issues are addressed, CMS may not be in a position to use these measures to oversee sponsors’ coverage determination process effectively. 
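CMS's screening of the 2006 sponsor-reported data, described above, can be thought of as a simple outlier check: compute each sponsor's overall request rate and approval rate, then follow up with sponsors whose request rates are relatively high and approval rates relatively low. The sketch below is a hedged illustration of that idea, not CMS's actual method: the percentile cutoffs and the per-enrollee definition of the request rate are assumptions.

```python
def flag_outlier_sponsors(data, request_pct=75, approval_pct=25):
    """Flag sponsors with relatively high overall request rates and
    relatively low overall approval rates, for data-accuracy follow-up.

    data maps sponsor name -> (requests, approvals, enrollment).
    The percentile cutoffs are illustrative assumptions."""
    rates = {}
    for sponsor, (requests, approvals, enrollment) in data.items():
        request_rate = requests / enrollment  # requests per enrollee (assumed)
        approval_rate = approvals / requests if requests else 0.0
        rates[sponsor] = (request_rate, approval_rate)

    def percentile(values, pct):
        # nearest-rank percentile over the observed values
        ordered = sorted(values)
        return ordered[int(round((pct / 100) * (len(ordered) - 1)))]

    high_req = percentile([r for r, _ in rates.values()], request_pct)
    low_appr = percentile([a for _, a in rates.values()], approval_pct)
    return [s for s, (r, a) in rates.items()
            if r >= high_req and a <= low_appr]
```

A sponsor flagged by such a check would, as the report describes, be asked to confirm whether its submitted figures were accurate or the result of clerical errors.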
In its 2007 compliance audits of five PDP sponsors, CMS found numerous violations of Part D standards. The agency used an audit protocol that examined 13 elements related to the coverage determination process and 13 elements of the appeals processes. CMS auditors reported that individual sponsors violated between 15 and 26 specific coverage determination and appeals process requirements. CMS has required sponsors to fix the violations by adopting corrective action plans. Areas of sponsor noncompliance ranged from incomplete written policies and procedures to delays in authorizing drug coverage after the IRE approved an expedited request. Auditors found that some sponsors did not notify beneficiaries of coverage decisions within the required time frames. Several sponsors were cited for not using CMS-approved decision notices; such notices must explain the reasons for denying requests or inform beneficiaries of their appeal rights. Other sponsors did not have policies to use physicians to review appeals of coverage requests denied for a lack of medical necessity. Table 3 shows those audit elements for which CMS found at least four of the five sponsors noncompliant. As of October 2, 2007, each of the five sponsors had submitted to CMS corrective action plans to remediate the identified deficiencies, which CMS was in the process of reviewing. A number of the audit findings indicate that the publicly reported performance metric on sponsor timeliness may not accurately reflect sponsors’ adherence to the requirement to automatically forward cases to the IRE. In reviewing case files, for example, CMS found that sponsors inconsistently forwarded standard coverage determination cases to the IRE when they did not meet the required CMS time frame, with one of the sponsors providing CMS with a written statement acknowledging that it had not forwarded any cases to the IRE for review during the audit period. 
Another two sponsors inappropriately allowed themselves more time to process certain coverage determination requests by starting their coverage determination review only after they received a supporting statement from the physician. In a separate initiative, CMS has worked with a selected group of sponsors to improve their performance on coverage determinations and appeals. Using a collaborative approach to performance improvement, CMS has conducted evaluations of two sponsors with comparatively high reversal rates at the IRE level of appeal to identify reasons why the IRE often did not agree with these sponsors’ prior coverage decisions. After examining a random sample of IRE case files for each sponsor in 2006, CMS identified several process-related issues that each sponsor could improve and provided feedback in the form of recommendations to each sponsor. For example, at one sponsor, CMS found that in about two-thirds of the reviewed cases, the sponsor should have done a better job of obtaining and assessing documentation of the evidence to support the request. The agency recommended that the sponsor revise certain forms in order to obtain all the information needed to make appropriate coverage determination decisions. CMS officials told us that both sponsors improved their performance by increasing the number of cases in which the IRE agreed with their decisions. As of September 2007, CMS was completing its evaluation of a third sponsor that did not receive a three-star designation for the performance metric based on the 2006 data. In the Part D program, beneficiaries’ access to prescription drugs is a function not only of whether a particular drug is on a plan’s formulary and whether it is subject to utilization management tools, but also of how plan sponsors make individualized coverage decisions when requested. The Medicare drug benefit allows sponsors to operate in a regulated but flexible environment. 
Thus, sponsors in our study follow similar procedural steps but apply discretion in making coverage determinations and appeal decisions. Administrative barriers in the appeals process can have implications for beneficiaries’ drug coverage. Efforts to implement the requirement that prescribing physicians be formally appointed beneficiary representatives with a signed AOR form in order to initiate standard appeals have been cited as an impediment to the appeals process. We found evidence that missing AOR forms have caused delays and some dismissals in cases being considered. A more streamlined approach that reduces AOR paperwork by quickly identifying those beneficiaries who wish to initiate an appeal could improve the process while maintaining physician involvement. While CMS has improved its efforts to inform beneficiaries about sponsors’ performance, its oversight efforts remain mixed. The agency has begun to hold sponsors accountable for maintaining compliance with coverage determination and appeals requirements. Agency auditors cited sponsors for widespread deficiencies and have required them to revise procedures to better serve beneficiaries. However, CMS lacks the data it needs to routinely monitor coverage determination and appeals requests and approvals across all sponsors. The agency has not taken steps necessary to ensure that sponsors report data consistently. To improve the Medicare Part D coverage determination and appeals processes, we recommend that the Administrator of CMS: reduce the need for completed AOR forms by requiring sponsors and the IRE, upon receipt of standard appeal requests submitted by prescribing physicians without completed AOR forms, to telephone beneficiaries to determine whether they wish to initiate the appeal, and ensure that sponsor-reported data used for monitoring coverage determination and appeals activities are accurate and consistent by providing specific data definitions for each measure. 
In written comments on a draft of this report, CMS remarked that our review presents a balanced evaluation of Part D coverage determination and appeals procedures and the associated data reporting procedures, and does an excellent job of highlighting various challenges in the Part D appeals process. (See app. II.) The agency reported that it is exploring the adoption of one of the report’s recommendations and is in the process of implementing the other. In addition to comments on each of our recommendations, CMS provided detailed, technical comments that we incorporated where appropriate. CMS stated that it intends to consider our recommendation that the need for a signed AOR form be reduced through a process where sponsors call beneficiaries when physicians request appeals on their patients’ behalf. However, it noted that it was not certain whether any change to the current policy could be implemented without modifying the statutory and regulatory provisions associated with the AOR requirement. The agency pointed out that physician representation of beneficiaries is limited by law because only a Medicare Part D eligible individual can bring an appeal at the IRE level. Therefore, CMS said that it is reviewing the current legal requirements about making appeal requests to determine whether changes are appropriate and necessary. CMS added that it intends to work with physician groups to ensure that physicians promptly submit any needed AOR forms. We are pleased that CMS is considering how it can implement our recommendation to address the difficulties regarding the AOR requirement. In making this recommendation, we considered relevant statutory and regulatory provisions and found no limitations that would preclude its adoption by CMS. Our recommendation would reduce the need for AOR forms by requiring that sponsors and the IRE determine at the outset whether beneficiaries want to initiate their appeals or have physicians do so on their behalf. 
If it is determined that the beneficiary is requesting the appeal, an AOR form would not be needed and the sponsor or IRE could immediately process the request. However, if sponsors or IRE find that beneficiaries want their physicians to initiate the appeal for them, then completed AOR forms would still be required. We have slightly reworded our recommendation, to clarify our intent and eliminate any ambiguity, and included the revised language in the final report. CMS agreed with our recommendation to ensure that sponsor-reported data are accurate and consistent by providing specific data definitions for the coverage determination and appeals measures. The agency noted that it has taken steps to modify the Part D Plan Reporting Requirements guidance on data element definitions. It plans to reinforce this guidance during upcoming calls with Part D sponsors, as well as in memoranda to sponsors, Frequently Asked Questions documents, and conference presentations. In addition, to minimize data entry errors, CMS has implemented data edit rules that will, among other things, reject a value that exceeds an expected range. It also developed procedures for sponsors to correct previously submitted information. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this report. We will then send copies to the Administrator of CMS, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. This report is also available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact Kathleen King at (202) 512-7114 or kingk@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made contributions to this report are listed in appendix III. 
In addition to the contact named above, Rosamond Katz, Assistant Director; Lori Achman; Todd Anderson; Hazel Bailey; Krister Friday; Lisa Rogers; and Jennifer Whitworth made major contributions to this report.
Under the Medicare Part D program, prescription drug coverage is provided through plans sponsored by private companies. Beneficiaries, their appointed representatives, or physicians can ask sponsors to cover prescriptions restricted under their plan—a process known as a coverage determination—and can appeal denials to the sponsor and the independent review entity (IRE). GAO was asked to review (1) the processes for sponsors' coverage determination decisions and the approval rates, (2) the processes for appealing coverage denials and the approval rates at the sponsor and IRE levels, and (3) the Centers for Medicare & Medicaid Services' (CMS) efforts to inform the public about sponsors' performance and oversee sponsors' processes. GAO visited seven sponsors that account for over half of Part D enrollment. GAO also interviewed and obtained data from CMS and IRE officials. Sponsors in our study address coverage requests for drugs with restrictions using processes that allow for prompt decisions, apply a range of criteria, and have resulted in approvals of most cases. To minimize the amount of time needed to make a determination, study sponsors use automated systems to compare the patient information they receive from prescribing physicians against preset coverage criteria. The coverage criteria for specific drugs incorporate Medicare requirements—such as whether the drug use is excluded from coverage under Medicare Part D—and discretionary components—such as whether a less expensive alternative drug has been tried and failed. Some study sponsors indicated they feel pressure to make decisions within the CMS-required time frames even when all pertinent patient information from physicians is not at hand. In reviewing a sample of 421 case files, GAO found that overall, study sponsors approved about 67 percent of the coverage determination requests, ranging from 57 percent to 76 percent. 
The process for conducting appeals allows staff not involved in the previous case review to make better-informed decisions by considering additional supporting evidence. At the first level of appeal, sponsor staff evaluate any corrected or augmented evidence to see if coverage criteria have been met. At the second level of appeal, IRE staff consider the information the sponsor reviewed, along with any additional support that may be available. In many cases, appeals result in new interpretations of whether the requested drug should be covered. CMS appeals data show that, from July 2006 through December 2006, the median approval rate across all Part D sponsors was 40 percent; from July 2006 through June 2007, appeals to the IRE received full or partial approval in 28 percent of cases. For some standard appeals, missing appointment of representative (AOR) documentation contributed to delays in sponsor-level appeals decisions and dismissals of IRE appeals cases. Some study sponsors have developed "workarounds" to eliminate the need for the completed AOR form. CMS has improved its efforts to inform beneficiaries about sponsors' performance, but its oversight of sponsors is hindered by poorly defined reporting requirements. CMS developed two performance metrics on sponsors' timeliness and the outcomes of their coverage decisions. The agency improved the way it displays this information on the Medicare Web site in late 2007. In addition, CMS requires that sponsors report data on various measures of coverage requests and approvals. However, the agency has provided minimal guidance on the types of cases to be included in each coverage determination measure. As a result, our study sponsors reported data differently to CMS, hindering the agency's ability to adequately monitor sponsors' activities. Finally, CMS has conducted several audits and found that sponsors were noncompliant with a number of specific requirements. 
Areas of sponsor noncompliance ranged from incomplete written policies and procedures to delays in authorizing drug coverage after the IRE approved an expedited request.
Federal awareness of the importance of securing our nation’s critical infrastructures, which underpin our society, economy, and national security, has been evolving since the mid-1990s. Over the years, a variety of working groups have been formed, special reports have been written, federal policies issued, and organizations created to address the issues that have been raised. In October 1997, the President’s Commission on Critical Infrastructure Protection issued its report, describing the potentially devastating implications of poor information security from a national perspective. The report recommended several measures to achieve a higher level of CIP, including infrastructure protection through industry cooperation and information sharing, a national organization structure, a revised program of research and development, a broad program of awareness and education, and reconsideration of laws related to infrastructure protection. The report stated that a comprehensive effort would need to “include a system of surveillance, assessment, early warning, and response mechanisms to mitigate the potential for cyberthreats.” The financial services sector was highlighted as one of several critical infrastructures that were vital to our nation’s economic security. In 1998, the President issued Presidential Decision Directive 63 (PDD 63), which established CIP as a national goal and described a strategy for cooperative efforts by government and the private sector to protect the physical and cyber-based systems essential to the minimum operations of the economy and the government. PDD 63 called for a range of actions intended to improve federal agencies’ security programs, improve the nation’s ability to detect and respond to serious computer-based and physical attacks, and establish a partnership between the government and the private sector. 
The directive called on the federal government to serve as a model of how infrastructure assurance is best achieved and designated lead agencies to work with private-sector and government organizations. To accomplish its goals, PDD 63 established and designated organizations to provide central coordination and support, including the Critical Infrastructure Assurance Office (CIAO), an interagency office housed in the Department of Commerce, which was established to develop a national plan for CIP on the basis of infrastructure plans developed by the private sector and federal agencies; the National Infrastructure Protection Center (NIPC), an organization within the FBI, which was expanded to address national-level threat assessment, warning, vulnerability, and law enforcement investigation and response; and the National Infrastructure Assurance Council, which was established to enhance the partnership of the public and private sectors in protecting our critical infrastructures. To ensure coverage of critical sectors, PDD 63 also identified eight private-sector infrastructures, including banking and finance, and five special functions. For each of the infrastructures and functions, the directive designated lead federal agencies, known as sector liaisons, to work with their counterparts in the private sector, known as sector coordinators. For example, Treasury is responsible for working with the financial services sector, and the Department of Energy is responsible for working with the electrical power industry. Similarly, regarding special function areas, the Department of Defense is responsible for national defense, and the Department of State is responsible for foreign affairs. PDD 63 called for a range of activities intended to establish a partnership between the public and private sectors to ensure the security of our nation’s critical infrastructures. 
The sector liaison and the sector coordinator were to work with each other to address problems related to CIP for their sector. In particular, PDD 63 stated that they were to (1) develop and implement a vulnerability awareness and education program and (2) contribute to a sectoral National Infrastructure Assurance Plan by assessing the vulnerabilities of the sector to cyber or physical attacks; recommending a plan to eliminate significant vulnerabilities; proposing a system for identifying and preventing major attacks; and developing a plan for alerting, containing, and rebuffing an attack in progress and then, in coordination with the Federal Emergency Management Agency as appropriate, rapidly reconstituting minimum essential capabilities in the aftermath of an attack. PDD 63 also stated that sector liaisons should identify and assess economic incentives to encourage the desired sector behavior in CIP. Further, to facilitate private-sector participation, it encouraged the voluntary creation of information sharing and analysis centers (ISACs) that could serve as mechanisms for gathering, analyzing, and appropriately sanitizing and disseminating information to and from infrastructure sectors and the federal government through NIPC. In response to PDD 63, a banking and finance sector coordinating committee on CIP, chaired by a sector coordinator, was initiated by the Secretary of the Treasury in October 1998. In addition, the Financial Services ISAC (FS-ISAC) was formed in 1999. In January 2000, the White House issued its National Plan for Information Systems Protection. The national plan provided a vision and a framework for the federal government to prevent, detect, respond to, and protect the nation’s critical cyber-based infrastructure from attack and reduce existing vulnerabilities by complementing and focusing existing federal computer security and information technology requirements. 
Subsequent versions of the plan were expected to (1) define the roles of industry and of state and local governments working in partnership with the federal government to protect physical and cyber-based infrastructures from deliberate attack and (2) examine the international aspects of CIP. In October 2001, the President signed Executive Order 13231, establishing the President’s Critical Infrastructure Protection Board to coordinate cyber-related federal efforts and programs associated with protecting our nation’s critical infrastructures. The Special Advisor to the President for Cyberspace Security chairs the board. Executive Order 13231 tasks the board with recommending policies and coordinating programs for protecting CIP-related information systems. The board was intended to coordinate with the Office of Homeland Security in activities related to protection and recovery from attacks against information systems for critical infrastructure, including emergency preparedness communications that were assigned to the Office of Homeland Security by Executive Order 13228, dated October 8, 2001. According to Executive Order 13231, the board recommends policies and coordinates programs for protecting information systems for critical infrastructures, including emergency preparedness communications and the physical assets that support such systems. The Special Advisor reports to the Assistant to the President for National Security Affairs and to the Assistant to the President for Homeland Security. In addition, the Special Advisor, as chair of the board, coordinates with the Assistant to the President for Economic Policy on issues related to private-sector systems and economic effects and with the Director of the Office of Management and Budget (OMB) on issues related to budgets and the security of federal computer systems. Executive Order 13231 reiterated the importance and voluntary nature of the Information Sharing and Analysis Centers (ISACs). 
Executive Order 13231 also established 10 standing committees to support the board’s work on a wide range of critical infrastructure efforts. The Financial and Banking Information Infrastructure Committee (FBIIC), one of the standing committees, is charged with coordinating federal and state financial regulatory efforts to improve the reliability and security of the U.S. financial system. Chaired by the Department of the Treasury’s Assistant Secretary for Financial Institutions, FBIIC includes representatives from federal and state financial regulatory agencies, including the Commodity Futures Trading Commission, the Conference of State Bank Supervisors, the Federal Deposit Insurance Corporation (FDIC), the Federal Housing Finance Board, the Federal Reserve Bank of New York, the Federal Reserve Board, the National Association of Insurance Commissioners (NAIC), the National Credit Union Administration (NCUA), the Office of the Comptroller of the Currency (OCC), the Office of Federal Housing Enterprise Oversight, the Office of Homeland Security, the Office of Cyberspace Security, the Office of Thrift Supervision (OTS), and the Securities and Exchange Commission (SEC). Consistent with PDD 63, industry representatives worked collaboratively on a Treasury-sponsored working group to develop the sector’s national strategy—Defending America’s Cyberspace: Banking and Finance Sector: The National Strategy for Critical Infrastructure Assurance, Version 1.0. Treasury’s Assistant Secretary for Financial Institutions submitted the industry’s strategy, in May 2002, to the Special Advisor to the President for Cyberspace Security, with the understanding that it would provide an evolving baseline for the sector’s efforts. 
In July 2002, the President issued the National Strategy for Homeland Security to “mobilize and organize our nation to secure the United States homeland from terrorist attacks.” According to the strategy, the primary objectives of homeland security, in order of priority, are to (1) prevent terrorist attacks within the United States, (2) reduce America’s vulnerability to terrorism, and (3) minimize the damage and recover from attacks that do occur. The strategy identifies two critical components of CIP—critical infrastructure and intelligence and warning—as two of six mission areas. The strategy further states that if terrorists attack one or more pieces of our critical infrastructure, they may disrupt entire systems and significantly damage the nation. In addition, the national strategy continues to identify banking and finance as a critical infrastructure sector, and it adds additional sectors, as shown in table 1. On September 18, 2002, the administration released a draft National Strategy to Secure Cyberspace. The draft was developed by the President’s Critical Infrastructure Protection Board on the basis of input from officials associated with key sectors of the economy that rely on cyberspace, state and local governments, colleges and universities, and others. The draft strategy contains 86 recommendations for home users and small businesses; large private-sector corporations; federal, state, and local governments; critical sectors; and colleges and universities—among others. The draft strategy supplements existing strategies, including the National Strategy for Homeland Security, and states that the strategies’ policy statements and recommendations are subject to Executive Order 13231 and other relevant executive orders related to national security. The draft strategy calls for the continued use of public/private partnerships established through the lead federal agencies and the private-sector coordinators and the ISACs. 
The draft strategy is consistent with the National Strategy for Homeland Security concerning lead agency responsibilities. On November 25, 2002, the President signed the Homeland Security Act of 2002, establishing the Department of Homeland Security. Regarding critical infrastructure protection, the new department is responsible for, among other things, (1) developing a comprehensive national plan for securing the key resources and critical infrastructure of the United States; (2) recommending measures to protect the key resources and critical infrastructure of the United States in coordination with other federal agencies and in cooperation with state and local government agencies and authorities, the private sector, and other entities; and (3) disseminating, as appropriate, information analyzed by the department—both within the department and to other federal agencies, state and local government agencies, and private sector entities—to assist in the deterrence, prevention, preemption of, or response to terrorist attacks. The act also transfers the functions, personnel, assets, and liabilities of NIPC (other than the Computer Investigations and Operations Section) and CIAO to the new department. According to statistics from the Federal Reserve Board, U.S. financial institutions held over $23.5 trillion in assets as of the second quarter of 2002—about a $2 trillion increase over first quarter 2001 statistics reported in the sector’s national strategy. Some of the largest categories of financial institutions are commercial banks ($5.3 trillion), insurance companies ($2.7 trillion), mutual funds ($2.7 trillion), government-sponsored enterprises ($2.2 trillion), and pension funds ($1.5 trillion). The remaining assets are distributed among finance and mortgage companies, securities brokers and dealers, and other financial institutions. 
The sector’s national strategy states that the composition of the financial services sector extends beyond these companies to include a network of essential specialized service organizations and service providers that support the sector in its efforts to provide a trusted services environment; these include securities and commodities exchanges, funds transfer networks, payment networks, clearing companies, trust and custody firms, and depositories and messaging systems. According to the national strategy, the financial services sector has also become more dependent on outsourcing certain activities—such as systems and applications, hardware and software, and technically skilled personnel—to third-party providers that are an indispensable part of the sector’s infrastructure. Several regulatory agencies oversee various aspects of the financial services industry. Table 2 provides an overview of the key industry segments and the regulatory bodies that oversee them. Five federal regulators—the Federal Reserve System (FRS), the Federal Deposit Insurance Corporation (FDIC), the Office of the Comptroller of the Currency (OCC), the Office of Thrift Supervision (OTS), and the National Credit Union Administration (NCUA)—supervise and examine all federally insured depository institutions. The regulators oversee a mix of large, medium, and small depository institutions, as shown in table 3. Banking regulators also work together through the Federal Financial Institutions Examination Council (FFIEC), an interagency forum that Congress created in 1979 to promote consistency in the examination and supervision of depository institutions. For example, the Information Technology Subcommittee of the FFIEC Task Force on Supervision supervises the largest 18 to 20 technology service providers, and the regulators’ regional offices supervise smaller technology service providers. 
The regulators also issue policies, procedures, rules, legal interpretations, and corporate decisions concerning banking, credit, bank investments, asset management, fair lending and consumer protection, community reinvestment activities, and other aspects of bank operations. Under Section 111 of the Federal Deposit Insurance Corporation Improvement Act of 1991, each federal banking regulator, with the exception of NCUA, is required to conduct a full-scope, on-site examination of federally insured depository institutions under its jurisdiction at least once during each 12-month period. The act allows for examinations to be extended to 18 months for small (less than $250 million in assets), well-capitalized, well-managed institutions that meet certain criteria. The primary objectives of such examinations of financial institutions, known as safety-and-soundness examinations, are to (1) provide an objective evaluation of the institution’s safety and soundness, determine its compliance with applicable laws, rules, and regulations, and ensure that it maintains capital commensurate with its risk; (2) appraise the quality and overall effectiveness of management and its risk management systems; and (3) identify, communicate, and follow up on recommendations in all areas of the examination, especially areas where corrective action is required to strengthen the bank’s performance and compliance with laws, rules, and regulations. The financial institution safety-and-soundness examination assesses six components of a financial institution’s performance—capital adequacy, asset quality, management, earnings, liquidity, and sensitivity to market risk. As part of these six components, examiners also consider the adequacy of the financial institution’s internal controls, internal and external audit, and compliance with law, in addition to evaluating management’s ability to identify and control risk. 
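The examination-cycle rule described above can be sketched as a small decision function. This is an illustrative simplification only: the function and parameter names are hypothetical, and the statute's "certain criteria" are reduced here to boolean flags rather than the full statutory and regulatory tests.

```python
# Sketch of the FDICIA Section 111 examination-cycle rule as described in the
# report: a full-scope, on-site examination at least once each 12-month period,
# extendable to 18 months for small (less than $250 million in assets),
# well-capitalized, well-managed institutions meeting certain other criteria.
# All names below are hypothetical, chosen only for this illustration.

SMALL_INSTITUTION_ASSET_CAP = 250_000_000  # $250 million threshold in the act


def exam_cycle_months(total_assets: int,
                      well_capitalized: bool,
                      well_managed: bool,
                      meets_other_criteria: bool = True) -> int:
    """Return the maximum examination interval in months (simplified)."""
    if (total_assets < SMALL_INSTITUTION_ASSET_CAP
            and well_capitalized and well_managed and meets_other_criteria):
        return 18  # extended cycle for qualifying small institutions
    return 12      # standard annual cycle


# A $100 million, well-run institution may qualify for the 18-month cycle,
# while a $1 billion institution stays on the standard 12-month cycle.
print(exam_cycle_months(100_000_000, True, True))    # 18
print(exam_cycle_months(1_000_000_000, True, True))  # 12
```

Note that every condition must hold for the extended cycle: an institution under the asset cap that is not well capitalized, or not well managed, remains on the 12-month cycle.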
Additionally, examiners evaluate the financial institution’s use of information technology and third-party service providers, including information technology-related servicers. To assist examiners in assessing information technology risks and planning their examinations, FFIEC developed the Uniform Rating System for Information Technology (URSIT) to provide rating definitions for the information technology examinations of financial institutions and their technology service providers. The URSIT composite rating is considered in the overall management component of the examination. According to FFIEC, the purpose of the rating is to provide a consistent means of evaluating the condition or performance of information technology functions and to provide a mechanism for monitoring those entities whose condition or performance requires special supervisory attention. Using URSIT, examiners consider the adequacy of the financial institution’s information technology risk management practices; management of information technology resources; and integrity, confidentiality, and availability of automated information. The evaluation of these components can include, but is not limited to, business continuity, information security, network services, change control management, systems development life cycle, audit, internal controls, architecture, vendor management, and board oversight. SEC’s primary mission is to protect investors, maintain the integrity of the securities markets, and oversee the activities of a variety of key market participants. In 2001, SEC was responsible for overseeing 9 exchanges; the over-the-counter market; approximately 70 alternative trading systems, including electronic communication networks; 12 registered clearing agencies; about 8,000 registered broker-dealers employing almost 700,000 registered representatives; almost 900 transfer agents; over 900 investment company complexes; and 7,400 registered investment advisers. 
In addition, about 14,000 companies that have issued securities have filed annual reports with SEC. SEC’s oversight includes rulemaking, surveilling the markets, interpreting laws and regulations, reviewing corporate filings, processing applications, conducting inspections and examinations, and determining compliance with federal securities laws. It is also responsible for regulating public utility holding companies. Staff within SEC’s Market Regulation Division are responsible for examinations of exchanges, clearing organizations, and electronic communication networks. Staff from its Office of Compliance Inspections and Examinations are responsible for examinations of broker-dealers and investment companies. SEC does not directly regulate entities that provide information technology services to firms under its jurisdiction. Broker-dealers and exchanges also operate under rules set by the securities industry’s self-regulatory organizations, including the National Association of Securities Dealers and the New York Stock Exchange. In addition, NAIC assists state insurance regulators in their efforts to protect the interests of insurance consumers. NAIC, which comprises insurance regulators from all 50 states, the District of Columbia, and the four U.S. territories, helps facilitate the regulation of financial and market conduct at the state level. Increased access to systems created by widespread computer interconnectivity poses significant risks to our nation’s computer systems and, more importantly, to the critical operations and infrastructures they support. The speed and accessibility that create the enormous benefits of the computer age, if not properly controlled, likewise allow individuals and organizations to inexpensively eavesdrop on or interfere with these operations from remote locations for mischievous or malicious purposes, including fraud or sabotage. Table 4 summarizes the key threats to our nation’s infrastructures, as observed by the FBI. 
Government officials are increasingly concerned about attacks from individuals and groups with malicious intent, whether for crime, terrorism, foreign intelligence gathering, or acts of war. According to the FBI, terrorists, transnational criminals, and intelligence services are quickly becoming aware of and are using information exploitation tools such as computer viruses, Trojan horses, worms, logic bombs, and eavesdropping sniffers that can destroy, intercept, degrade the integrity of, or deny access to data. In addition, the disgruntled organization insider is a significant threat, since these individuals often have knowledge that allows them to gain unrestricted access and inflict damage or steal assets without possessing a great deal of knowledge about computer intrusions. The number of computer security incidents reported to the CERT® Coordination Center (CERT®CC) rose from 9,859 in 1999, to 52,658 in 2001, and to 82,094 in 2002. And these are only the reported attacks. The Director, CERT® Centers, stated that as much as 80 percent of actual security incidents go unreported, in most cases because the organization (1) was unable to recognize that its systems had been penetrated because there were no indications of penetration or attack or (2) was reluctant to report incidents. Figure 1 shows the number of incidents reported to the CERT®CC from 1995 through 2002. According to the National Strategy for Homeland Security, terrorist groups are already exploiting new information technology and the Internet to plan attacks, raise funds, spread propaganda, collect information, and communicate securely. The administration’s draft National Strategy to Secure Cyberspace states that cyber incidents are increasing in number, sophistication, severity, and cost. It further adds that cyber attacks on U.S. 
information networks occur regularly and can have serious consequences, such as disrupting critical operations, causing loss of revenue and intellectual property, and even causing loss of life. Since the September 11, 2001, terrorist attacks, warnings of the potential for terrorist cyber attacks against our critical infrastructures have increased. For example, last year the Special Advisor to the President for Cyberspace Security stated in a Senate briefing that although to date none of the traditional terrorist groups, such as al Qaeda, have used the Internet to launch a known attack on the U.S. infrastructure, information on computerized water systems was recently discovered on computers found in al Qaeda camps in Afghanistan. Further, in his October 2001 congressional testimony, Governor James Gilmore warned that systems and services critical to the American economy and the health of our citizens—such as financial services, “just-in-time” delivery systems for goods, hospitals, and state and local emergency services—could all be shut down or severely handicapped by a cyber attack or a physical attack against computer hardware. Not only is cyber protection of our critical infrastructures important in and of itself, but a physical attack in conjunction with a cyber attack has recently been highlighted as a major concern. In fact, NIPC has stated that the potential for compound cyber and physical attacks, referred to as “swarming attacks,” is an emerging threat to the U.S. critical infrastructure. As NIPC reports, the effects of a swarming attack include slowing or complicating the response to a physical attack. For example, cyber attacks can be used to delay the notification of emergency services and to deny the resources needed to manage the consequences of a physical attack. In addition, a swarming attack could be used to worsen the effects of a physical attack. 
For example, a cyber attack on a natural gas distribution pipeline that opens safety valves and releases fuels or gas in the area of a planned physical attack could enhance the force of the physical attack. The financial services sector faces cyber threats similar to those faced by other critical infrastructure sectors, but the potential for monetary gains and economic disruptions may increase its attractiveness as a target. Financial services institutions have experienced cyber incidents that have had some impact on their operations, which demonstrates a continuing threat to the industry. Also, the financial services sector is highly dependent on other critical infrastructures. For example, threats facing the telecommunications and power sectors could directly affect the financial services industry. However, after the September 11, 2001, terrorist attacks, the financial markets were able to recover within days, despite significant damage to the World Trade Center area, where a large concentration of financial entities is located. According to government and private-sector officials, the financial services sector faces cyber threats similar to those faced by other critical infrastructure sectors. As discussed in the previous section of this report, such threats include attacks from individuals and groups with malicious intent, whether for crime, terrorism, or foreign intelligence gathering. Because the sector holds over $23.5 trillion in assets, the potential monetary gains and economic disruptions that could occur if its systems were successfully attacked may increase the probability of its becoming a target. For example, a successful widespread cyber attack could erode public confidence in financial institutions, deny businesses and individuals access to their funds, result in the loss of funds, affect the integrity of financial information, or inhibit securities trading. 
At the same time, sector representatives believe that financial institutions recognize and work to mitigate the threat in order to adhere to federal and state regulations and maintain public confidence in their ability to protect and manage customer assets. The 1997 report of the President’s Commission on Critical Infrastructure Protection recognized that, at the institutional level, the increasing use of electronic banking mechanisms, and perhaps an entirely new infrastructure to accommodate the demand for rapid data recall and payment processing, would create new forms of risk to information systems. Further, regarding the financial services sector, the commission’s report identified cyber threats to the financial services industry and the corresponding need to improve (1) information sharing among regulators, law enforcement officials, and industry associations; (2) contingency planning through sponsoring strategic simulations and determining the need for additional back-up facilities; (3) examination processes, audit practices, internal controls, and physical security measures to accommodate new kinds of risks and to help deter the insider threat; and (4) information security education and awareness programs within academia and in the general public. The Banking and Finance Sector: National Strategy for Critical Infrastructure Assurance, issued on May 13, 2002, acknowledged that the sector would continue to face physical and cyber threats domestically and internationally. In addition, it stated that cyber threats and vulnerabilities are among the biggest challenges facing the sector, that cyber vulnerabilities and crimes have increased exponentially since the start of the new century, and that this trend will increase in proportion to the reliance placed on technology. 
Officials from the federal government’s NIPC similarly stated that the number of cyber threats faced by the financial services sector has increased. Regarding physical threats, NIPC released an information bulletin in April 2002 warning against possible physical attacks on U.S. financial institutions by unspecified terrorists. The financial services sector’s strategy also acknowledged the insider threat, stating that as financial institutions eliminate redundant operations and reduce personnel costs, the reductions can lead to vengeful acts by departing employees, as well as by dissatisfied employees among the remaining staff. The financial services sector has been affected by the successful exploitation of cyber vulnerabilities. For example, the 2002 report of the Computer Crime and Security Survey, conducted by the Computer Security Institute and the FBI’s San Francisco Computer Intrusion Squad, showed that 90 percent of respondents (primarily large corporations and government agencies, including 19 percent from the financial services sector) had detected computer security breaches within the last 12 months. In addition, 80 percent of respondents acknowledged financial losses due to computer breaches. Respondents willing or able to quantify their financial losses reported losses of over $450 million in total, including over $170 million from the loss of proprietary information and over $115 million from financial fraud. A report on Internet security threats by a private-sector managed security firm for the period of January 1, 2002, to June 30, 2002, concluded that companies in the financial services industry, along with energy and high-tech companies, experience the highest rate of attack activity, based on their clients’ experience. According to the study, financial service firms received an average of 1,018 attacks per company, and 46 percent of these firms had at least one severe attack during the period studied. 
Across all industries, the average number of attacks per company was about 788. The following examples of financial services-related incidents have been publicly reported. According to media reports, in 1994, a Russian hacker broke into Citibank’s system, stealing $10 million. The company recovered all but $400,000 of that loss, and the case resulted in a felony conviction of the primary hacker. In 2000, two men from Kazakhstan were arrested in London for breaking into Bloomberg L.P.’s computer systems in New York in an attempt to extort $200,000 from the firm, according to NIPC and media reports. Since April 1996, depository institutions have reported to their regulators, through the Suspicious Activity Report System (SARS), any suspicious transactions involving $5,000 or more. The requirement to report computer intrusions through this system started in June 2000. As of May 31, 2002, there had been 656 such filings. For the 6-month period covered by the Internet security threat report discussed above, Riptech, the managed security firm that prepared it, analyzed firewall logs and intrusion detection system alerts based on information from a sample of its client organizations. From these initial data, more than 1 million possible attacks were isolated and more than 180,000 confirmed. The financial services industry and the federal government have raised concerns about the financial services sector’s interdependency with other critical infrastructures, including telecommunications and energy, and the potential negative impact that attacks in those sectors could have on its ability to operate. Understanding the many interdependencies between sectors is critical to successfully protecting all of our nation’s critical infrastructures. According to a January 2001 report by the CIP Research and Development Interagency Working Group, the effect of interdependencies is that a disruption in one infrastructure can spread and appreciably affect other infrastructures. 
The report also stated that understanding interdependencies is important because the proliferation of information technology has made the infrastructures more interconnected. In congressional testimony in July 2002, the director of Sandia National Laboratories’ Infrastructure and Information Systems Center stated that these interdependencies make it difficult to identify critical nodes, vulnerabilities, and optimal mitigation strategies. According to the financial services sector’s national strategy, the industry must take into account the effect of damage from disruptions in other critical sectors, such as telecommunications, electrical power, and transportation. The attacks of September 11, 2001, demonstrated the dependence of the financial services industry on the stability of other sectors’ infrastructures. For example, the industry suffered the impact of disrupted communications for its broker-dealers, clearing banks, and other core institutions. The draft National Strategy to Secure Cyberspace also discusses the risks posed by interdependent sectors. It states that unsecured sectors of the economy can be used to attack other sectors and that disruptions in one sector have cascading effects that can disrupt multiple parts of the nation’s critical infrastructure. Potential vulnerabilities of the telecommunications and energy sectors, two sectors relied upon by the financial services sector, are highlighted next. In February 2002, the National Security Telecommunications Advisory Committee and the National Communications System released a report, An Assessment of the Risk to the Security of the Public Network, about the vulnerabilities of the telecommunications sector. 
This report concluded that (1) the vulnerability of the public network to electronic intrusion has increased, (2) government and industry organizations have worked diligently to improve protection measures, (3) the threat to the public network continues to grow as it becomes a more valuable target and the intruder community develops more sophisticated capabilities to launch attacks against it, and (4) continuing trends in law enforcement and legislation have increased the ability of the government and the private sector to deter the threat of intrusion. The report also stated that the implementation of next-generation network technologies, including wireless technology, and their convergence with traditional networks have introduced even more vulnerabilities into the public network. Energy sector vulnerabilities have also been identified. For example, in October 1997, the President’s Commission on CIP reported on the physical vulnerabilities for electric power related to substations, generation facilities, and transmission lines. It added that the widespread and increasing use of supervisory control and data acquisition (SCADA) systems for controlling energy systems increases the capability of seriously damaging and disrupting them by cyber means. In addition, the previously discussed Internet security threat report also concluded that companies in the energy industry, along with financial services and high-tech companies, experience the highest rate of overall attack activity. According to the study, power and energy firms received an average of 1,280 attacks per company, and 70 percent of them had at least one severe attack during the period studied. Financial services industry groups have taken several steps to address cyber threats and improve information sharing, and they plan to take continuing action to further address these issues. 
First, industry representatives collaboratively developed a sector strategy—National Strategy for Critical Infrastructure Assurance—that discusses additional efforts necessary to identify, assess, and respond to sectorwide threats. However, the financial services sector has not specified how these efforts will be implemented: it has not provided interim objectives, detailed tasks, timeframes, responsibilities, or processes for measuring progress. Second, FS-ISAC was formed in October 1999 to, among other objectives, facilitate sharing of information and provide its members with early notification of computer vulnerabilities and attacks. Third, several other industry groups representing the various segments of the financial services sector are taking steps to better coordinate industry efforts and to improve information security across the sector. Industry representatives worked collaboratively on a Treasury-sponsored working group to develop the sector’s National Strategy for Critical Infrastructure Assurance, which identifies a framework for sector actions, including efforts necessary to identify, assess, and respond to sectorwide threats, such as completing a sectorwide vulnerability assessment. In May 2002, Treasury’s Assistant Secretary for Financial Institutions submitted the industry’s strategy to the Special Advisor to the President for Cyberspace Security, with the understanding that it would provide an evolving baseline for the sector’s efforts. 
The strategy presents a framework for planning and implementing sector action that includes analyzing the infrastructure’s strengths, interdependencies, vulnerabilities, and abilities to resolve virtual and physical issues and concerns; taking steps to strengthen the sector’s capacity to prepare for, defend against, and recover financially and technologically from systemic attacks; building and implementing strategies for detecting and responding to attacks on the information infrastructure of the financial services sector; having the ability to recover and restore technological and financial services and functions to their normal state of operation; and having the ability to financially withstand the impact of attacks. Generally, the strategy discusses the activities called for in PDD 63, as described earlier in this report, including assessing the vulnerabilities of the sector to cyber or physical attack, recommending a plan to eliminate vulnerabilities, proposing a system for identifying and preventing major attacks, and developing a plan for alerting, containing, and rebuffing an attack in progress and then rapidly reconstituting essential operations. In addition, the strategy is generally consistent with the recommendations in the President’s Commission report, as discussed earlier in this report, including addressing (1) a mechanism for information sharing about threats and vulnerabilities; (2) efforts to improve the industry’s business continuity planning and ability to recover from disasters, including the need for back-up locations; and (3) actions taken to educate industry executives and information security specialists. 
In response to PDD 63’s call for a sectorwide vulnerability assessment, the sector’s national strategy identifies a number of options for completing an assessment, including (1) with the support of the Department of the Treasury, initiating an effort to identify and assess existing areas of exposure and interdependencies that would pose systemic risk to the banking and finance sector; (2) performing semiannual reviews of the infrastructure for newly identified weaknesses or risks based on technology changes; and (3) evaluating the feasibility of developing and maintaining an industrywide model and simulation process for assessing and addressing the systemic effects of threats to the core infrastructure. The strategy also states that critical components of the infrastructure must be subject to frequent, rigorous review and assessment of their posture and practices and suggests various approaches to achieve this goal, such as (1) periodic self-assessments; (2) external assessments and audits of core institutions and/or processes by trusted third parties; (3) formal analysis and assessments of industrywide transaction flows, processes, and procedures in critical areas of service provision; and (4) cross-industry interdependency assessments. 
Also, the national strategy for the financial services sector recommends a number of other actions, including designing and implementing modeling efforts—business, mathematical, and others—to be used to assess and understand the impact of systemic security issues on the financial services sector; developing an awareness campaign for education and outreach to members of the sector, key stakeholders, and boards of directors; encouraging the role of insurance and other risk-management techniques to mitigate the impact of a cyber-attack; working with government to design and implement a shared coordinated management process for detecting and responding to systemic threats against the infrastructure; and exploring funding options to support the sector activities listed above. According to the strategy, achieving success within this framework will require resources from the entire financial services sector, which must be able to detect, respond to, and recover from cyber and physical infrastructure incidents in a coordinated manner. The strategy goes on to state that this requires a concerted, collaborative effort, not only on the part of the traditional members of the financial services sector and the insurance industry, but also on the part of the sector’s vendors, service providers, regulators, and legislators. Moreover, according to the strategy, the financial services sector recognizes that it is not within the capacity of any one individual institution or sector to adequately manage an isolated and independent response to current and future threats. Although the sector strategy establishes a framework to address CIP efforts, the financial services sector has not developed specific interim objectives; detailed tasks, timeframes, or responsibilities for implementation; or a process for monitoring progress. Without such information, there is an increased risk that the sector’s efforts will be unfocused, inefficient, and ineffective. 
For example, without clearly defined interim objectives and a process for monitoring progress, the success of efforts to complete the sector’s actions cannot be measured. Also, establishing detailed tasks and clarifying responsibilities can ensure a common understanding of how the strategy will be implemented, how the actions of organizations are interrelated, who should be held accountable for their success or failure, and whether they will effectively and efficiently support sector goals. The current sector coordinator stated that the recently formed FSSCC plans to review and update the financial services strategy, including consideration of the National Strategy for Homeland Security and the draft National Strategy to Secure Cyberspace, which were issued subsequent to the financial services sector’s strategy. In addition, FSSCC plans to determine what actions the sector needs to take to implement the strategy, including specific interim objectives; detailed tasks, timeframes, and responsibilities for implementation; and a process for monitoring progress. Further, the financial services sector’s strategy does not discuss the coordination of efforts among the private sector, Treasury as sector liaison, and other federal agencies in assessing sector vulnerabilities. According to Treasury officials, the FBIIC vulnerability assessment working group has identified critical entities in the U.S. wholesale financial system and examined the currency production and distribution process. In addition, there are ongoing FBIIC activities to examine other parts of the financial services industry, including the stock and bond markets, commodity futures trading markets, and retail payment systems. 
Further, FRS, OCC, and SEC (with the participation of the Federal Reserve Bank of New York and the New York State Banking Department) issued a draft white paper on August 30, 2002, that identified certain critical financial markets and proposed sound practices for strengthening the resilience of those markets. However, the strategy does not discuss how these efforts to assess sector vulnerabilities are to be coordinated. In response to PDD 63, the Financial Services ISAC (FS-ISAC) was formed in 1999. A private sector initiative by the banking and finance industry, FS-ISAC is currently composed of 61 members who maintain over 90 percent of the assets controlled by the industry, according to FS-ISAC. The mission of FS-ISAC is to use information sharing and analysis to provide its members with a comprehensive set of knowledge resources. These resources include early notification of computer vulnerabilities and attacks and access to subject-matter expertise and other relevant information, such as trending analysis for all levels of management and for first responders to cyber incidents. FS-ISAC is a permanently staffed watch center that operates 24 hours a day, 7 days a week. It monitors cyber-related events around the world and acts as a clearinghouse for information that it distributes to its members. According to the current chairperson, FS-ISAC also works with other organizations that have similar missions, including NIPC; the U.S. Secret Service (extensively with the New York Electronic Crimes Task Force); and the Department of Defense’s Joint Task Force for Computer Network Operations. According to its former chairman, FS-ISAC demonstrated its effectiveness as an information dissemination vehicle in the way it handled the ILOVEYOU virus. 
In May 2000, we highlighted in testimony this example, in which FS-ISAC provided early notification to the industry when it collected reports on the spread of the ILOVEYOU virus and posted an alert to its members several hours before NIPC became aware of the threat. Since that time, according to its former chairman, FS-ISAC has been in the forefront of response to incidents such as Code Red and NIMDA, using its communication capabilities to provide early warning to its members as both viruses began to propagate through the Internet. According to FS-ISAC's current chairperson, the financial services sector faces a number of challenges regarding the success of FS-ISAC, including how to share more information with the federal government and increase industry participation. Recognizing the need to share information across sectors, the national strategy for the financial services sector states that FS-ISAC should define requirements and processes for exchanging information across sectors. In order to increase the sector's participation, the sector coordinator also has discussed the importance of enhancing FS-ISAC's value to the sector and expanding its membership to include a greater proportion of the sector's members. In April 2001, we reported that although FS-ISAC received information from NIPC, it had not provided information in return because of reporting incompatibilities and concerns about confidentiality. The sector's national strategy discusses legal impediments to information sharing and public-private partnerships and offers possible solutions, including certain exemptions related to the Freedom of Information Act (FOIA), antitrust, and liability. The Homeland Security Act of 2002, signed by the President on November 25, 2002, includes provisions that restrict federal, state, and local government use and disclosure of critical infrastructure information that has been voluntarily submitted to the Department of Homeland Security. 
These restrictions include exemption from disclosure under FOIA, a general limitation on use to critical infrastructure protection purposes, and limitations on use in civil actions and by state or local governments. The act also provides penalties for any federal employee who improperly discloses any protected critical infrastructure information. At this time, it is too early to tell what impact the new law will have on the willingness of the private sector to share critical infrastructure information. Further, by June 2002, FS-ISAC and NIPC had signed a memorandum of understanding that established a formal agreement for sharing security-related information. This memorandum of understanding encourages information sharing between the two organizations and is designed to facilitate the flow of information between the private sector and the government. The former chairman of FS-ISAC stated that the agreement will enable "a two-way trusted exchange of information in order to analyze and disseminate actionable intelligence on threats, attacks, vulnerabilities, anomalies, and security best practices involving the banking and finance sector." According to NIPC's director, "the information sharing agreement with the FS-ISAC should significantly advance our mutual commitment to our economic security." At the present time, FS-ISAC and NIPC conduct bi-weekly threat briefings, according to NIPC officials. The current FS-ISAC chairperson stated that FS-ISAC anticipates signing additional memorandums of understanding with various agencies of the government. The national strategy for the financial services sector calls for FS-ISAC to work with other associations in developing options to significantly increase participation in information exchange. In response, FS-ISAC is currently developing a "next-generation" model in which it would offer certain information dissemination services to the entire sector. 
According to the FS-ISAC chairperson, FS-ISAC is exploring various funding methods for this service, including funding by financial services industry groups or the federal government. In addition, FS-ISAC would offer other expanded services, including best practice development, log correlation and analysis, and threat modeling. A number of financial services industry groups, including the Financial Services Sector Coordinating Council (FSSCC) and the American Bankers Association (ABA), have taken steps to address cyber threats. These steps are discussed in general in the financial services sector's strategy and include developing product certification programs, disaster recovery programs, and a national strategy for the sector. FSSCC, organized and chaired by the sector coordinator, held its inaugural meeting on June 19, 2002. Its mission is "to foster and facilitate the coordination of sectorwide voluntary activities and initiatives designed to improve CIP/Homeland Security." To encourage active participation and commitment on the part of member organizations, FSSCC has been created as a limited liability corporation. 
As part of its efforts, FSSCC established the following objectives:
- provide broad industry representation for CIP and Homeland Security (HLS) and related matters for the financial services sector and for voluntary sectorwide partnership efforts;
- foster and promote coordination and cooperation among the participating sector's constituencies on CIP/HLS-related activities and initiatives;
- identify voluntary efforts where improvements in coordination can foster sector preparedness for CIP/HLS;
- establish and promote broad voluntary activities and initiatives within the sector that improve CIP/HLS;
- identify barriers to and recommend initiatives to improve sectorwide voluntary CIP/HLS information, knowledge sharing, and the timeliness of dissemination processes for critical information sharing among all the sector's constituencies; and
- improve sector awareness of CIP/HLS issues, available information, sector activities/initiatives, and opportunities for improved coordination.
One of the council's main initiatives is to share information on CIP activities already being performed by member associations across the entire sector. According to the sector coordinator, FSSCC is targeting relevant trade associations to broaden its membership so that it can reach a greater proportion of the sector's members. It will disseminate information about ongoing CIP activities to this target audience through council members. Furthermore, FSSCC is developing subcommittees and task groups to perform its work. Some of the initial strategic focus areas being considered are
- information dissemination and information sharing,
- crisis management and response management coordination,
- sector outreach and cross-sector outreach, and
- knowledge sharing (e.g., best practices).
According to FSSCC officials, the council has begun working with other private sector entities and with Treasury to coordinate CIP efforts within the sector. 
In addition, according to the sector coordinator, the establishment of FBIIC provides a strong tool for coordination between the public and private sectors and a forum for financial institution regulators to present a consistent message to the private sector. The ABA—an industry group whose membership includes community, savings, regional, and money center banks; savings associations; trust companies; and diversified financial holding companies—has an ongoing program for informing its membership of cyber security issues and providing cyber security resources. For example, as a member of FSSCC, ABA is chairing a working group that is responsible for education and outreach initiatives. According to an ABA official, this initiative is designed to inform financial services institutions of existing organizations, including FS-ISAC, which can be used as resources for information regarding physical as well as cyber threats and vulnerabilities. A second aspect of the initiative is to garner feedback from institutions in the financial services sector as to how the process of sharing such information should evolve in terms of organization, services, and cost. Also in response to cyber security-related issues, ABA created the Safeguarding Customer Information Toolbox and made it available in October 2002 to assist ABA members in evaluating their information security and complying with Section 501(b) of the Gramm-Leach-Bliley Act of 1999. In addition, ABA holds interactive webcasts and conferences, distributes a bi-weekly electronic newsletter, the ABA eAlert, and provides a variety of resources related to information security through its Web site, at www.aba.com. BITS is The Technology Group for The Financial Services Roundtable. 
As part of its mandate, BITS strives to sustain consumer confidence and trust by ensuring the safety and security of financial transactions, and it has several initiatives under way to promote improved information security within the financial services industry. BITS's and The Roundtable's membership represents 100 of the largest integrated financial services institutions providing banking, insurance, and investment products and services to American consumers and corporate customers. According to BITS officials, BITS serves as the strategic expert and action-oriented entity for its member companies where commerce, financial services, and technology intersect. According to BITS officials, BITS is not a lobbying group for the financial services industry. BITS officials stated that the group generally undertakes initiatives for the specific benefit of its member companies, but its efforts often affect the entire financial services industry through its members and through "affiliate" memberships that include other financial services industry groups such as ABA, the Independent Community Bankers of America, and the Credit Union National Association. In addition, most of BITS's work, including best practices, voluntary guidelines, and business requirements, is made public on its Web site at www.bitsinfo.org and shared across the industry. BITS is also an active member of FSSCC, according to BITS officials. In addition to its work with other financial services industry groups, BITS works with various government agencies, including the President's Critical Infrastructure Protection Board, Office of Cyberspace Security, Office of Homeland Security, CIAO, NIPC, and FBIIC, to promote improved information security, best practices for business continuity, and management of relationships with third-party service providers. BITS has a number of working groups on different topics—all of which have a security component. 
According to BITS, its working groups are made up of experts on the topics from the financial services industry and other participants as appropriate. Each working group has its own set of deliverables, which include self-regulatory requirements, guidelines and self-assessments, and timelines. To set direction and oversee all of BITS's security-related activities, BITS established a Security and Risk Assessment (SRA) Steering Committee made up of the heads of information security of member organizations. BITS officials' stated priorities include:
- defining and establishing metrics to measure operational risk, working in close coordination with FSSCC, FFIEC, and other regulatory agencies;
- providing security briefings/alerts and government outreach, including regularly sending out alerts to members, establishing an automated alert system for national emergencies, and reaching out to government representatives and other sector and business groups;
- providing, through the BITS Product Certification Program (designed to test products against baseline security criteria), a vehicle to significantly enhance safety and soundness by improving the security of technology products and reducing technology risk;
- issuing the BITS Framework for Managing Technology Risk for Information Technology (IT) Service Provider Relationships (Framework), which includes industry practices and regulatory requirements;
- establishing, with the Roundtable, a crisis management coordination initiative with the overarching objective of improving BITS's member companies' ability to prepare for and recover from significant industrywide disasters; and
- issuing a draft background paper, Telecommunications for Critical Financial Services: Risks and Recommendations.
The Securities Industry Association (SIA) also has taken steps to address cyber threats. SIA has more than 600 member securities firms, including investment banks, broker-dealers, and mutual fund companies. 
According to the sector's national strategy, SIA has a major business continuity planning effort under way to coordinate and develop industry plans for disaster and recovery. According to SIA officials, information about SIA's business continuity planning activities can be found at http://www.sia.com/business_continuity/. SIA has also established a virtual command center, which is to be activated when a significant disaster occurs. Before, during, and after such an event, SIA plans for the command center to be the central point for communicating the status of the disaster and coordinating industry-related response activities for the securities industry. It also intends the command center to act as a liaison between city, state, and federal bodies. In addition, according to SIA, it holds awareness conferences for its member firms and works closely with industry infrastructure organizations, such as exchanges and depositories, and with other industries that its members rely on, such as telecommunications, power utilities, and municipal and state services. SIA is also an active member of FSSCC, through which it shares information with other financial trade associations and with regulators through FBIIC. Sector representatives also identified other industry groups with initiatives related to critical infrastructure protection and information security in the financial services sector, including the following. The Financial Services Technology Consortium has had efforts under way since late 2001 involving critical business continuity and disaster recovery. For example, in October 2002, the Consortium initiated with its member financial institutions the development of a shared industry database and clearinghouse to match institutions with available disaster recovery space with those searching for space in a region different from their own. According to a Consortium official, the database will be available in the second quarter of 2003. 
The official also stated that the Consortium’s goal is to reduce the time and cost required for financial institutions to find, acquire, and roll out qualified disaster recovery space and added that as a second phase the Consortium will initiate efforts to standardize disaster recovery space and related technologies across the industry. According to a Consortium official, more information is available on its Web site at www.fstc.org. The Accredited Standards Committee X9, Inc., develops specific standards related to data and information security for the financial services sector, including standards related to personal identification number management and security, data encryption use by the financial services industry, application of biometrics in banking, wireless financial transaction security, and privacy assessments. According to X9 officials, more information can be found on its Web site at www.x9.org. Several federal entities play critical roles in partnering with the financial services sector to protect its critical infrastructures. Under PDD 63, Treasury is designated the lead agency for the financial services sector and is responsible for coordinating the public/private partnership between this sector and the federal government. Treasury also chairs the Financial and Banking Information Infrastructure Committee of the President’s Critical Infrastructure Protection Board. The committee is responsible for coordinating federal and state financial regulatory efforts to improve the reliability and security of U.S. financial systems. In both of its roles, Treasury has taken steps designed to establish better relationships and methods of communication between regulators, assess vulnerabilities (as discussed earlier in this report), and improve communication within the financial services sector. 
In its role as sector liaison, Treasury has not undertaken a comprehensive assessment of the potential use of public policy tools—such as grants, tax incentives, and regulations—by the federal government to encourage increased private sector participation, as called for in federal CIP policy. In addition to Treasury efforts, other federal CIP-related entities have taken steps to encourage the participation of the financial services sector in CIP. To fulfill Treasury’s role in CIP, the Secretary of the Treasury designated the Assistant Secretary for Financial Institutions as the sector liaison for the financial services sector, who works with the sector coordinator—the private sector’s focal point for the industry. According to Treasury officials, Treasury strives to ensure that there are open lines of communication between the government and the private sector and voluntarily participates in industry groups of which Treasury is not an official member. For example, Treasury is involved with groups such as FSSCC, FS-ISAC, and BITS. Treasury also facilitates interaction between CIP Board committees and other government entities involved in CIP and seeks a role in coordinating government and private-sector efforts with the goal of eliminating unnecessary redundancy. In addition to serving as the sector liaison, Treasury’s Assistant Secretary for Financial Institutions also serves as the chair of FBIIC—a standing committee of the President’s Critical Infrastructure Protection Board that was established by Executive Order 13231 in October 2001 and was initiated by the Secretary of the Treasury in January 2002. It is charged with coordinating federal and state financial regulatory efforts to improve the reliability and security of U.S. financial systems. Members of FBIIC include representatives of the federal government’s financial regulatory agencies as well as state regulators. 
The committee also works with the sector coordinator to leverage industry initiatives and coordinate private-sector outreach related to CIP. Its members stated that, as part of its responsibilities, FBIIC has initiated a number of efforts. For example, it has formed working groups on various subjects, including vulnerability assessment, communications, international affairs, and legislative affairs. In addition, FBIIC developed a policy for Government Emergency Telecommunications Service (GETS) cards and is involved in increasing financial institutions' participation in the Telecommunications Service Priority (TSP) program. We plan to discuss FBIIC's actions in response to the September 11, 2001, terrorist attacks in further detail in another report requested by this committee. FBIIC also held meetings among the regulatory agencies to share lessons learned about contingency planning operations and created a vulnerability assessment working group. In addition, it is working with the National Communications System and the Federal Communications Commission on telecommunications reliability and developing secure communication methods for regulatory agencies. Further, FBIIC representatives participate in private-sector professional conferences and seminars to promote CIP. Treasury and regulatory agency officials stated that a constructive relationship has developed between Treasury, the regulators, and the financial services sector because of their mutual, long-standing efforts to improve the financial services industry and the assistance provided by the regulators when crises occur, such as during natural disasters. PDD 63 stated that sector liaisons should identify and assess economic incentives, such as public policy tools—grants, tax incentives, or regulation—to encourage desired CIP behavior in the sector. 
It further stated that “the incentives that the market provides are the first choice for addressing the problem of critical infrastructure protection; regulation will be used only in the face of a material failure of the market to protect the health, safety or well-being of the American people.” The National Strategy for Homeland Security reiterated the need to use all available policy tools to raise the security of the nation’s critical infrastructures. It discussed the possible need for incentives for the private sector to adopt security measures or invest in improved safety technologies. It also stated that the federal government will need to rely on regulation in some cases. In addition, the national strategy for the financial services sector recognized that the sector needs to explore funding options to support its activities. According to a Treasury official, the department has not undertaken a comprehensive assessment of the potential use of public policy tools to encourage the financial services sector in implementing CIP-related efforts. Treasury has instead focused on what it considers to be more important priorities, including establishing better relationships and methods of communication between regulators, performing vulnerability assessments, and establishing GETS policy. Without appropriate consideration of public policy tools, private sector participation in sector-related CIP efforts may not reach its full potential. Different models are being used in other critical infrastructure protection sectors for funding CIP activities. For example, the Environmental Protection Agency reported providing 449 grants to assist large drinking water utilities in developing vulnerability assessments, emergency response/operating plans, security enhancement plans and designs, or a combination of these efforts. In a different approach, the American Chemistry Council requires members to perform enhanced security activities, including vulnerability assessments. 
Other federal CIP entities coordinate with the financial services sector. For example, NIPC coordinates the efforts of the ISACs, including FS-ISAC. According to NIPC officials, the memorandum of understanding has already led to increased information sharing between NIPC and FS-ISAC. These officials informed us that most of the information sharing agreements with the ISACs contain cyber and physical incident reporting thresholds specific to the industry. In response to our previous recommendations, these officials also told us that a new ISAC development and support unit had been created, whose mission is to enhance cooperation and trust between the public and private sectors, resulting in a two-way sharing of information. In addition, the Department of Commerce’s CIAO is involved with outreach and education programs in the private sector. Because it is a national organization, CIAO covers the financial services sector as only one component of the nation’s critical infrastructure. CIAO officials stated that it is important to include financial services representatives in as many CIP activities as possible. CIAO works in part with the financial services sector to educate the public and raise its awareness of and participation in CIP efforts and to integrate infrastructure assurance objectives into both the public and private sectors. Finally, as previously mentioned, the President’s Special Advisor for Cyberspace Security chairs the Critical Infrastructure Protection Board and works closely with the federal government and the private sector to coordinate protection of the nation’s critical infrastructure information systems, including those in the financial services industry. The Special Advisor is also tasked with coordinating intergovernmental agency efforts to secure information systems. 
Several officials from the financial services sector told us that the Special Advisor has taken an active role in promoting governmental partnership efforts, enjoys a strong relationship with the financial services sector, and advocates initiatives sponsored by the private sector, such as BITS's Product Certification Program. Federal regulators have taken several steps to address information security issues. These steps include consideration of information security risks in determining the scope of their examinations of financial institutions, development of guidance for examining information security and for protecting against cyber threats, and reviewing the practices of information technology service providers. Regulators have historically played a role in the oversight of the financial services sector. As part of that oversight, financial institution regulators and SEC have generally considered information security risks in determining the scope of their examinations. The purposes of such risk-based examinations vary and may not be specifically focused on critical infrastructure protection. For example, safety and soundness examinations of financial institutions include evaluating compliance with laws such as section 501(b) of the Gramm-Leach-Bliley Act. SEC's examinations of securities exchanges, clearing organizations, and certain electronic communication networks are intended to determine whether they comply with SEC's voluntary guidance, the Automation Review Policy program. The program is focused on certain operational issues, including information technology, of which information security is a part. SEC's examinations of broker-dealers' information technology were initiated in July 2001 as a result of the Gramm-Leach-Bliley Act. These examinations are targeted at the adequacy of safeguards against unauthorized disclosure of customer information. In addition, the nature and scope of information security evaluations at regulated entities vary. 
Regulators determine the scope of examinations through risk analysis and the examiner's judgment. Consequently, because information security is considered in relation to other areas in determining the scope of the examination, it may receive only a limited review. Because reviewing bank examinations was outside the scope of this review, we were unable to independently determine how often and how extensively regulatory agencies reviewed information security at the entities they oversee. Nonetheless, through examinations, regulators obtain information about the adequacy of information security at certain individual financial institutions, which can be used to suggest improvements where appropriate. The nature and extent of such information varies and, according to a Treasury official, examinations are not integrated with the federal government's CIP efforts. According to FFIEC officials, examinations by the FFIEC agencies—and their results—are confidential by law and are therefore not shared between FFIEC member agencies or with non-FFIEC member agencies. For example, according to the Federal Reserve, information sharing is limited by banking laws, trade secret laws, and the Federal Reserve's regulations. As discussed earlier in this report, Treasury has not undertaken a comprehensive assessment of the potential use of public policy tools, such as grants, tax incentives, and regulations (including regulations related to examinations). However, the National Strategy for Homeland Security reiterated the need to use all available policy tools to raise the security of the nation's critical infrastructures. Regulators are taking other actions to address information security as well. 
FFIEC is in the process of updating its Information Systems Examination Handbook, which provides regulators with general guidance on information systems and other areas of technology examinations, such as business continuity, information security, electronic banking, vendor management, payment systems, and audit. Also, as discussed earlier in this report, FRS, OCC, and SEC (with the participation of the Federal Reserve Bank of New York and the New York State Banking Department) issued a draft white paper on August 30, 2002, that identified certain critical financial markets and proposed sound practices for strengthening the resilience of those markets. In addition, over the years the regulators have issued numerous guidance documents regarding information security. For example, in 2001, FFIEC agencies issued detailed enforceable guidelines to carry out the requirements set forth in Section 501(b) of the Gramm-Leach-Bliley Act regarding the safeguarding of customer information by insured depository institutions. We plan to discuss related actions taken by the regulators in response to the September 11, 2001, terrorist attacks in further detail in another report requested by this committee. The computer interconnectivity used by the financial services sector for customer services and operations poses significant information security risks to computer systems and to the critical operations and infrastructures they support. Moreover, the dependence of the financial services sector on other critical infrastructures poses additional risk. Industry groups in the financial services sector have taken several steps to share information on cyber threats and to address these threats, including developing a sector strategy. The strategy identifies a framework for sector actions necessary to identify, assess, and respond to sectorwide threats, including completing a sectorwide vulnerability assessment. 
However, the financial services industry has not developed detailed interim objectives; detailed tasks, timeframes, or responsibilities for implementation; or processes for measuring progress in implementing the sector’s strategy. Federal entities have taken a number of steps to coordinate federal government and private-sector efforts and to assist the financial services sector in its CIP effort, but Treasury has not undertaken a comprehensive assessment, as called for in federal CIP policy, of the potential use of public policy tools to encourage increased sector participation. Consideration of the need for public policy tools is important to encouraging private sector participation in sector-related CIP efforts, including implementation of the sector’s strategy. Finally, federal regulators have taken several steps to address information security issues, including consideration of information security risks in determining the scope of their examinations of financial institutions and development of guidance for examining information security and for protecting against cyber threats. To improve the likelihood of success of the financial services sector’s CIP efforts, we recommend that the Secretary of the Treasury direct the Assistant Secretary for Financial Institutions, the banking and finance sector liaison, to coordinate with the industry in its efforts to update the sector’s National Strategy for Critical Infrastructure Assurance and in establishing interim objectives, detailed tasks, timeframes, and responsibilities for implementing it and a process for monitoring progress. As part of these efforts, the Assistant Secretary should assess the need for grants, tax incentives, regulation, or other public policy tools to assist the industry in meeting its goals. We received written comments on a draft of this report from the Department of the Treasury and the Securities and Exchange Commission (see apps. II and III, respectively). 
In Treasury's response, the Assistant Secretary for Financial Institutions highlighted the department's efforts to meet its CIP responsibilities. In addition, he recognized the need to continue to work with the sector to increase its resiliency, including consideration of appropriate incentives. In the Securities and Exchange Commission's response, the Director of the Division of Market Regulation and the Director of Compliance Inspections and Examinations stated that they look forward to working with Treasury to implement the recommendations. We also received technical comments from the Federal Deposit Insurance Corporation, the FBI's National Infrastructure Protection Center, the Federal Reserve, the Office of the Comptroller of the Currency, and the Securities and Exchange Commission. In addition, we received written and oral technical comments from ABA, BITS, FS-ISAC, FSSCC, the Financial Services Sector Coordinator, and SIA. Comments from all of these organizations have been incorporated into the report, as appropriate. The Department of Commerce's CIAO, the Office of Thrift Supervision, and the National Credit Union Administration reviewed a draft of the report and had no comments. As we agreed with your staff, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. At that time, we will send copies of this report to other interested congressional committees and the heads of the agencies discussed in this report, as well as to the private-sector participants and other relevant agencies. In addition, this report will be available at no charge on our Web site at http://www.gao.gov. If you or your offices have any questions about matters discussed in this report, please contact me at (202) 512-3317 or Michael Gilmore at (202) 512-9374. We can also be reached by e-mail at daceyr@gao.gov or gilmorem@gao.gov, respectively. Key contributors to this report are listed in appendix IV. 
Our objectives were to identify the (1) general nature of the cyber threats faced by the financial services industry; (2) steps the financial services industry has taken to share information on and to address threats, vulnerabilities, and incidents; (3) relationship between government and private sector efforts to protect the financial services industry’s critical infrastructures; and (4) actions financial regulators have taken to address these cyber threats. To accomplish these objectives, we reviewed relevant documents, policy, and directives and interviewed pertinent officials from federal agencies and the private sector involved in efforts to enhance the security of the financial services industry. To determine the general nature of the cyber threats faced by the financial services industry, we reviewed relevant reports, such as the 1997 report of the President’s Commission on Critical Infrastructure Protection and the sector’s strategy, Defending America’s Cyberspace: Banking and Finance Sector: The National Strategy for Critical Infrastructure Assurance, Version 1.0, May 13, 2002. We also reviewed documentation or interviewed officials from industry groups, including the American Bankers Association (ABA), the BITS Technology Group, the Financial Services Information Sharing and Analysis Center (FS-ISAC), and the Financial Services Sector Coordinating Council (FSSCC). In addition, we held discussions with officials at the Department of Commerce’s Critical Infrastructure Assurance Office (CIAO), the National Infrastructure Protection Center (NIPC) at the Federal Bureau of Investigation (FBI), the Department of the Treasury’s Office of the Assistant Secretary for Financial Institutions, the Federal Financial Institutions Examination Council (FFIEC) and its member agencies, the Financial and Banking Information Infrastructure Committee (FBIIC), and the Securities and Exchange Commission (SEC), among others. 
To determine the steps the financial services industry has taken to share information on and to address threats, vulnerabilities, and incidents, we reviewed relevant sectorwide documents, such as the sector’s strategy, Defending America’s Cyberspace: Banking and Finance Sector: The National Strategy for Critical Infrastructure Assurance, Version 1.0, May 13, 2002, and documents from industry groups, such as FSSCC and FS-ISAC. We also held discussions with the banking and finance sector coordinator, ABA, and BITS. To determine the relationship between government and private sector efforts to protect the financial services industry’s critical infrastructures, we reviewed relevant documents, including prior GAO reports and testimonies, and held discussions with federal officials from CIAO, NIPC, the Department of the Treasury’s Office of the Assistant Secretary for Financial Institutions, FFIEC, FBIIC, and SEC. In addition, we interviewed officials from industry groups, including ABA and BITS, as well as the banking and finance sector coordinator. To determine the actions financial regulators have taken to address these cyber threats, we reviewed relevant reports, guidelines, and policies, such as FFIEC’s Information Systems Examination Handbook. We also interviewed officials from the Treasury’s Office of the Assistant Secretary for Financial Institutions, FFIEC, FBIIC, SEC, and the Board of Governors of the Federal Reserve System. We performed our work in Washington, D.C., from July to November 2002 in accordance with generally accepted government auditing standards. We did not evaluate the frequency or extent of examinations performed by the federal regulators or SEC. Key contributors to this report include Michael Gilmore, Cody Goebel, Joanne Fiorino, Dave Hinchman, Daniel Hoy, Nick Marinos, James McDermott, Dave Powner, Jamelyn Smith, and Karen Tremba. 
The General Accounting Office, the investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents at no cost is through the Internet. GAO’s Web site (www.gao.gov) contains abstracts and full-text files of current reports and testimony and an expanding archive of older products. The Web site features a search engine to help you locate documents using key words and phrases. You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as “Today’s Reports,” on its Web site daily. The list contains links to the full-text document files. To have GAO e-mail this list to you every afternoon, go to www.gao.gov and select “Subscribe to GAO Mailing Lists” under the “Order GAO Products” heading.
Since 1998, the federal government has taken steps to protect the nation's critical infrastructures, including developing partnerships between the public and private sectors. These cyber and physical infrastructures, both public and private, which include the financial services sector, are essential to national security, economic security, and/or public health and safety. GAO was asked to review (1) the general nature of the cyber threats faced by the financial services industry; (2) steps the financial services industry has taken to share information on and to address threats, vulnerabilities, and incidents; (3) the relationship between government and private sector efforts to protect the financial services industry's critical infrastructures; and (4) actions financial regulators have taken to address these cyber threats. The types of cyber threats that the financial services industry faces are similar to those faced by other critical infrastructure sectors: attacks from individuals and groups with malicious intent, such as criminals, terrorists, and foreign intelligence services. However, the potential for monetary gains and economic disruptions may increase the sector's attractiveness as a target. Financial services industry groups have taken steps, and plan continuing action, to address cyber threats and improve information sharing. First, industry representatives, under the sponsorship of the U.S. Department of the Treasury, collaboratively developed a sector strategy that discusses additional efforts necessary to identify, assess, and respond to sectorwide threats. However, the financial services sector has not developed detailed plans for implementing its strategy. Second, the private sector's Financial Services Information Sharing and Analysis Center was formed to facilitate sharing of cyber-related information. Third, several other industry groups are taking steps to better coordinate industry efforts and to improve information security across the sector. 
Several federal entities play critical roles in partnering with the private sector to protect the financial services industry's critical infrastructures. For example, the Department of the Treasury is the sector liaison for coordinating public and private efforts and chairs the federal Financial and Banking Information Infrastructure Committee, which coordinates regulatory efforts. As part of its efforts, Treasury has taken steps designed to establish better relationships and methods of communication between regulators, assess vulnerabilities, and improve communications within the financial services sector. In its role as sector liaison, Treasury has not undertaken a comprehensive assessment of the potential use of public policy tools by the federal government to encourage increased participation by the private sector. The table below shows the key public and private organizations involved in critical infrastructure protection. Federal regulators, such as the Federal Reserve System and the Securities and Exchange Commission, have taken several steps to address information security issues. These include consideration of information security risks in determining the scope of their examinations of financial institutions and development of guidance for examining information security and for protecting against cyber threats.
The way DOD develops and produces its major weapon systems has produced disappointing outcomes. There is a vast difference between DOD’s budgeting plans and the reality of the cost of its systems. Performance, if defined as the capability that actually reaches the warfighter, often falls short, as cost increases result in smaller quantities of produced systems and schedule slips. Performance, if defined as an acceptable return on investment, has not lived up to promises. Table 1 illustrates seven programs with a significant reduction in buying power; we have reported similar outcomes in many more programs. For example, the Air Force initially planned to buy 648 F/A-22 Raptor tactical aircraft at a program acquisition unit cost of about $125 million (fiscal year 2006 dollars). Technology and design components matured late in the development of the aircraft, which contributed to cost growth and schedule delays. Now, the Air Force plans to buy 181 aircraft at a program acquisition unit cost of about $361 million, an almost 189 percent increase. Furthermore, the conventional acquisition process is not agile enough for today’s demands. Congress has expressed concern that urgent warfighting requirements are not being met in the most expeditious manner and has put in place several authorities for rapid acquisition to work around the process. The U.S. Joint Forces Command’s Limited Acquisition Authority and the Secretary of Defense’s Rapid Acquisition Authority are intended to get warfighting capability to the field faster. According to U.S. Joint Forces Command officials, it is only through Limited Acquisition Authority that the command can satisfy the unanticipated, unbudgeted, urgent mission needs of other combatant commands. With a formal process that requires as many as 5, 10, or 15 years to get from program start to production, such workarounds are needed to meet warfighters’ needs. Today we are at a crossroads. 
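The F/A-22 buying-power loss can be verified with simple arithmetic; the sketch below merely restates the fiscal year 2006 dollar amounts quoted above:

```python
# F/A-22 program acquisition unit cost, fiscal year 2006 dollars (millions),
# as cited in the text.
planned_unit_cost = 125   # original plan: 648 aircraft
current_unit_cost = 361   # revised plan: 181 aircraft

# Percentage growth in unit cost.
increase_pct = (current_unit_cost - planned_unit_cost) / planned_unit_cost * 100
print(round(increase_pct))  # 189, i.e., an almost 189 percent increase
```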
Our nation is on an unsustainable fiscal path. Long-term budget simulations by GAO, the Congressional Budget Office, and others show that, over the long term, we face a large and growing structural deficit due primarily to known demographic trends and rising health care costs. Continuing on this unsustainable fiscal path will gradually erode, if not suddenly damage, our economy, our standard of living, and ultimately our national security. Federal discretionary spending, along with other federal policies and programs, will face serious budget pressures in the coming years stemming from new budgetary demands and demographic trends. Defense spending falls within the discretionary spending accounts. Further, current military operations, such as those in Afghanistan and Iraq, consume a large share of DOD resources and are causing faster wear on existing weapons. Refurbishment or replacement sooner than planned is putting further pressure on DOD’s investment accounts. At the same time DOD is facing these problems, programs are commanding larger budgets. DOD is undertaking new efforts that are expected to be the most expensive and complex ever and on which DOD is heavily relying to fundamentally transform military operations. And it is giving contractors increased program management responsibilities to develop requirements, design products, and select major system and subsystem contractors. Table 2 shows that just 5 years ago, the top five weapon systems cost about $291 billion combined; today, the top five weapon systems cost about $550 billion. If these megasystems are managed with traditional margins of error, the financial consequences can be dire, especially in light of a constrained discretionary budget. Success for acquisitions means making sound decisions to ensure that program investments are getting promised returns. 
In the commercial world, successful companies have no choice but to adopt processes and cultures that emphasize basing decisions on knowledge, reducing risks prior to undertaking new efforts, producing realistic cost and schedule estimates, and building in quality in order to deliver products to customers at the right price, the right time, and the right cost. At first blush, it would seem DOD’s definition of success would be very similar: deliver capability to the warfighter at the right price, the right time, and the right cost. However, this is not happening within DOD. In an important sense, success has come to mean starting and continuing programs even when cost, schedule, and quantities must be sacrificed. DOD knows what to do to improve acquisitions but finds it difficult to apply the controls or assign the accountability necessary for successful outcomes. To understand why these problems persist, we must look not just at the product development process but at the underlying requirements and budgeting processes to define problems and find solutions. Over the last several years, we have undertaken a body of work that examines weapon acquisition issues from a perspective that draws upon lessons learned from best product development practices. Leading commercial firms expect that their program managers will deliver high-quality products on time and within budget. Doing otherwise could result in the customer walking away. Thus, those firms have created an environment and adopted practices that put their program managers in a good position to succeed in meeting these expectations. Collectively, these practices comprise a process that is anchored in knowledge. It is a process in which technology development and product development are treated differently and managed separately. The process of developing technology culminates in discovery—the gathering of knowledge—and must, by its nature, allow room for unexpected results and delays. 
Leading firms do not ask their product managers to develop technology. Successful programs give responsibility for maturing technologies to science and technology organizations, rather than to the program or product development managers. The process of developing a product culminates in delivery and, therefore, gives great weight to design and production. The firms demand—and receive—specific knowledge about a new product before production begins. A program does not go forward unless the strong business case on which it was originally justified continues to hold true. Successful product developers ensure that a high level of knowledge is achieved at key junctures in development. We characterize these junctures as knowledge points. These knowledge points and associated indicators are defined as follows:

Knowledge point 1: Resources and needs match. This point occurs when a sound business case is made for the product—that is, a match is made between the customer’s requirements and the product developer’s available resources in terms of knowledge, time, money, and capacity. Achieving a high level of technology maturity at the start of system development is an important indicator of whether this match has been made. This means that the technologies needed to meet essential product requirements have been demonstrated to work in their intended environment.

Knowledge point 2: Product design is stable. This point occurs when a program determines that a product’s design is stable—that is, it will meet customer requirements, as well as cost, schedule, and reliability targets. A best practice is to achieve design stability at the system-level critical design review, usually held midway through development. Completion of at least 90 percent of engineering drawings at this review provides tangible evidence that the design is stable.

Knowledge point 3: Production processes are mature. 
This point is achieved when it has been demonstrated that the company can manufacture the product within cost, schedule, and quality targets. A best practice is to ensure that all key manufacturing processes are in statistical control—that is, they are repeatable, sustainable, and capable of consistently producing parts within the product’s quality tolerances and standards—at the start of production. A result of this knowledge-based process is evolutionary product development, an incremental approach that enables developers to rely more on available resources rather than making promises about unproven technologies. Predictability is a key to success: successful product developers know that invention cannot be scheduled and that its cost is difficult to estimate. They do not bring technology into new product development unless that technology has been demonstrated to meet the user’s requirements. Allowing technology development to spill over into product development puts an extra burden on decision makers and provides a weak foundation for making product development estimates. While the user may not initially receive the ultimate capability under this approach, the initial product is available sooner and at a lower, more predictable cost. There is a synergy in this process, as the attainment of each successive knowledge point builds on the preceding one. Metrics gauge when the requisite level of knowledge has been attained. Controls are used to attain a high level of knowledge before making additional significant investments. Controls are considered effective if they are backed by measurable criteria and if decision makers are required to consider them before deciding to advance a program to the next level. 
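As an illustration only—the function names and inputs below are hypothetical, not a DOD or GAO tool—the three knowledge points and the measurable criteria described above can be expressed as simple pass/fail gates:

```python
# Illustrative sketch of the three knowledge-point gates described above.
# Names and thresholds restate the testimony's criteria; nothing here is
# an official acquisition-review implementation.

def knowledge_point_1(techs_demonstrated, critical_techs_total):
    """KP1: every critical technology demonstrated in its intended environment
    before system development starts."""
    return techs_demonstrated == critical_techs_total

def knowledge_point_2(drawings_complete_pct):
    """KP2: at least 90 percent of engineering drawings complete at the
    system-level critical design review."""
    return drawings_complete_pct >= 90

def knowledge_point_3(processes_in_control, key_processes_total):
    """KP3: all key manufacturing processes in statistical control at the
    start of production."""
    return processes_in_control == key_processes_total

# A hypothetical program with only 1 of 50 critical technologies mature
# would fail the first gate and, under this process, not proceed.
print(knowledge_point_1(1, 50))   # False
```

Each gate must pass before additional significant investment is approved, which is what makes the controls "backed by measurable criteria" in the sense used above.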
Effective controls help decision makers gauge progress in meeting cost, schedule, and performance goals and ensure that managers will (1) conduct activities to capture relevant product development knowledge, (2) provide evidence that knowledge was captured, and (3) hold decision reviews to determine that appropriate knowledge was captured to move to the next phase. The result is a product development process that holds decision makers accountable and delivers excellent results in a predictable manner. A hallmark of an executable program is shorter development cycle times, which allow more systems to enter production more quickly. DOD itself suggests that product development should be limited to about 5 years. Time constraints such as this are important because they serve to limit the initial product’s requirements. Limiting product development cycle times to 5 years or less would allow for more frequent assimilation of new technologies into weapon systems, speed new technology to the warfighter, hold program managers accountable, and make production work more frequent and predictable, allowing contractors and the industrial base to profit by being efficient. DOD’s policy adopts the knowledge-based, evolutionary approach used by leading commercial companies that enables developers to rely more on available resources rather than making promises about unproven technologies. The policy provides a framework for developers to ask themselves at key decision points whether they have the knowledge they need to move to the next phase of acquisition. For example, DOD Directive 5000.1 states that program managers “shall provide knowledge about key aspects of a system at key points in the acquisition process,” such as demonstrating “technologies in a relevant environment … prior to program initiation.” This knowledge-based framework can help managers gain the confidence they need to make significant and sound investment decisions for major weapon systems. 
In placing greater emphasis on evolutionary product development, the policy sets up a more manageable environment for achieving knowledge. However, the longstanding problem of programs beginning development with immature technologies continues to be seen on even the newest programs. Several programs approved to begin product development within only the last few years began with most of their technologies immature and have already experienced significant development cost increases. In the case of the Army’s Future Combat Systems, nearly 2 years after program launch and with $4.6 billion invested, only 1 of more than 50 critical technologies is considered mature, and the research and development cost estimate has grown by 48 percent. In March 2005, we reported that very few programs—15 percent of the programs we assessed—began development having demonstrated high levels of technology maturity. Acquisition unit costs for programs leveraging mature technologies increased by less than 1 percent, whereas programs that started development with immature technologies experienced an average acquisition unit cost increase of nearly 21 percent over the first full estimate. The decision to start a new program is the most highly leveraged point in the product development process. Establishing a sound business case for individual programs depends on disciplined requirements and funding processes. Our work has shown that DOD’s requirements process generates more demand for new programs than fiscal resources can support. DOD compounds the problem by approving so many highly complex and interdependent programs. Moreover, once a program is approved, requirements can be added along the way that increase costs and risks. Once too many programs are approved to start, the budgeting process exacerbates problems. 
Because programs are funded annually and departmentwide cross-portfolio priorities have not been established, competition for funding continues over time, forcing programs to view success as the ability to secure the next funding increment rather than as delivering capabilities when and as promised. As a result, there is pressure to suppress bad news about programs, which could endanger funding and support, as well as to skip testing because of its high cost. Concurrently, when faced with budget constraints, senior officials tend to make across-the-board cuts to all programs rather than make the hard decisions as to which ones to keep and which ones to cancel or cut back. In many cases, the system delivers less performance than promised when initial investment decisions were made. Thus, the conditions we encounter time after time produce a predictable outcome. The acquisition environment encourages launching product developments that embody more technical unknowns and less knowledge about the performance and production risks they entail. A new weapon system is encouraged to possess performance features that significantly distinguish it from other systems and promise the best capability. A new program will not be approved unless its costs fall within forecasts of available funds and, therefore, looks affordable. Because cost and schedule estimates are comparatively soft at that time, successfully competing for funds encourages the program’s estimates to be squeezed into the funds available. Consequently, DOD program managers have incentives to promote performance features and design characteristics that rely on immature technologies, and decision makers lack the knowledge they need to make good decisions. A path can be laid out to make decisions that will lead to better program choices and better outcomes. Much of this is known and has been recommended by one study or another. GAO itself has issued hundreds of reports. 
The key recommendations we have made have focused on the product development process: constraining individual program requirements by working within available resources and by leveraging systems engineering; establishing clear business cases for each individual investment; enabling science and technology organizations to shoulder the burden of maturing technologies; ensuring that the workforce is capable of managing requirements trades, source selection, and knowledge-based acquisition strategies; and establishing and enforcing controls to ensure that appropriate knowledge is captured and used at critical junctures before moving programs forward and investing more money. As I have outlined above, however, setting the right conditions for successful acquisition outcomes goes beyond product development. We are currently examining how to bring discipline to the Department’s requirements and budgetary processes and the role played by the program manager. As we conduct this work, we will be asking who is currently accountable for acquisition decisions; who should be held accountable; how much deviation from the original business case is allowed before the entire program investment is reconsidered; and what the penalty is when investments do not result in meeting promised warfighter needs. We can make hard, but thoughtful, decisions now or postpone them, allowing budgetary realities to force draconian decisions later. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions that you or other members of the subcommittee may have. For further information regarding this testimony, please contact Katherine V. Schinasi at (202) 512-4841 or schinasik@gao.gov. Individuals making key contributions to this testimony included Paul L. Francis, David B. Best, David J. Hand, Alan R. Frazier, Adam Vodraska, and Lily J. Chin.

Space Acquisitions: Stronger Development Practices and Investment Planning Needed to Address Continuing Problems. GAO-05-891T. 
Washington, D.C.: July 12, 2005.
Air Force Procurement: Protests Challenging Role of Biased Official Sustained. GAO-05-436T. Washington, D.C.: April 14, 2005.
Tactical Aircraft: F/A-22 and JSF Acquisition Plans and Implications for Tactical Aircraft Modernization. GAO-05-591T. Washington, D.C.: April 6, 2005.
Defense Acquisitions: Assessments of Selected Major Weapon Programs. GAO-05-301. Washington, D.C.: March 31, 2005.
Defense Acquisitions: Future Combat Systems Challenges and Prospects for Success. GAO-05-428T. Washington, D.C.: March 16, 2005.
Defense Acquisitions: Stronger Management Practices Are Needed to Improve DOD’s Software-Intensive Weapon Acquisitions. GAO-04-393. Washington, D.C.: March 1, 2004.
Defense Acquisitions: DOD’s Revised Policy Emphasizes Best Practices, but More Controls Are Needed. GAO-04-53. Washington, D.C.: November 10, 2003.
Best Practices: Setting Requirements Differently Could Reduce Weapon Systems’ Total Ownership Costs. GAO-03-57. Washington, D.C.: February 11, 2003.
Best Practices: Capturing Design and Manufacturing Knowledge Early Improves Acquisition Outcomes. GAO-02-701. Washington, D.C.: July 15, 2002.
Defense Acquisitions: DOD Faces Challenges in Implementing Best Practices. GAO-02-469T. Washington, D.C.: February 27, 2002.
Best Practices: Better Matching of Needs and Resources Will Lead to Better Weapon System Outcomes. GAO-01-288. Washington, D.C.: March 8, 2001.
Best Practices: A More Constructive Test Approach Is Key to Better Weapon System Outcomes. GAO/NSIAD-00-199. Washington, D.C.: July 31, 2000.
Defense Acquisition: Employing Best Practices Can Shape Better Weapon System Decisions. GAO/T-NSIAD-00-137. Washington, D.C.: April 26, 2000.
Best Practices: DOD Training Can Do More to Help Weapon System Programs Implement Best Practices. GAO/NSIAD-99-206. Washington, D.C.: August 16, 1999.
Best Practices: Better Management of Technology Development Can Improve Weapon System Outcomes. GAO/NSIAD-99-162. 
Washington, D.C.: July 30, 1999.
Defense Acquisitions: Best Commercial Practices Can Improve Program Outcomes. GAO/T-NSIAD-99-116. Washington, D.C.: March 17, 1999.
Defense Acquisition: Improved Program Outcomes Are Possible. GAO/T-NSIAD-98-123. Washington, D.C.: March 18, 1998.
Best Practices: Successful Application to Weapon Acquisition Requires Changes in DOD’s Environment. GAO/NSIAD-98-56. Washington, D.C.: February 24, 1998.
Major Acquisitions: Significant Changes Underway in DOD’s Earned Value Management Process. GAO/NSIAD-97-108. Washington, D.C.: May 5, 1997.
Best Practices: Commercial Quality Assurance Practices Offer Improvements for DOD. GAO/NSIAD-96-162. Washington, D.C.: August 26, 1996.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Department of Defense (DOD) is shepherding a portfolio of major weapon systems valued at about $1.3 trillion. How DOD is managing this investment has been a matter of concern for some time. Since 1990, GAO has designated DOD's weapon system acquisitions as a high-risk area for fraud, waste, abuse, and mismanagement. DOD has experienced cost overruns, missed deadlines, performance shortfalls, and persistent management problems. In light of the serious budget pressures facing the nation, such problems are especially troubling. GAO has issued hundreds of reports addressing broad-based issues, such as best practices, as well as reports focusing on individual acquisitions. These reports have included many recommendations. Congress asked GAO to testify on possible problems with and improvements to defense acquisition policy. In doing so, we highlight the risks of conducting business as usual and identify some of the solutions we have found in successful acquisition programs and organizations. DOD is facing a cascading number of problems in managing its acquisitions. Cost increases incurred while developing new weapon systems mean DOD cannot produce as many of those weapons as intended nor can it be relied on to deliver to the warfighter when promised. Military operations in Afghanistan and Iraq are consuming a large share of DOD resources and causing the department to invest more money sooner than expected to replace or fix existing weapons. Meanwhile, DOD is intent on transforming military operations and has its eye on multiple megasystems that are expected to be the most expensive and complex ever. These costly conditions are running head-on into the nation's unsustainable fiscal path. DOD knows what to do to achieve more successful outcomes but finds it difficult to apply the necessary discipline and controls or assign much-needed accountability. 
DOD has written into policy an approach that emphasizes attaining a certain level of knowledge at critical junctures before managers agree to invest more money in the next phase of weapon system development. This knowledge-based approach results in evolutionary--that is, incremental, manageable, predictable--development and inserts several controls to help managers gauge progress in meeting cost, schedule, and performance goals. But DOD is not employing the knowledge-based approach, discipline is lacking, and business cases are weak. Persistent practices show a decided lack of restraint. DOD's requirements process generates more demand for new programs than fiscal resources can support. DOD compounds the problem by approving so many highly complex and interdependent programs. Once too many programs are approved to start, the budgeting process exacerbates problems. Because programs are funded annually and departmentwide cross-portfolio priorities have not been established, competition for funding continues over time, forcing programs to view success as the ability to secure the next funding increment rather than as delivering capabilities when and as promised. Improving this condition requires discipline in the requirements and budgetary processes. Determining who should be held accountable for deviations and what penalties are needed is crucial. If DOD cannot discipline itself now to execute programs within fiscal realities, then draconian, budget-driven decisions may have to be made later.
The Coast Guard is a multimission, maritime military service within DHS. The Coast Guard’s responsibilities fall into two general categories—those related to homeland security missions, such as port security and vessel escorts, and those related to non–homeland security missions, such as search and rescue and polar ice operations. To carry out these responsibilities, the Coast Guard operates a number of vessels and aircraft and, through its Deepwater Program, is currently modernizing or replacing those assets. At the start of the Deepwater Program in the late 1990s, the Coast Guard chose to use a system-of-systems acquisition strategy. A system-of-systems is defined as a set or arrangement of assets that results when independent assets are integrated into a larger system that delivers unique capabilities. As the systems integrator, ICGS was responsible for designing, constructing, deploying, supporting, and integrating the Deepwater assets into a system-of-systems. Under this approach, the Coast Guard provided the contractor with broad, overall performance specifications—such as the ability to interdict illegal immigrants—and ICGS determined the asset specifications. According to Coast Guard officials, the ICGS proposal was submitted and priced as a package; that is, the Coast Guard bought the entire solution and could not reject any individual component. In November 2006, the Coast Guard submitted a cost, schedule, and performance baseline to DHS that established the total acquisition cost of the ICGS solution at $24.2 billion and projected that the acquisition would be completed in 2027. In May 2007, shortly after the Coast Guard had announced its intention to take over the role of systems integrator, DHS approved the baseline. Table 1 describes in more detail the assets the Coast Guard is planning to procure according to approved baselines. 
In deciding to take over the systems integrator role from ICGS, the Coast Guard has taken steps to increase government control and accountability by, among other things, applying the disciplined program management processes in its Major Systems Acquisition Manual (MSAM) to Deepwater assets. The MSAM requires documentation and approval of acquisition decisions at key points in a program’s life-cycle by designated officials at high levels. The Coast Guard has established a number of goals and deadlines for completing these activities in its Blueprint for Acquisition Reform, which was initially released in July 2007 and was last updated in July 2008. The Coast Guard has taken three major steps to become the systems integrator for the Deepwater Program. It has defined and assigned systems integrator functions to Coast Guard stakeholders, begun to reassess the capabilities and mix of assets it requires, and significantly reduced the contractual responsibilities of ICGS. While the Coast Guard has made progress in applying the disciplined MSAM acquisition process to its Deepwater assets, it did not meet its goal of being fully compliant by the second quarter of fiscal year 2009. In the meantime, the Coast Guard continues with production of certain assets and award of new contracts in light of what it views as pressing operational needs. The role of systems integrator involves planning, organizing, and integrating a mix of assets into a system-of-systems capability greater than the sum of the capabilities of the individual parts. ICGS’s role as systems integrator for the Deepwater Program included requirements management, systems engineering, and defining how assets would be employed by Coast Guard users in an operational setting. In addition, the contractor had technical authority over all asset design and configuration decisions. 
In 2008, the Coast Guard acknowledged that in order to assume the role of systems integrator, it needed to define systems integrator functions and assign them to Coast Guard stakeholders. Through codified changes to internal relationships, policies, and contractual arrangements, the Coast Guard has done so. For example, the Coast Guard formally designated certain directorates as technical authorities to establish, monitor, and approve technical standards for Deepwater assets related to design, construction, maintenance, logistics, C4ISR, and life-cycle staffing and training. The Coast Guard’s capabilities directorate determines operational requirements and the asset mix to satisfy those requirements and establishes priorities. This directorate is expected to collaborate with the technical authorities to ensure that the Coast Guard’s technical standards are incorporated during the requirements development process. Further, the acquisition directorate’s program and project managers are to be held accountable for ensuring that the assets they procure fulfill operational requirements and the technical authority standards. The relationships between Coast Guard directorates in executing their systems integrator roles are represented graphically in figure 1. When it contracted with ICGS, the Coast Guard had limited insight into how the contractor’s proposed solution would meet overall mission needs, limiting its ability to justify the proposed solution and make informed decisions about possible trade-offs. To improve its insight, the capabilities directorate has initiated a fundamental reassessment of the capabilities and mix of assets the Coast Guard needs to fulfill its Deepwater missions. The goals of this fleet mix analysis include validating mission performance requirements and revisiting the number and mix of all assets that are part of the Deepwater Program. 
A specific part of the study will be to analyze alternatives and quantities for the Offshore Patrol Cutter, an asset which accounts for a projected $8 billion of the total Deepwater costs. According to an official, the results of this analysis are expected in the summer of 2009. Coast Guard leadership plans to assess the results and make future procurement decisions based on the analysis. In conjunction with its assuming the role of systems integrator, the Coast Guard has significantly reduced the scope of work on contract with ICGS. In March 2009, the Coast Guard issued a task order to ICGS limited to tasks such as data management and quality assurance for assets currently under contract with ICGS including C4ISR, the Maritime Patrol Aircraft (MPA), and the National Security Cutter (NSC). The Coast Guard is currently developing plans to transition these functions from ICGS to the Coast Guard or an independent third party by February 2011 when this task order expires. For assets procured or planned to be procured outside of the ICGS contract such as the Offshore Patrol Cutter, systems engineering and program management functions are expected to be carried out by the Coast Guard with support from third parties and contractors. According to officials, the Coast Guard has no plans to award additional orders to ICGS for systems integrator functions within the current award term or for any work after the award term expires in January 2011. Since our June 2008 report on the Deepwater Program, and taking into account our recommendation, the Coast Guard has improved its MSAM process. For example, the process now dictates that the acquisition project and program managers work collaboratively with the technical authorities as described above. The MSAM process was revised to require acquisition planning and an analysis of alternatives for procurement to start at an earlier stage, which is intended to help inform the budget and planning processes. 
Other improvements include the adoption of our recommendation for a formal design review, Milestone 2A, before authorizing low-rate initial production. The MSAM phases and milestones are shown in figure 2. Because the Coast Guard previously exempted Deepwater from the MSAM process, assets were procured without following a disciplined program management approach. Recognizing the importance of ensuring that each acquisition project is managed through sustainable and repeatable processes and wanting to adhere to proven acquisition procedures, in July 2008, the Coast Guard set a goal of completing the MSAM acquisition management activities for all Deepwater assets by the second quarter of fiscal year 2009. However, of the 12 Deepwater assets in the concept and technology development phase or later, 9 are behind plan in terms of MSAM compliance. In the meantime, the Coast Guard has proceeded with production and awarded new contracts without all of the knowledge it needs to ensure that the capabilities it is buying will meet Coast Guard needs within cost and schedule constraints. For assets already in production, such as the MPA and the NSC, the Coast Guard has made some progress in the past year in retroactively developing acquisition documentation with the intent of providing the traceability from mission needs to operational performance that was previously lacking. For example, the Coast Guard approved an operational requirements document for the MPA in October 2008 to establish a formal performance baseline and identify attributes for testing. Through this process, the Coast Guard discovered that ICGS’s requirement for operational availability (the amount of time that an aircraft is available to perform missions) was excessive compared to the Coast Guard’s own standards. According to a Coast Guard official, the ICGS requirement would have needlessly increased costs to maintain and operate the aircraft. 
Even as the Coast Guard gains this additional knowledge about MPA requirements, it is continuing with this procurement despite not having completed operational testing. According to the MSAM, testing in an operational environment should be completed with the initial production variants of an asset to demonstrate that capabilities meet requirements before committing to larger purchases. An approved test plan helps ensure that the tests conducted are clearly linked to requirements and mission needs. While the MPA began an operational assessment in July 2008, the Coast Guard still lacked, as of March 2009, a test plan approved by DHS and endorsed by its independent test authority, the Navy’s Commander, Operational Test and Evaluation Force. With 11 of 36 MPAs already on contract, the Coast Guard has completed the operational assessment but does not plan to complete operational testing until the fiscal year 2011 time frame. Similarly, according to Coast Guard officials, operational testing of the NSC, also conducted by the Coast Guard’s independent test authority, has begun in the absence of an approved test plan, which is now expected in July 2009. By the time testing is scheduled to be completed in 2011, the Coast Guard plans to have six of eight NSCs either built or on contract. According to the MSAM process, operational requirements must be approved before procuring an asset. However, since committing to the MSAM process, the Coast Guard has awarded new contracts for assets without having all required acquisition documentation in place, due to its determination that the need for these capabilities is pressing. This situation puts the Coast Guard at risk of cost overruns and schedule slips if it turns out that what it is buying does not meet requirements. In September 2008, after conducting a full and open competition, the Coast Guard awarded an $88.2 million contract for the design and construction of a lead Fast Response Cutter. 
However, the Coast Guard does not have an approved operational requirements document or test plan for this asset. Recognizing the risks inherent in this approach, the Coast Guard developed a basic requirements document and an acquisition strategy based on procuring a proven design. These documents were reviewed and approved by the Coast Guard’s capabilities directorate, the engineering and logistics directorate, and chief of staff before the procurement began. According to a Coast Guard official, the Coast Guard intends to have an approved operational requirements document before procuring additional ships. In February 2009, the Coast Guard issued a $77.7 million task order to ICGS for a second segment of C4ISR design and development, before developing its performance requirements. Design and development costs for the first segment increased from $55.5 million to $141.3 million. According to Coast Guard officials, this increase was due in part to the structure of the ICGS contract under which the Coast Guard lacked visibility into the software development processes and requirements. Furthermore, ICGS’s C4ISR solution for the Deepwater Program contains proprietary software. The Coast Guard has acquired data rights to the software and, according to Coast Guard officials, has determined that the capabilities it is buying meet Coast Guard technical standards for maintenance, logistics, and interoperability. Since the establishment of the $24.2 billion baseline for the Deepwater program in 2007, the anticipated cost, schedules, and capabilities of many of the Deepwater assets have changed, in part due to the Coast Guard’s increased insight into what it is buying. The purpose of the 2007 baseline was to establish cost, schedule, and operational requirements for the Deepwater system as a whole; these were then allocated to the major assets. 
Coast Guard officials have stated that this baseline reflected not a traditional cost estimate but rather the anticipated contract costs as determined by ICGS. Furthermore, the Coast Guard lacked insight into how ICGS arrived at some of the costs for Deepwater assets. As the Coast Guard has assumed greater responsibility for management of the Deepwater Program, it has begun to improve its understanding of costs by establishing new baselines for individual assets based on its own cost estimates. These baselines begin at the asset level and are developed by Coast Guard project managers, validated by a separate office within the acquisition branch and, in most cases, are reviewed and approved by DHS. The estimates use common cost estimating procedures and assumptions, and may account for costs not previously captured. Beginning in September 2008 the Coast Guard began submitting new baselines to DHS. To date, 10 asset baselines have been submitted to DHS and 4 have been approved. These new baselines are formulated using various sources of information depending on the acquisition phase of the asset. For example, the baseline for the NSC was updated using the actual costs of material, labor, and other considerations already in effect at the shipyards. The baselines for other assets, like the MPA, were updated using independent cost estimates. As the Coast Guard approaches major milestones, such as the decision to enter low-rate initial production or begin system development, officials have stated that the cost estimates for all assets will be reassessed and revalidated. As the Coast Guard has developed its own cost baselines for Deepwater assets, it has become apparent that some of the assets it is procuring will likely cost more than anticipated. 
While the Coast Guard is still in the process of communicating the effect and origin of these cost issues to DHS, information available to date for assets shows that the total cost of the program will likely exceed $24.2 billion, with potential cost growth of approximately $2.1 billion through the life of the Deepwater Program. As more baselines are approved by DHS, further cost growth may become apparent. Table 2 provides the estimates of asset costs available as of April 2009. It does not reflect the roughly $3.6 billion in other Deepwater costs, such as program management, that the Coast Guard states do not require a new baseline. The effort by the Coast Guard to develop new baselines provides not only a better understanding of the costs of the Deepwater assets, but also insight into the drivers of any cost growth. For example, the new NSC baseline attributes a $1.3 billion rise in cost to a range of factors, from the additional costs to correct fatigue issues on the first three cutters to the rise in commodity and labor prices. The additional $517 million needed to procure all 36 MPAs is attributed primarily to items that were not accounted for in the previous baseline, including a simulator to train aircrews, facility improvements, and adequate spare parts. By understanding the reasons for cost growth, the Coast Guard may be able to better anticipate and control costs in the future. The Coast Guard has structured some of the new baselines to show how cost growth could be controlled by making trade-offs in asset quantities and/or capabilities. For example, the new MPA baseline provides cost increments that show the acquisition may be able to remain within its initial allotment of the overall $24.2 billion if 8 fewer aircraft are acquired. Coast Guard officials have stated that other baselines currently under review by DHS present similar cost increments. 
This information, if combined with data from the fleet mix study to show the effect of quantity or capability reductions on the system-of-systems as a whole, offers a unique opportunity to the Coast Guard for serious discussions of trade-offs. The Coast Guard’s reevaluation of baselines has also changed its understanding of the delivery schedules and capabilities of Deepwater assets. According to the new baselines, a number of assets will be available for operational use later than originally anticipated. This includes a 12-month delay for the NSC to reach its initial operating capability and an 18-month delay for the MPA. Coast Guard officials stated that the restructuring of the unmanned aircraft and small boat projects has delayed the deployment of these assets with the NSC and affects the ship’s anticipated capabilities in the near term. We plan to report later this summer on the operational effect of the delays in the NSC project. While the Coast Guard plans to update its annual budget requests with asset-based cost information, the current structure of its budget submission could limit Congress’s understanding of details at the asset level. The budget submission presents total acquisition costs only at the overall Deepwater system level ($24.2 billion), and the description of funding for individual assets does not include key information such as costs beyond the current 5-year capital investment plan, i.e., life-cycle costs, or the total quantities of assets planned. For example, while the justification of the NSC request includes an account of the capabilities the asset is expected to provide, how these capabilities link to the Coast Guard’s missions, and details on what activities past appropriations have funded, it does not include estimates of total program cost, future award or delivery dates of remaining assets, or even the total number of assets to be procured. 
Our past work has emphasized that one of the keys to a successful capital acquisition, such as the multibillion-dollar ships and aircraft the Coast Guard is procuring, is budget submissions that clearly communicate needs. A key part of this communication is to provide decision makers with information about cost estimates, risks, and the scope of a planned project before committing substantial resources to it. Good budgeting also requires that the full costs of a project be considered upfront when decisions are made. Other agencies within the federal government that acquire systems similar to those of the Coast Guard capture these elements in justifications of their requests. To illustrate, table 3 provides a comparison of the information found in the NSC budget justification with that used by the Navy for its shipbuilding programs. While the Coast Guard does include some of this information in its asset-level Quarterly Acquisition Reports to Congress and the Deepwater Program Expenditure Report, these documents are provided only to the appropriations committees, and the information is restricted due to acquisition-sensitive material. One reason the Coast Guard originally sought a systems integrator was that it recognized that it lacked the experience and depth in its workforce to manage the acquisition internally. Now that the Coast Guard has taken control of the Deepwater acquisition, it acknowledges that it faces challenges in hiring and retaining qualified acquisition personnel and that this situation poses a risk to the successful execution of its acquisition programs. According to human capital officials in the acquisition directorate, as of April 2009, the acquisition branch had funding for 855 military and civilian personnel and had filled 717 of these positions—leaving 16 percent unfilled. 
The Coast Guard has identified some of these unfilled positions as core to the acquisition workforce, such as contracting officers and specialists, program management support staff, and engineering and technical specialists. Even as it attempts to fill its current vacancies, the Coast Guard plans to increase the size of its acquisition workforce significantly by the end of fiscal year 2011. To supplement and enhance the use of its internal expertise, the Coast Guard has increased its use of third-party, independent experts outside of both the Coast Guard and existing Deepwater contractors. For example, a number of organizations within the Navy provided independent views and expertise on a wide range of issues, including testing and safety. In addition, the Coast Guard will use the American Bureau of Shipping, an independent organization that establishes and applies standards for the design and construction of ships and other marine equipment, as an advisor and independent reviewer on the design and construction of the Fast Response Cutter. The Coast Guard has also begun a relationship with a university-affiliated research center to augment its expertise as it executes its fleet mix analysis. In addition to third-party experts, the Coast Guard has been increasing its use of support contractors. Currently, there are approximately 200 contractor employees in support of the acquisition directorate—representing 24 percent of its total acquisition workforce—a number that has steadily increased in recent years. These contractors are performing a variety of services—some of which support functions the Coast Guard has identified as core to the government acquisition workforce—including project management support, engineering, contract administration, and business analysis and management. While support contractors can provide a variety of essential services, their use must be carefully overseen to ensure that they do not perform inherently governmental roles. 
The Coast Guard acknowledges this risk and is monitoring its use of support contractors to properly identify the functions they perform, as well as developing a policy to define what is and what is not inherently governmental. While the Coast Guard may be hard-pressed to fill the government acquisition positions it has identified both now and in the future, it has made progress in identifying the broader challenges it faces and is working to mitigate them. The Coast Guard has updated two documents key to this effort, the Blueprint for Acquisition Reform, now in its third iteration, and the Acquisition Human Capital Strategic Plan, which is in its second iteration. Each document identifies challenges the Coast Guard faces in developing and managing its acquisition workforce and outlines initiatives and policies to meet these challenges. For example, the Acquisition Human Capital Strategic Plan lays out three overall challenges and outlines over a dozen strategies the Coast Guard is pursuing to address them in building and maintaining an acquisition workforce. The discussion of strategies includes status indicators and milestones to monitor progress, as well as supporting actions such as the formation of partnerships with the Defense Acquisition University and continually monitoring turnover in critical occupations. The Blueprint for Acquisition Reform supports many of these initiatives and provides deadlines for their completion. In fact, the Coast Guard has already completed a number of initiatives including achieving and maintaining Level III program manager certifications, adopting a model to assess future workforce needs, incorporating requests for additional staff into the budget cycle, initiating tracking of workforce trends and metrics, expanding use of merit-based rewards and recognitions, and initiating training on interactions and relationships with contractors. 
In conclusion, I’d like to emphasize several key points as we continue to oversee the various Coast Guard initiatives discussed today. It is important to recognize that Coast Guard leadership has made significant progress in identifying and addressing the challenges in taking on the role of systems integrator for the Deepwater Program. The Coast Guard is continuing to build on this progress by starting to follow a disciplined program management approach that improves its knowledge of what is required to meet its goals. An important component of this approach is gaining realistic assessments of needed capabilities and associated costs to enable the Coast Guard and Congress to better execute decision making and oversight. The Coast Guard’s ability to build an adequate acquisition workforce is critical, and over time the right balance must be struck between numbers of government and contractor personnel. Until the Coast Guard gains a thorough understanding of what it is buying and how much it will cost, and is able to put in place the necessary workforce to manage the Deepwater Program, it will continue to face risks in carrying out this multibillion dollar acquisition. Mr. Chairman, this concludes my statement and I would be happy to respond to any questions the committee may have. For further information about this testimony, please contact John P. Hutton, Director, Acquisition and Sourcing Management, at (202) 512-4841, huttonj@gao.gov. Other individuals making key contributions to this testimony include Michele Mackin, Assistant Director; Greg Campbell; Carolynn Cavanaugh; J. Kristopher Keener; Angie Nichols-Friedman; and Sylvia Schatz. Coast Guard: Change in Course Improves Deepwater Management and Oversight, but Outcome Still Uncertain. GAO-08-745. Washington, D.C.: June 24, 2008. Coast Guard: Observations on Changes to Management and Oversight of the Deepwater Program. GAO-09-462T. Washington, D.C.: March 24, 2009. 
Status of Selected Assets of the Coast Guard’s Deepwater Program. GAO-08-270R. Washington, D.C.: March 11, 2008. Coast Guard: Deepwater Program Management Initiatives and Key Homeland Security Missions. GAO-08-531T. Washington, D.C.: March 5, 2008. Coast Guard: Status of Efforts to Improve Deepwater Program Management and Address Operational Challenges. GAO-07-575T. Washington, D.C.: March 8, 2007. Coast Guard: Status of Deepwater Fast Response Cutter Design Efforts. GAO-06-764. Washington, D.C.: June 23, 2006. Coast Guard: Changes to Deepwater Plan Appear Sound, and Program Management Has Improved, but Continued Monitoring Is Warranted. GAO-06-546. Washington, D.C.: April 28, 2006. Coast Guard: Progress Being Made on Addressing Deepwater Legacy Asset Condition Issues and Program Management, but Acquisition Challenges Remain. GAO-05-757. Washington, D.C.: July 22, 2005. Coast Guard: Preliminary Observations on the Condition of Deepwater Legacy Assets and Acquisition Management Challenges. GAO-05-651T. Washington, D.C.: June 21, 2005. Coast Guard: Deepwater Program Acquisition Schedule Update Needed. GAO-04-695. Washington, D.C.: June 14, 2004. Contract Management: Coast Guard’s Deepwater Program Needs Increased Attention to Management and Contractor Oversight. GAO-04-380. Washington, D.C.: March 9, 2004. Coast Guard: Actions Needed to Mitigate Deepwater Project Risks. GAO-01-659T. Washington, D.C.: May 3, 2001. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Deepwater Program is intended to recapitalize the Coast Guard's fleet and includes efforts to build or modernize five classes each of ships and aircraft, and procure other key capabilities. In 2002, the Coast Guard contracted with Integrated Coast Guard Systems (ICGS) to manage the acquisition as systems integrator. After the program experienced a series of failures, the Coast Guard announced in April 2007 that it would take over the lead role, with future work on individual assets to be potentially bid competitively outside of the existing contract. A program baseline of $24.2 billion was set as well. In June 2008, GAO reported on the new approach and concluded that while these steps were beneficial, continued oversight and improvement was necessary. The Coast Guard has taken actions to address the recommendations in that report. This testimony updates key issues from prior work: (1) Coast Guard program management at the overall Deepwater Program and asset levels; (2) how cost, schedules, and capabilities have changed from the 2007 baseline and how well costs are communicated to Congress; and (3) Coast Guard efforts to manage and build its acquisition workforce. GAO reviewed Coast Guard acquisition program baselines, human capital plans and other documents, and interviewed officials. For information not previously reported, GAO obtained Coast Guard views. The Coast Guard generally concurred with the findings. The Coast Guard has assumed the role of systems integrator for the overall Deepwater Program by reducing the scope of work on contract with ICGS and assigning these functions to Coast Guard stakeholders. As part of its systems integration responsibilities, the Coast Guard has undertaken a fundamental reassessment of the capabilities, number, and mix of assets it needs; according to an official, it expects to complete this analysis by the summer of 2009. 
At the individual Deepwater asset level, the Coast Guard has improved and begun to apply the disciplined management process found in its Major Systems Acquisition Manual, but did not meet its goal of complete adherence to this process for all Deepwater assets by the second quarter of fiscal year 2009. For example, key acquisition management activities--such as operational requirements documents and test plans--are not in place for assets with contracts recently awarded or in production, placing the Coast Guard at risk of cost overruns or schedule slips. Due in part to the Coast Guard's increased insight into what it is buying, the anticipated cost, schedules, and capabilities of many of the Deepwater assets have changed since the establishment of the $24.2 billion baseline in 2007. Coast Guard officials have stated that this baseline reflected not a traditional cost estimate but rather the anticipated contract costs as determined by ICGS. As the Coast Guard has developed its own cost baselines for some assets, it has become apparent that some of the assets it is procuring will likely cost more than anticipated. Information to date shows that the total cost of the program may grow by $2.1 billion. As more cost baselines are developed and approved, further cost growth may become apparent. In addition, while the Coast Guard plans to update its annual budget requests with asset-based cost information, the current structure of its budget submission to Congress does not include certain details at the asset level, such as estimates of total costs and total numbers to be procured. The Coast Guard's reevaluation of baselines has also changed its understanding of the delivery schedules and capabilities of Deepwater assets. One reason the Coast Guard sought a systems integrator from outside the Coast Guard was that it recognized that it lacked the experience and depth in its workforce to manage the acquisition internally. 
The Coast Guard acknowledges that it still faces challenges in hiring and retaining qualified acquisition personnel and that this situation poses a risk to the successful execution of its acquisition programs. According to human capital officials in the acquisition directorate, as of April 2009, the acquisition branch had 16 percent of positions unfilled, including key jobs such as contracting officers and systems engineers. Even as it attempts to fill its current vacancies, the Coast Guard plans to increase the size of its acquisition workforce significantly by the end of fiscal year 2011. While the Coast Guard may be hard-pressed to fill these positions, it has made progress in identifying the broader challenges it faces and is working to mitigate them. In the meantime, the Coast Guard has been increasing its use of support contractors.
Section 9 of the Communications Act authorizes FCC to collect regulatory fees annually. These regulatory fees do not include application fees or revenue from spectrum auctions. The statute directs FCC to do the following:

- Assess and collect regulatory fees to recover the costs of FCC’s regulatory activities—defined by section 9 as consisting of its enforcement, policy and rulemaking, user information, and international activities—in the amount required in FCC’s appropriation acts.

- Derive these fees by determining the full-time equivalent (FTE) number of employees performing these regulatory activities in three named bureaus and other FCC offices, adjusted to take into account various factors that are reasonably related to the benefits to the fee payors, including factors determined by FCC to be in the public interest. (According to FCC officials, the three bureaus named in section 9—the Private Radio, Mass Media, and Common Carrier Bureaus—have since been reorganized and renamed as the Wireless Telecommunications Bureau, the Media Bureau, the Wireline Competition Bureau, and the International Bureau.)

- Make mandatory adjustments. FCC maintains and is required annually to revise a schedule of regulatory fees to reflect proportionate increases or decreases in the amount of the appropriation to be recovered as well as changes in the number of licensees or other units required to pay the fees assessed.

- Make permitted amendments as necessary. FCC is required to amend the schedule if FCC determines that the schedule must be amended to comply with the statute’s requirement that the fees be derived by determining FTEs (as outlined above), adjusted to take into account factors reasonably related to the benefits the fee payor receives from FCC regulation, among other things.

In recent years, Congress has included language in FCC’s annual appropriation act setting specific percentages of the appropriation FCC is to offset with collected regulatory fees.
This percentage has risen from 38 percent in 1994, when section 9 first went into effect, to over 99 percent starting in 2004, to 100 percent starting in 2009. In fiscal year 2011, FCC’s appropriation, and hence the total in regulatory fees it was to use as offsets, was about $336 million. According to FCC officials, this appropriation funded about 1,556 FTEs in FCC’s 11 offices and 7 bureaus. The 7 bureaus include the (1) Consumer and Governmental Affairs, (2) Enforcement, (3) International, (4) Media, (5) Public Safety and Homeland Security, (6) Wireless Telecommunications, and (7) Wireline Competition Bureaus. The five industry sectors in which FCC has typically grouped regulatory fee payors are: (1) wireline services, (2) wireless services, (3) cable services, (4) broadcast services, and (5) international services. At times, FCC has combined cable and broadcast into an industry sector it calls media—aligning the four industry sectors with four FCC bureaus—wireline with the Wireline Competition Bureau, wireless with the Wireless Telecommunications Bureau, media with the Media Bureau, and international with the International Bureau. As shown in Table 1, within most of these industry sectors are a number of fee categories. Each year, FCC sets a rate for each fee category that is used to calculate how much each company within that category owes in regulatory fees. FCC assesses this rate on various bases. For example, the rate for wireline telephone companies is set per revenue dollar (for those revenues subject to fees); the rate for wireless telephone companies and cable television operators is based on the number of subscribers; the rate for geostationary orbit space stations, including operators of direct-broadcast satellite television, is based on the number of satellites; and broadcast television and radio licensees pay a flat fee that is set based on market reach characteristics, such as the size of the market area or population served.
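To make these different assessment bases concrete, the following is a minimal sketch, not FCC's actual methodology. The wireline and wireless rates are the fiscal year 2011 figures cited later in this report ($0.00375 per assessable revenue dollar and $0.17 per subscriber); the per-satellite rate and all company figures are hypothetical.

```python
# Illustrative sketch of the assessment bases described above.
# Not FCC's actual methodology; the per-satellite rate and all
# company figures are hypothetical.

def wireline_fee(assessable_revenue, rate_per_dollar):
    """Wireline rates are set per dollar of assessable revenue."""
    return assessable_revenue * rate_per_dollar

def subscriber_fee(subscribers, rate_per_subscriber):
    """Wireless and cable rates are set per subscriber."""
    return subscribers * rate_per_subscriber

def satellite_fee(satellites, rate_per_satellite):
    """Geostationary space station rates are set per satellite."""
    return satellites * rate_per_satellite

# Fiscal year 2011 wireline and wireless rates cited in this report;
# revenue and subscriber counts are hypothetical.
print(wireline_fee(10_000_000, 0.00375))  # about $37,500 on $10 million of revenue
print(subscriber_fee(200_000, 0.17))      # about $34,000 for 200,000 subscribers
print(satellite_fee(3, 100_000))          # hypothetical per-satellite rate
```

As the next paragraph notes, an entity offering services in more than one category would owe the sum of the applicable fees.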
Entities that provide services in more than one fee category—such as a company that offers wireline and wireless services— must pay regulatory fees for each fee category commensurate with the service provided. Each year, FCC issues a Notice of Proposed Rulemaking (NPRM) in which it proposes how it will assess fees by industry sector and fee category for that fiscal year. FCC receives comments on the NPRM and may make adjustments before issuing a Report and Order establishing assessment rates for each year’s regulatory fees. FCC also establishes a due date for payment. Entities that are late in paying their assessed fees are assessed an additional one-time 25 percent statutory penalty, and FCC will take no action on any applications or other requests for benefits from such an entity until its past due assessment is paid. According to FCC officials, while the timing of this process varies somewhat from year to year, the assessment is collected in time for FCC to process payment and forward it to the Department of Treasury by the end of the fiscal year on September 30. For example, in fiscal year 2011, the NPRM was issued on May 3, 2011, and comments were accepted until June 9, 2011. The Report and Order was released on July 22, 2011, and the assessed fees were due on September 16, 2011. From fiscal year 1998 through its most recent assessment for fiscal year 2011, FCC has based its division of regulatory fees among industry sectors and fee categories on its fiscal year 1998 division of FTEs among fee categories. FCC determined this fiscal year 1998 division of FTEs among fee categories through a cost-accounting system that FCC abandoned in fiscal year 1999 because of problems described in greater detail below. 
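The late-payment rule described above reduces to a simple calculation; a minimal sketch, with a hypothetical assessed amount:

```python
# Sketch of the late-payment rule described above: a one-time 25 percent
# statutory penalty is added to a past-due assessment. The assessed
# amount is hypothetical.

LATE_PENALTY_RATE = 0.25  # one-time statutory penalty for late payment

def amount_due(assessed_fee, paid_on_time):
    """Return the total owed: late payors owe the fee plus a 25% penalty."""
    if paid_on_time:
        return assessed_fee
    return assessed_fee * (1 + LATE_PENALTY_RATE)

print(amount_due(10_000, paid_on_time=True))   # 10000
print(amount_due(10_000, paid_on_time=False))  # 12500.0
```

As the text notes, FCC also withholds action on applications and other requests from such an entity until the past-due amount is paid.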
In subsequent years, FCC continued to use the same basic division of fees among fee categories established in fiscal year 1998, with some adjustments to the rates of certain fee categories, based on (for example) concerns about overburdening particular industries. These adjustments were not based on any FTE analysis and have had relatively minor effects on the division of regulatory fees by industry sector that FCC established in fiscal year 1998, as shown in figure 2. In fiscal year 1994, when FCC first implemented the Communications Act regulatory fee statute, FCC used the fee schedule Congress had included as a starting point in the statute. That schedule, which was developed based on information provided to Congress by FCC, set annual regulatory fees for 46 fee categories that FCC was to follow until FCC amended the schedule. These 46 fee categories were assessed on different bases, including a flat-fee basis, a per-subscriber basis, a per-antenna basis, and others. While FCC has made changes to this fee schedule over the years, including adding and altering fee categories, the basic elements of its structure—established based on the telecommunications industry as it existed in 1994, and in the context of directing FCC to collect fees to cover 38 percent of its appropriation instead of the 100 percent that FCC has been directed to collect since fiscal year 2009—have continued to guide FCC’s regulatory fee assessment. The Communications Act requires FCC to develop accounting systems necessary for the agency to determine whether and how the fee schedule should be adjusted to comply with the statute’s requirement that FCC base its regulatory fees on the number of FTEs performing regulatory tasks, among other things. The act does not specify that the system should be a cost accounting system—FCC was free to interpret this requirement according to its perceived needs.
Nevertheless, in its Reports and Orders for the 2 years following 1994, FCC discussed its plans to develop a cost-accounting system to guide its division of fees among fee categories. FCC implemented this cost-accounting system, which relied on employees’ coding of time and attendance report entries, for fiscal years 1997 and 1998, using it as the basis for dividing fees among fee categories. At the time, FCC stated that its purpose in using a cost-accounting system based on employees’ time card entries was to ensure that fee collections from each category of service approximated, to the extent possible, FCC’s actual costs to regulate each fee category. (In the Matter of Assessment and Collection of Regulatory Fees for Fiscal Year 2004, 19 FCC Rcd. 11665.) However, FCC abandoned this system in fiscal year 1999 because it produced too much fluctuation in fees from year to year, the result of a combination of annual changes in workload, employee errors in completing time sheets, and various other factors. FCC found that over the 1997 to 1998 period, the rate assessed to all entities in a fee category could increase by more than 25 percent from the prior year—beyond any increase because of increases in the total amount in regulatory fees FCC was required to collect. FCC officials stated that these fluctuations were especially problematic for small service providers that could least absorb unpredictable increases in fees. According to FCC officials, FCC has continued to rely on the 1998 division of regulatory fees as the basis of its fee division through fiscal year 2011. It has done so in spite of the problems FCC identified with the system and even though this approach put FCC at risk of dividing the regulatory fee burden among entities in different industries based on obsolete data.
FCC officials stated that while the statute requires FCC to amend its regulatory fees if FCC determines such amendment is necessary to comply with the FTE-based requirement, among other things, the statute does not prescribe a specific time at which FCC must make such a determination. (47 U.S.C. §159(b)(3) states in pertinent part that “The Commission shall, by regulation, amend the Schedule of Regulatory Fees if the Commission determines that the Schedule requires amendment to comply with the requirements of paragraph (1)(A).”) Furthermore, according to FCC officials, while FCC has maintained information on how its FTEs are distributed among the four core bureaus—which generally track with the four industry sectors—FCC does not have information on how its current FTEs are divided among the fee categories in the current fee schedule. Federal guidance on user fees, among other things, emphasizes the importance of regularly updating analyses to ensure that fees are set based on relevant information. The major changes that have occurred in the telecommunications industry over the past 14 years dramatically increase the likelihood that FCC’s current division of fees among fee categories has become obsolete. In 2008, FCC stated in a Further Notice of Proposed Rulemaking that major industry changes since 1994 included the significant increase of wireless, broadband, and voice over Internet protocol (“VoIP”) services, and discussed the fact that FCC itself had reorganized several times to reflect industry changes. FCC acknowledged that there could be several areas in which the regulatory fee process could be revised and improved to better reflect the current industry. Two former FCC commissioners told us that the significant increase in broadband and wireless services, the increasing convergence of telecommunications industries, and the transition to digital television are major changes that have occurred since fiscal year 1998 that have affected FCC’s workload and priorities.
Changes in FCC’s estimates of subscribers, revenues, or other bases used to set the annual regulatory fee rates for different fee categories also indicate major changes in the balance of telecommunications industries from fiscal years 1998 to 2011. According to FCC’s estimates (see table 2), measures of some industries grew by over 50 percent— including the wireless telephone industry, for which the number of subscribers grew by over 400 percent—while measures of other industries declined by over 40 percent, including VHF television stations, for which the number of stations declined by 48 percent. In comparison to these dramatic shifts, relatively small changes in the percent of the total regulatory fees expected to be paid by these industries have occurred. For example, while the wireline telephone industry’s estimated revenues on which fees are assessed declined by 44 percent from fiscal year 1998 to fiscal year 2011, the percentage of total regulatory fees this industry is expected to pay declined by 4 percentage points, from 48 percent to 44 percent of total fees. And while the wireless telephone industry’s estimated number of subscribers grew 437 percent during this time period, the percentage of the total regulatory fees the cell phone industry is expected to pay grew only 5 percentage points—from 10 to 15 percent of the total regulatory fees. According to FCC officials, there is not always a straightforward relationship between growth in the number of subscribers, revenues, or other basis used to determine the fee rate of a fee category and the amount of work FCC performs related to that fee category, and thus these shifting numbers do not offer a clear guide as to how or even the extent to which the division of FCC’s regulatory fees among industry sectors should be realigned. 
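The comparison above mixes two measures that are easy to conflate: the percent change in an industry metric and the percentage-point shift in that industry's share of total fees. A small sketch using the wireless figures cited above (fee share moving from 10 to 15 percent of the total):

```python
# Distinguishing percent change from percentage-point change, using the
# wireless fee-share figures cited above (10% to 15% of total fees).

def percent_change(old, new):
    """Relative change, in percent."""
    return (new - old) / old * 100

old_share, new_share = 10, 15  # percent of total regulatory fees

point_shift = new_share - old_share                    # 5 percentage points
relative_shift = percent_change(old_share, new_share)  # 50.0 percent

print(point_shift)     # 5
print(relative_shift)  # 50.0
```

The contrast the paragraph draws is between the 5-percentage-point rise in the wireless fee share and the 437 percent growth in the wireless subscriber base over the same period.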
Nevertheless, they reinforce the magnitude of the changes that have occurred, and underscore the likelihood that FCC’s division of fees among fee categories may no longer correlate to its current division of FTEs. (See table 2.) FCC’s Office of the Managing Director has published some information that further suggests that FCC is basing its division of regulatory fees among fee categories on data that do not correlate with industry trends and FCC’s current workload. In fiscal year 2008, FCC issued a Further Notice of Proposed Rulemaking (FNPRM) specifically to consider reforms to its regulatory fee process. In a separate public notice issued after FCC adopted the 2008 FNPRM, the Office of the Managing Director provided some updated information on FCC’s costs by core bureau. According to FCC officials, the core bureaus correlate to the four industry sectors of wireless telecommunications, wireline telecommunications, media, and international. This information demonstrated substantial misalignment between the division of regulatory fees by industry sector as presented in FCC’s fiscal year 2008 FNPRM and FCC’s costs by bureau in the Wireless, Wireline, and International Bureaus as presented in the public notice, as shown in figure 3—although FCC officials did not include any information at the more granular level of fee category. For example, in fiscal year 2008, the wireless industry paid about 17 percent of the regulatory fees while the Wireless Telecommunications Bureau incurred about 27 percent of FCC’s total costs. In contrast, the wireline industry paid about 47 percent of the total fees while the Wireline Competition Bureau incurred about 23 percent of FCC’s total costs. FCC did not comprehensively reform its process as a result of this FNPRM. FCC’s inaction in updating its FTE analysis is inconsistent with federal guidance on user fees. 
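The misalignment in figure 3 can be summarized as the gap between each sector's share of fees paid and the corresponding bureau's share of FCC's total costs. A sketch using the fiscal year 2008 figures cited above; as the report notes elsewhere, bureau cost shares are only a rough proxy for the fee-category detail that is missing:

```python
# Gap between share of regulatory fees paid and share of FCC costs
# incurred by the corresponding bureau, using the fiscal year 2008
# figures cited in the text (in percent).

fy2008 = {
    # sector: (share of fees paid, share of FCC costs)
    "wireless": (17, 27),
    "wireline": (47, 23),
}

gaps = {sector: fees - costs for sector, (fees, costs) in fy2008.items()}

for sector, gap in gaps.items():
    relation = "above" if gap > 0 else "below"
    print(f"{sector}: fee share {relation} bureau cost share "
          f"by {abs(gap)} percentage points")
```

On this reading, the wireline sector's fee share sat well above its bureau's cost share while the wireless sector's sat below it, which is the cross-subsidy concern developed in the following paragraphs.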
We recognize that federal guidance on user fees for the most part assumes that the fees are to be set based on a cost-recovery scheme, which differs from the Communications Act’s requirement that FCC base its regulatory fees on FTEs, among other things. FTEs—the basic measure of levels of employment used in the federal budget—are not the same as costs. FTE information is often readily available and can be a useful proxy for cost, but FTE information does not necessarily reflect total cost because, for example, it would neither distinguish between higher and lower cost FTEs, nor would it include other costs, such as contractors, training, equipment, or facilities’ costs. Nevertheless, many of the general principles of federal user fee guidance remain relevant in considering FCC’s FTE analysis. First, federal guidance emphasizes the importance of reviewing fees regularly to check the extent to which they are properly aligned. For example, OMB Circular No. A-25, which, among other things, provides guidance to agencies regarding their assessment of user charges under other statutes, directs agencies that have user fees to review the user fees biennially in order to assure, among other things, that existing charges are adjusted to reflect unanticipated changes in costs or market values. The fact that the Communications Act directs FCC to base its fees on FTEs does not negate the applicability of the guidance regarding the regularity with which the basis of the fees (i.e., FTEs) should be reviewed. The reason that regular review is part of the guidance is to assure that fees are adjusted to reflect changes that may have occurred over time in the agency’s distribution of work among fee categories—which could be measured by costs or FTEs. Second, according to federal financial-accounting standards, cost information should be reported in a timely manner and on a regular basis and should be reliable and useful in making decisions.
This standard does not require the use of a particular type of costing system or methodology, stating that agency and program management is in the best position to select a type of costing system to meet its needs. However, the standard requires that a methodology, once adopted, be used consistently in order to provide results that can be compared from year to year—with improvements and refinements made as necessary. In FCC’s case, given the statutory framework of its regulatory fee program, this principle pertains to FTEs rather than costs. Given the problems FCC encountered with using its cost-accounting system to analyze FTEs by fee categories in fiscal year 1998, these standards would suggest that FCC could have considered alternate methodologies—or improvements to its cost-accounting system—to address the problems described. However, FCC’s decision to freeze its division of regulatory fees by fee category on fiscal year 1998 data that came from the cost-accounting system FCC abandoned, rather than addressing the problems or choosing a different methodology, is inconsistent with the goal of such standards. This decision, over time, has resulted in FCC not having FTE information that is timely, reliable, or comparable from year to year to guide its decisions on how to divide regulatory fees. In prior work, we have stated that agencies that do not review and adjust fees regularly run the risk of undercharging or overcharging users, raising equity concerns. Moreover, because FCC is directed in its annual appropriation acts to collect a certain amount of money in regulatory fees each year, if its division of fees among fee categories is misaligned with its FTEs by fee category, then some entities are most likely overpaying, essentially cross-subsidizing entities in other fee categories, which are underpaying. FCC’s regulatory fees are unlikely to ever equal the exact cost of regulating the corresponding fee category for several reasons. 
First, since FCC is required to collect 100 percent of its appropriation through regulatory fees, including funding for items that are not specifically regulatory activities—such as general overhead—the regulated industries are being assessed to pay for more than the number of FTEs required for their regulation. Second, FCC is directed by statute to base its fee assessment on FTEs, which may not represent actual regulatory costs. According to FCC officials, because it is not possible to precisely assign the costs of regulation on a service-by-service basis, and because the act requires FTE-based assessment and does not require amending the fee schedule to mirror all changes in regulatory costs, some regulated entities pay more than the direct cost of their regulation. Third, exemptions create cross subsidization, as could some other policy decisions. FCC, as required by statute, has exempted some groups of entities, such as nonprofits, from paying fees, and has at times exercised its statutory discretion by reducing the fee rates of certain fee categories when it determined that doing so would benefit the public interest. In prior work, we have pointed out that while exemptions can promote one kind of equity by factoring the users’ ability to pay into the fee-rate formula, such provisions may also increase cross-subsidies among users. We have stated that in applying exemptions, agencies may purposefully choose to set fees in such a way that cross subsidization occurs in order to promote other policy goals. However, we have also stated that generally, fees should be aligned with the costs of the activities for which the fee is collected, unless there is a policy decision not to align them. Without a current FTE analysis by fee category, it is not possible to determine the extent that cross subsidization is occurring between fee categories, or which fee categories are cross subsidizing other fee categories. 
However, any cross subsidization that is occurring not because of a decision to promote a policy goal but because the FTE analysis on which FCC bases its fees is obsolete, is not consistent with general user fee principles. According to officials in many industry associations and companies we spoke with in the wireline, wireless, cable, and international industry sectors, FCC’s regulatory fees are typically passed along to the consumer, either in a line item on the bill or bundled into the general cost of service. One potential effect of cross subsidization, therefore, is that, if entities in different fee categories are directly competing for the same customers, cross subsidization could result in competitively disadvantaging entities in one fee category over another. As discussed in the next section, some stakeholders told us that the regulatory fees are small enough that they do not have a significant financial impact on the companies that pay the fees. However, several industry stakeholders in the wireline and cable television industry sectors told us that FCC’s current regulatory fee process is competitively disadvantaging certain industries and that FCC’s use of multiple bases for setting fee rates makes it more difficult for industry stakeholders to compare the rates assessed to different fee categories—and thus more difficult to determine the extent to which the fees are fair and equitable. These views were echoed in formal comments to FCC’s regulatory fee FNPRM in 2008, when FCC last requested comments on substantial reform to its regulatory fee process. For example, in response to the 2008 FNPRM, the National Cable and Telecommunications Association (NCTA), a trade association for the U.S. cable industry, argued that FCC assesses higher regulatory fees on cable operators than it does on direct broadcast satellite television operators. 
According to the cable association, the direct broadcast satellite television industry is a direct competitor to cable, and thus its lower regulatory fee burden could give it a competitive advantage. The cable association argued that every type of multichannel video-programming distributor, including cable, telephone, and direct broadcast satellite providers of multichannel video service, should pay the same regulatory fee rate in order to ensure that no entity received the competitive benefit of lower fees based solely on the technology it used. Moreover, the cable association’s staff told us that because the cable television industry’s fee rate is set on a per-subscriber basis and the direct broadcast satellite television operator industry’s fee rate is set on a per-satellite basis, it was not possible to compare the fees as stated in FCC’s published information in order to assess their fairness. For the cable association to determine how its members’ fees compared to the fees of direct broadcast satellite television operators on a per-subscriber basis, the association had to do its own analysis using company data. In its 2008 comments to the FNPRM, the cable association also suggested that all providers of voice service and multichannel video programming distributors—including cable, telephone, and direct broadcast satellite providers—should pay on a per-subscriber basis instead of the three different bases—per revenue dollar, per subscriber, and per satellite—used today. In another example, the Independent Telephone and Telecommunications Alliance (ITTA), which represents a number of mid-size wireline telephone companies, argued that under FCC’s regulatory fee process, wireline companies had higher per-subscriber fees than wireless companies.
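The comparison problem the cable association described arises because the two industries are assessed on different bases. The sketch below shows the kind of normalization to a common per-subscriber figure that, as the association noted, requires company data; every number here is hypothetical.

```python
# Normalizing fees assessed on different bases to a common per-subscriber
# figure so they can be compared. All figures are hypothetical.

def per_subscriber_rate(total_annual_fee, subscribers):
    """Annual fee burden expressed per subscriber."""
    return total_annual_fee / subscribers

# Cable is already assessed per subscriber, so its rate is published directly.
cable_rate = 1.00  # hypothetical dollars per subscriber per year

# Direct broadcast satellite is assessed per satellite, so a per-subscriber
# comparison requires company data on fleet size and subscribership.
dbs_fee_per_satellite = 150_000  # hypothetical
dbs_satellites = 4               # hypothetical
dbs_subscribers = 10_000_000     # hypothetical

dbs_rate = per_subscriber_rate(dbs_fee_per_satellite * dbs_satellites,
                               dbs_subscribers)
print(dbs_rate)  # dollars per subscriber under these assumptions
```

Under these made-up figures the satellite operator's burden works out to a few cents per subscriber, which illustrates why the association argued the published per-satellite rate alone does not allow a fairness comparison.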
ITTA argued that this higher per-subscriber rate was not justified because, due to the convergence among technologies since 1994, many of FCC’s expenditures related to telecommunications issues now related equally to wireline and wireless providers. According to ITTA, the effect of the different fee rates assessed to wireline and wireless telephone providers was that providers of similar voice services—and their customers—assumed dissimilar responsibility in bearing FCC’s regulatory costs. ITTA called for both wireline and wireless providers’ regulatory fees to be assessed on the basis of revenue, instead of the current situation, in which wireline companies pay fees based on revenue while wireless companies pay fees based on subscribership. Interestingly, in fiscal year 1994, FCC assessed the fees of both wireline and wireless telephone entities on the basis of subscribers, as put forth in the fee schedule in the Communications Act. For fiscal year 1995, FCC amended the schedule by, among other things, changing its basis for assessing regulatory fees on the wireline telephone industry from a subscriber to a revenue basis. In making this change, FCC stated in the Report and Order that a revenue-based methodology would equitably distribute the fee requirement in a competitively neutral manner, and that it was FCC’s intention to consider changing wireless carriers’ fees to a revenue basis in future years. However, FCC has not done so, although wireless providers report the same revenue information to FCC that wireline providers do. In addition, one commenter to a recent NPRM suggested that FCC use revenue as the basis for assessing regulatory fees on media fee categories. 
According to FCC officials, because FCC does not currently require industries in the media fee categories to report any revenue information to FCC, in order for FCC to assess media companies on the basis of revenue, FCC would have to rely on the honor system in determining entities’ fee obligations, or establish new reporting requirements, which would be burdensome to FCC and industry. FCC did not summarize or comment on the proposals submitted by the cable association and ITTA to the fiscal year 2008 FNPRM, even though ITTA re-submitted its proposal in response to the fiscal year 2009 NPRM. Instead, FCC exercised its administrative discretion to resolve all the outstanding matters stemming from the FNPRM at a later time in a separate Report and Order. More than 3 years later, no separate Report and Order has been issued addressing these industry associations’ comments. According to NCTA and ITTA officials, the associations stopped submitting formal comments to FCC because FCC’s lack of responsiveness discouraged them from doing so—but both associations continue to see the current regulatory fee assessment as not based on any valid FTE analysis and as causing competitive disadvantage to their industry. Most companies we spoke with stated that FCC’s regulatory fees have little to no direct financial impact on the company, given the relatively small size of the fees—for example, wireline telephone companies were to pay $0.00375 per assessable revenue dollar in fiscal year 2011, while wireless telephone companies were to pay $0.17 per subscriber. However, officials at the National Association of Broadcasters stated that the payment of regulatory fees is a bigger issue for small stations. These officials stated that because consumers do not pay directly for broadcast radio or television, broadcasting entities cannot pass regulatory fees on to consumers but must incorporate the fee payment into operating costs to be paid with general operating revenue.
The National Association of Broadcasters and one broadcast company we spoke with stated that at a time when some broadcasting companies are laying off employees because of financial difficulties, FCC’s regulatory fees may equal the cost of one or more employees that the company could not afford to keep because of the regulatory fees. This potential impact on companies underscores the importance that FCC assess regulatory fees on a fair and equitable basis—and that it have updated information on FTEs with which to do so. The effect of regulatory fees on consumers is difficult to assess, in part because of the relatively low cost of the fees. For example, if a wireless telephone company passed its fiscal year 2011 regulatory fee directly on to consumers, the fee would have increased the bill of each consumer by $0.17 for the year. On the other hand, representatives of a wireline telephone company we spoke with stated that many of their customers are rural, low income, elderly people who are affected by any increase in their phone bill caused by regulatory fees. According to FCC officials, the agency has not revised its assessment of fees among fee categories since fiscal year 1998 in part because it is difficult to propose and implement reforms given its need to collect regulatory fees by the end of each fiscal year. In addition, FCC officials stated that because the agency had received only a limited number of comments to its 2008 FNPRM, FCC had decided not to undertake major reform at that time. However, as described above, federal guidance on user fees recommends that agencies review their fees biennially— including the costs that the fees are reimbursing. 
Moreover, by not periodically analyzing FTEs by fee category and adjusting its division of regulatory fees based on this analysis, FCC may have put itself into a situation where adjusting regulatory fees based on an updated FTE analysis would require it to manage large swings in fees for some fee categories. For example, we found that when another agency waited 9 years before performing a review of its cost-based fees, the result was that the average fee increased by 86 percent, causing the new fee schedule to be widely questioned. (GAO, FCC Management: Improvements Needed in Communication, Decision-Making Processes, and Workforce Planning, GAO-10-79 (Washington, D.C.: Dec. 17, 2009).) As noted above, FCC does not know how its current FTEs are divided among the fee categories or what the outcome of such an analysis would be. In the fiscal year 2012 regulatory fee NPRM, released on May 4, 2012, FCC stated that it planned to undertake two separate NPRMs to consider reforms to the regulatory fee process. FCC stated that it would issue a Report and Order finalizing its decision on all issues raised in the reform proceedings, including new cost allocations and revised regulatory fees, in sufficient time to allow for their implementation in fiscal year 2013. On July 17, 2012, FCC released an NPRM on regulatory fee reform. As discussed in our agency comments section, this NPRM proposes some fundamental changes to FCC’s regulatory fee program that relate to many of the concerns raised in this report. FCC has not been transparent in describing its regulatory fee process in its recent annual NPRMs and Reports and Orders. This lack of transparency has resulted in uncertainty among some industry associations about FCC’s regulatory fee process; some told us that the lack of transparency has made it more difficult for them to comment or provide input on FCC’s regulatory fee process.
In prior work, we have reported that the regulatory process is used to provide information on fees to Congress and stakeholders and to solicit stakeholder input. Therefore, we have also reported that, when an agency has authority to adjust a fee through the regulatory process, as a first step towards improved transparency, it should make available to the public substantive information about recent and projected program costs and fee collections through its notices in the Federal Register. Relevant information includes the agency's new fee rates, descriptions of the costs of the program, projected program costs and fee collections, and the assumptions the agency used to make those projections. FCC's recent annual Reports and Orders on regulatory fees include FCC's fee rates, along with the total FCC is required to collect as directed in its appropriations act and how much it expects to collect from each fee category. However, since FCC has not performed any current FTE analysis, there is no discussion of FCC's current FTEs or costs related to each fee category. Moreover, FCC does not clearly explain in any of the Reports and Orders after fiscal year 2002 that the division of regulatory fees among fee categories is based on a fiscal year 1998 FTE analysis that was never updated. This lack of information in FCC's regulatory-fee-related NPRMs and Reports and Orders has limited the ability of industry stakeholders to understand exactly how FCC has been determining its assessment of regulatory fees in recent years, and may have limited stakeholders' ability to effectively provide input to this process. Another area where FCC has not been transparent is in describing the effects of its adjustments on other fee payors. Each year, FCC's regulatory-fee-related NPRMs and Reports and Orders include any proposed or actual adjustments and tables detailing the resulting regulatory fees for all payors. 
However, those tables have not explicitly shown how adjustments to the rates of certain fee categories have affected the rates of the other fee categories, or the total FCC must attempt to collect from other fee categories. Consequently, it is difficult to use FCC’s information to determine how FCC got from the previous year’s regulatory fee rates to the current year’s regulatory fee rates. For example, in the fiscal year 2010 Report and Order, FCC stated that because the revenue base upon which the wireline telephone industry’s fee rate is calculated had been decreasing for several years, FCC had determined it would best serve the public interest to set the wireline telephone industry’s fiscal year 2010 fee rate at $0.00349 per revenue dollar. In a footnote, FCC elaborated that because the wireline telephone industry’s revenue data was lower than expected, if FCC had not decided to set the wireline telephone rate at $0.00349 per revenue dollar, the rate would have increased to $0.00364 per revenue dollar. However, FCC did not explain what this change in rates translated to in terms of the amount of revenue it expected to collect in fees from the wireline telephone industry. Moreover, while FCC stated in the Report and Order that reducing the fees paid by the wireline telephone industry would increase the fees paid by licensees in other service categories, and the resulting regulatory fees are detailed in FCC’s Report and Order, FCC did not specifically show the fee increase for each regulatory fee category caused solely by this policy decision. In November 2011, FCC officials told us that this policy decision had resulted in reducing the total expected fees to be collected from the wireline telephone industry by approximately $12 million, and that FCC instead attempted to collect this $12 million by raising the rates of all the other fee categories based on the existing division of fees among fee categories. 
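The arithmetic behind this policy decision can be reconstructed from the figures FCC published. The short Python sketch below is illustrative only: the two rates and the $12 million reduction are the figures cited above, while the wireline revenue base is not a published FCC number and is inferred here from those figures.

```python
# Illustrative reconstruction of the FY2010 wireline rate decision.
# Rates and the $12 million figure come from the report text above;
# the revenue base is inferred, not an FCC-published number.

rate_set = 0.00349             # rate FCC chose, per revenue dollar
rate_unadjusted = 0.00364      # rate that would otherwise have applied
stated_reduction = 12_000_000  # stated drop in expected wireline fees ($)

# Revenue base implied by the stated reduction: roughly $80 billion.
implied_base = stated_reduction / (rate_unadjusted - rate_set)

# Collections under each rate at that base; the difference is the amount
# FCC shifted onto the other fee categories.
shifted = implied_base * rate_unadjusted - implied_base * rate_set

print(f"Implied revenue base: ${implied_base:,.0f}")
print(f"Shifted to other fee categories: ${shifted:,.0f}")
```

At the implied base of roughly $80 billion, a rate difference of $0.00015 per revenue dollar accounts for the entire $12 million redistributed to other fee categories.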
This $12 million is reflected in the regulatory fee tables set forth in FCC's Order. However, the limited information on how various adjustments affect each fee category makes it harder for industry stakeholders or other interested parties to understand the effects of FCC's current process—including the policy decisions FCC has made without any updated FTE analysis. On average, FCC collected 2 percent more each year in regulatory fees than it was required to collect in its annual appropriations acts over the past 10 fiscal years. FCC undercollected regulatory fees in 1 year—2003—and overcollected regulatory fees in 9 years. For example, it overcollected regulatory fees by 5 percent—$13 million—in fiscal year 2005. (See table 3.) According to FCC officials, FCC attempts to meet its regulatory fee target each year but is unable to ensure it will collect exactly the amount required by Congress because there are multiple variables that can affect the final amount collected. Key variables that can cause FCC to collect more or less than expected are late payments, FCC's use of preliminary data in setting fee rates, refunds, and bankruptcies. Regarding late payments, FCC counts all regulatory fee payments that arrive in a fiscal year as part of that year's regulatory fee collections, even if the assessment was incurred in a prior year. FCC officials stated that each year some entities do not pay the fees owed that year, while some entities pay fees owed from prior years. According to FCC officials, because FCC does not know in any given year exactly how much of that year's fees will go unpaid, or how much in late payments will come in from prior years, late payments can affect the total amount of regulatory fees collected for the year. We found that the percentage of FCC's total annual regulatory fee collections that was made up of late payments varied from 1 to 3 percent for fiscal years 2005 to 2011. 
FCC's use of preliminary data to set fees can also cause it to collect more or less than expected and can at times require FCC to refund companies some of their prior year's fees, which also affects the total collected. In order to charge fees based on current year data and to publish the final fee rates in the Report and Order in time for entities to pay by the end of the fiscal year, FCC must set the fee rate for some large fee categories—including wireline telephones, wireless telephones, and cable, among others—based on preliminary industry information. For example, until fiscal year 2011, FCC relied on preliminary estimates provided to FCC by wireline telephone entities to estimate the total amount of revenue dollars in the wireline telephone industry. In combination with FCC projections based on past years' collections and economic conditions, FCC set the wireline telephone fee rate based on this preliminary industry data. Wireline telephone entities determine the amount of fees they owe by multiplying the fee rate as published in FCC's annual Report and Order by their final revenue dollars, which the entities typically report after FCC has already set the rate for the fiscal year. If, in aggregate, the total final amount of revenue dollars in the industry was significantly higher or lower than the estimate FCC used to set the fee rate, FCC would collect more or less than it expected. In fiscal year 2011, FCC automated the input of annual revenue data provided by wireline providers so that it would have actual rather than estimated revenue information to use in setting regulatory fees for wireline telephone companies. According to FCC officials, this change should improve FCC's ability to predict how much total revenue wireline telephone entities will pay fees on, and therefore improve FCC's ability to set a wireline telephone rate that meets its target collection amount from that fee category. 
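The mechanics described above can be sketched in a few lines of Python. All numbers here are hypothetical, chosen only to show how a rate set on a preliminary revenue estimate produces an over- or under-collection once final reported revenue differs:

```python
# Hypothetical sketch: fee rate set on preliminary data, fees paid on final data.

target = 120_000_000      # amount FCC aims to collect from the category ($)
estimated_revenue = 40e9  # preliminary industry revenue estimate ($)
final_revenue = 38.5e9    # revenue as finally reported by the entities ($)

# Rate published in the Report and Order, per revenue dollar.
rate = target / estimated_revenue

# Each entity owes rate x its final revenue; in aggregate:
collected = rate * final_revenue

print(f"Published rate: ${rate:.5f} per revenue dollar")
print(f"Collected: ${collected:,.0f} vs. target ${target:,.0f}")
print(f"Shortfall: ${target - collected:,.0f}")
```

If final revenue had instead come in above the estimate, the same arithmetic would produce an overcollection, which is how the variables described above push total collections off target in either direction.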
Even so, wireline telephone entities can revise their final revenue numbers for an entire year after the information has been submitted. According to FCC officials, if some wireline telephone entities pay their regulatory fees based on the revenue information submitted in one fiscal year, but then revise their revenue numbers downward after the end of the fiscal year, those entities may be entitled to refunds in the following year, which can also affect FCC's ability to collect exactly the targeted amount in the next fiscal year. According to FCC officials, refunds can be sought on other grounds, too, and such filings cannot be predicted by FCC. In addition, according to FCC officials, when a licensee files for bankruptcy, FCC is an unsecured creditor and often does not receive unpaid assessments from the bankruptcy court. Therefore, bankruptcies can also affect FCC's ability to collect its target amount. Any regulatory fees collected above what FCC was directed to collect in its annual appropriations are considered excess fees. As explained earlier, since 2008, FCC's annual appropriations have prohibited the use of any excess fees from the current year or previous years without an appropriation by Congress. Prior to fiscal year 2008, FCC's annual appropriations stated that any excess regulatory fees remained available until expended. According to FCC officials, FCC obligated excess regulatory fees in fiscal years 1996 to 1998 to fund programs to help FCC with changes related to the year 2000 technology transition (sometimes referred to as Y2K), and it obligated excess regulatory fees from 2001 to 2003 in order to meet critical physical security needs in fiscal year 2004. According to FCC officials, FCC has deposited all excess fee collections into a separate account with the Department of Treasury. 
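The growth of such an account follows directly from the yearly difference between required and collected amounts. The Python sketch below uses hypothetical annual figures, not FCC's actual totals, to show how the balance builds:

```python
# Hypothetical sketch: excess fees are collections above the amount required
# by the appropriations act, accumulating year over year when unspent.

required = [290e6, 295e6, 300e6, 305e6, 310e6]   # hypothetical annual targets
collected = [296e6, 293e6, 307e6, 312e6, 318e6]  # hypothetical collections

balance = 0.0
for req, col in zip(required, collected):
    balance += col - req   # overcollections add, undercollections subtract

print(f"Cumulative excess after {len(required)} years: ${balance:,.0f}")
```

A single undercollected year (the second year above) reduces the balance but, with a tendency to overcollect, the running total still grows.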
As of fiscal year 2011, the account held approximately $66 million, which represents about 2 percent of the $2.9 billion FCC was required to collect in regulatory fees from fiscal year 2002 to 2010. FCC collected an average of $6.7 million in excess fees annually from fiscal years 2006 to 2011, and the account has steadily increased. FCC's tendency to overcollect rather than undercollect regulatory fees over the past 10 years also suggests that total excess funds will continue to increase as long as Congress does not provide for their disposition, which it has not yet done. According to FCC officials, FCC has reported to Congress and the Department of Treasury on its excess regulatory fees. However, FCC has not been fully transparent in informing industry stakeholders or others about these excess fees. FCC officials stated that FCC has kept Congress informed of the excess fees during periodic briefings with appropriators, and FCC provides an annual report to Treasury that identifies the total amount of regulatory fees it has collected for the past year, including the extent to which its collections vary from the amount FCC is required to collect. FCC also published the amount of excess fees collected in its fiscal year 2011 Annual Financial Report and its fiscal year 2013 budget estimate to Congress. However, FCC has not published the amount of excess fees collected in its NPRMs or Reports and Orders. In prior work, we have reported that the regulatory process is used to provide information on fees to stakeholders and to solicit stakeholder input. Therefore, when an agency has authority to adjust a fee through the regulatory process, it should make substantive information about recent and projected fee collections, among other things, available to the public through notices in the Federal Register. 
FCC has included projected fee collections for the current fiscal year in its NPRMs and Reports and Orders, but it has not disclosed the actual amount collected the prior year or disclosed any information on the total in excess fees collected in previous years. As a result, some industry associations we spoke with were aware that FCC had collected excess regulatory fees, but most did not know that the amount of FCC’s excess collections had grown to about $66 million. We identified alternative approaches that could be instructive as FCC considers reforms to its regulatory fee process. These alternative approaches include (1) ensuring that the division of fees among fee categories is aligned with a reasonably current measure of the division of regulatory activities among fee categories, and (2) taking specific steps to promote transparency in the regulatory fee process. In addition, we identified how these agencies are applying any excess fees. We identified these alternative approaches through examining the regulatory fee processes of five other regulatory fee-funded agencies in the U.S. and Canada: the Nuclear Regulatory Commission (NRC), Federal Energy Regulatory Commission (FERC), Farm Credit Administration (FCA), Canadian Radio-television and Telecommunications Commission (CRTC), and the Canadian Nuclear Safety Commission (CNSC). Because these agencies perform regulatory functions and recover many, if not all, of their costs through annual fees paid by regulated entities, we believe their processes may be instructive to FCC and Congress in considering reforms to FCC’s current regulatory fee process. In addition, while four of the agencies regulate different industries, CRTC regulates some of the same industries as FCC, including, according to CRTC officials, the telecommunications industry— which encompasses wireline and wireless telephone providers—and the broadcast industry—which encompasses radio, television, and cable distribution operators. 
Each of the five agencies, like FCC, has different, specific statutory authority authorizing its collection of annual regulatory fees to help fund the agency or to reimburse the Department of Treasury for its annual appropriation. FERC, for example, which has regulatory authority over the hydropower, oil pipeline, natural gas, and electric industries, derives its fee-collecting authorities from the Federal Power Act for the hydropower industry and the Omnibus Budget Reconciliation Act of 1986 for the oil, natural gas, and electricity industries. Nevertheless, we believe approaches used by these agencies may be instructive for FCC as it considers reforms to its regulatory fee process. For more information on the criteria used to select these agencies, see appendix I. As we described previously, FCC has acknowledged the need to revisit its division of fees among fee categories to reflect regulatory and staffing changes that have occurred since 1998. However, it has not yet done so. We found that NRC, CRTC, and FERC divide fees among fee categories based on current or recent data by industry sector. The other two agencies we met with either have only one fee category (FCA) or do not collect most fees through a rate assessed to a category of fee payors (CNSC). According to officials at NRC, CRTC, and FERC, each agency aligns its assessment of annual fees by industry sector with an annually or biennially updated analysis of costs by industry sector. Officials at NRC specifically stated that keeping the agency's fees aligned with annually or biennially updated costs was essential to ensuring that the fees were fair and equitable. If one industry sector receives more in services or regulatory activities from NRC in one year compared to the previous year, then that sector will pay a higher proportion of the total regulatory fees. 
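This proportional, cost-based division of fees can be expressed compactly. The sketch below uses hypothetical category names and cost figures to illustrate the approach these officials described, not any agency's actual budget:

```python
# Hypothetical sketch: each fee category's share of total collections mirrors
# its share of the fee-funded budget, per the cost-based approach described.

costs_by_category = {     # hypothetical annual costs by fee category ($)
    "category A": 880e6,
    "category B": 70e6,
    "category C": 50e6,
}
total_required = 900e6    # hypothetical total fee collection target ($)

total_cost = sum(costs_by_category.values())
fees = {cat: total_required * cost / total_cost
        for cat, cost in costs_by_category.items()}

for cat, fee in fees.items():
    print(f"{cat}: ${fee:,.0f} ({fee / total_required:.0%} of collections)")
```

Under this arithmetic, a category responsible for 88 percent of the fee-funded budget is assessed 88 percent of total collections, so a sector's fee share moves automatically with its share of the agency's regulatory work.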
NRC officials stated that they consider it part of NRC’s mission as a regulatory agency to ensure that the link between costs and fees is apparent, and officials at both NRC and CRTC told us that it is important that the regulated industries understand the rationale for the assessed fees. As stated previously, the Communications Act identifies FTEs as FCC’s basis for deriving regulatory fees. Nevertheless, the methods these three agencies use to keep their alignment of costs and fees updated may be instructive to FCC. According to NRC officials, NRC updates its cost analysis for its larger fee categories annually and its smaller fee categories biennially. The officials added that NRC’s regulatory fees are based on the proportional cost of direct and indirect services provided to an industry sector, as determined by NRC’s program offices, compared to the total fee-funded budget—and there is a direct link between the resources planned in the budget and the distribution of regulatory fees. For example, NRC officials stated that because the nuclear reactor category accounted for approximately 88 percent of the NRC fee-funded budget in fiscal year 2010, the nuclear reactor category was responsible for approximately 88 percent of the fees collected for fiscal year 2010. NRC officials told us that because they analyze costs for NRC’s larger fee categories annually and revise their division of fees accordingly by industry sector, at times an industry sector’s proportion of fees has risen or fallen compared to the previous year. However, NRC officials stated that the industries they regulate are generally aware of what work NRC plans to do related to each industry sector—in part because NRC informs industry of its plans during its budget process. CRTC also links the division of its fees by fee category to its costs for regulating each fee category, and CRTC updates its cost analysis and its fee assessment annually. 
One element of CRTC's process that may be instructive to FCC in considering reforms is that, although CRTC regulates many of the same converging industries in Canada that FCC regulates in the United States (according to CRTC officials), CRTC has only two fee categories for assessing regulatory fees: telecommunications and broadcast. Like FCC, CRTC regulates wireline telephone, wireless telephone, direct broadcast satellite and cable television operators, broadcast television, and radio. However, CRTC has one broadcast fee category that includes radio stations, television stations, and cable and direct broadcast satellite television operators. All pay the same rate on the same basis—the licensee's fee revenues for the most recently completed year. In contrast, FCC has 62 fee categories for the same broadcasting services, and different bases for different fee categories, including, among others, a flat fee for each fee category of broadcast television and radio station, a per-subscriber fee rate for cable television, and a per-satellite fee rate for direct broadcasting satellite television operators. In another example, CRTC's telecommunications fee category encompasses wireless telephone services and wireline telephone services. The rate for the telecommunications fee category is set on the same basis used to set the rate for the broadcast industry—the licensee's fee revenues for the most recently completed year. In contrast, FCC has separate fee categories for wireless telephone services and wireline telephone services—and the two fee categories pay different rates set on different bases, with the wireless telephone rate set on a per-subscriber basis and the wireline telephone rate set on a per-revenue-dollar basis. 
CRTC officials told us that having two fee categories—both with fee rates determined on the basis of revenue—makes it relatively easy for CRTC to align costs to a fee category, even given the increasing convergence of industry and the cross-cutting nature of CRTC's work. CRTC officials told us they track CRTC's direct costs according to these fee categories in CRTC's activity-based cost system annually. Because most mission-related staff are assigned to work centers aligned with either the broadcasting or the telecommunications industries, CRTC officials said it is administratively easy to track costs according to these fee categories. For staff working on cross-cutting issues related to both categories, management estimates how much time each staff member has spent on each of the two fee categories. CRTC then divides the total amount in fees it must collect between the two fee categories based on its costs associated with each fee category. Indirect costs for internal services provided to the entire agency—such as human resources, legal services, and accounting—are divided between the two fee categories consistent with the distribution of direct costs. FERC also tracks its costs by industry sector and fee category annually and then assesses fees in alignment with its costs. FERC officials told us that FERC's time and attendance system tracks the time staff spends directly on each fee category through activity codes aligned with particular fee categories. This assessment of time spent on each industry forms the basis of the assessment of fees. Similar to CRTC, indirect costs are assessed among the fee categories based on the assessment of direct costs incurred by industry sector. NRC takes specific steps, beyond FCC's provision of information on this topic, to facilitate industry and public understanding of how the agency distributes and assesses regulatory fees. 
NRC officials stated that NRC’s chief financial officer has consistently emphasized the importance of transparency in setting fees. According to NRC officials, transparency is important because the fees impact NRC’s stakeholders, and therefore stakeholders should be able to understand how the fees are derived. While both FCC and NRC publish NPRMs and Final Orders regarding each year’s fees, NRC also publishes the workpapers it has used to determine the fees and rates in its NPRMs and Final Orders to further promote transparency. These workpapers contain detailed cost data that form NRC’s basis for setting its fees for each industry sector. NRC’s website has a link to an electronic docket that contains its regulatory-fee-related NPRM, Final Order, and workpapers, such that one can see how NRC went from its detailed cost data to its final fee-setting rule. As described previously, in recent years, FCC has not included this level of detail in its NPRMs and Reports and Orders related to its regulatory fees. Moreover, in addition to providing these supporting workpapers on its website, NRC staff told us they also meet with industry stakeholders periodically to help ensure the stakeholders understand the assessment process and how the fee rates are determined. As mentioned earlier, FCC may not obligate any excess fees it receives without an appropriation from Congress. In contrast, officials at all five agencies we met with told us their agency has a form of annual adjustment or “true-up” mechanism such that any excess fees collected are either applied as an adjustment to the next year’s fees or are refunded. Four of the five agencies apply any excess fees collected toward the next year’s fee assessment, while one agency issues a refund. For example, according to NRC’s fiscal year 2011 Annual Financial Report, NRC applies collections that exceed its budget authority to offset subsequent years’ appropriations. 
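The true-up arithmetic these agencies describe is straightforward. The Python sketch below uses hypothetical figures to show the carry-forward variant used by four of the five agencies, in which an overage credits the next year's assessment:

```python
# Hypothetical sketch of a year-end "true-up": the difference between fees
# collected and actual costs is carried into the next year's assessment
# rather than accumulating in a separate account.

collected_this_year = 62_000_000        # hypothetical fees collected ($)
actual_costs = 60_500_000               # hypothetical actual annual costs ($)
next_year_estimated_costs = 63_000_000  # hypothetical costs to recover next year

excess = collected_this_year - actual_costs
next_year_assessment = next_year_estimated_costs - excess  # credit the overage

print(f"Excess to carry forward: ${excess:,}")
print(f"Next year's assessment: ${next_year_assessment:,}")
```

A shortfall would work the same way with the sign reversed, increasing the next year's assessment; the refund variant (CNSC) simply returns the excess to payors instead of crediting it.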
According to FERC officials, at year-end, FERC calculates a required subsequent year adjustment based on the difference between the amounts assessed and actual costs. CRTC officials told us that they adjust the subsequent year's assessments based on the difference between the fees collected, which are based on estimated costs, and annual expenditures. FCA officials stated that FCA also makes adjustments for overpayments in the current year to fees owed the following year. Lastly, CNSC officials told us they refund fees collected in excess of actual costs. As a result of these procedures, the fees paid to these five agencies are ultimately used to fund the regulatory agency or are refunded. The Communications Act states that FCC is to derive regulatory fees from the number of FTEs in certain bureaus performing regulatory activities, but the act does not specifically state how frequently FCC must reexamine its FTEs to assure its regulatory fees are aligned with FCC's current work priorities. FCC has relied on this lack of clarity to justify continuing to use 1998 data as the basis for its assessment of regulatory fees—in spite of the vast changes to the telecommunications industry that have occurred, including significant convergence of technologies and changes in the nature of the industries that FCC regulates. Federal user fee guidance, accounting standards, and the practices of other agencies we met with all stress the importance of using timely, regularly updated data to guide decisions, with federal user fee guidance directing agencies to review user fees biennially to assure that charges are adjusted to reflect changes that have occurred. In addition, although FCC has made incremental changes to the fee schedule first established in the Communications Act and implemented by FCC in fiscal year 1994, FCC has not considered more holistic changes to the way regulatory fees are assessed. 
In part, FCC's difficulties in keeping its process current may be because its statutory framework is based on a telecommunications environment that no longer exists. The large number of fee categories—86 in fiscal year 2011—may have contributed to FCC's difficulties in keeping the division of fees aligned with the current regulatory activities on which it spends its time. Furthermore, FCC's lack of transparency in disclosing its methodology for dividing regulatory fees among fee categories and the different methodologies FCC uses to calculate fee rates for different industries have made it difficult for stakeholders to understand and comment on FCC's decisions related to its regulatory fee process. On July 17, 2012, FCC released an NPRM on regulatory fee reform, which, as described in our agency comments section, contains proposals that respond to many of the concerns raised in this report. The processes of other regulatory fee-funded agencies, both in the United States and internationally, may be instructive for FCC as it considers such issues as realigning its division of regulatory fees and increasing the transparency of the process. We acknowledge the inherent difficulties in reforming the process. Because of the zero-sum nature of FCC's regulatory fees, any significant changes to FCC's assessment of regulatory fees among industry sectors and fee categories would most likely result in fee increases for some sectors and fee decreases for other sectors. Not only is this likely to be controversial to some industry stakeholders, but this change—and any analysis required to better align regulatory fees to FCC's division of FTEs by fee category—is likely to be time-consuming and require some FCC resources, if done comprehensively. Some potential changes, such as changes to the bases on which FCC assesses regulatory fees, could add new administrative burdens on FCC or industry stakeholders. 
FCC will need to carefully analyze the likely effects of any changes to its current fee assessment. In releasing the regulatory fee reform NPRM, FCC has taken an important first step in this challenging reform effort, but significant analysis and decisions remain. Lastly, over time, FCC has collected approximately 2 percent more on average than is required in its annual appropriations acts. Because recent annual appropriations do not permit FCC to use any of these excess fees without congressional action, the excess fees have grown to $66 million and, absent any change in FCC's statutory authority and method of collecting fees, are likely to continue to increase. The decision of how to dispose of these excess regulatory fees as well as how to handle any future excess collections is a policy choice for Congress to make. Congress should consider whether FCC's excess fees (approximately $66 million through fiscal year 2011) should be appropriated for FCC's use, or, if not, what the disposition of these funds should be, and whether to change FCC's annual appropriations language to permit reconciliation of excess collections or to govern FCC's handling of any future excess collections. We recommend that the Chairman of the FCC, as part of FCC's effort to reform its regulatory fee process, take the following three actions:

- Determine whether and how the current fee schedule should be revised—including an overall analysis of the appropriate number of categories and bases for calculating rates—to reflect the current telecommunications industry and FCC's regulatory activities, in consideration of the processes of other regulatory fee-funded agencies that may be instructive, including, if appropriate, proposing to Congress any needed changes to its current statutory authority.
- Perform an updated FCC FTE analysis by fee category and establish a process to assure that the FTE analysis is performed at least biennially, consistent with federal guidance on user fees.
- Increase the transparency of FCC's regulatory fee process by describing in each future year's NPRM and subsequent report, in sufficient detail for stakeholders to understand, the methodology and analysis used to divide fees among fee categories, including the year any FTE data used was collected, any additional information needed to explain the effect of other adjustments, and the amount of excess fees collected.

FCC provided written comments on a draft of this report by letter dated July 17, 2012. These comments are summarized below and are reprinted in appendix II. FCC agreed with our recommendations and stated that an NPRM on regulatory fee reform, released on July 17, 2012, addressed them. FCC stated that the NPRM sets forth three goals to guide FCC in its reform initiative: fairness, administrability, and sustainability. FCC stated that to achieve these goals, the Commission has proposed a series of fundamental changes to its regulatory fee program that include, but are not limited to, proposals contained in our recommendations. For example, FCC stated that, consistent with our recommendations, the NPRM seeks comment on (1) using updated fiscal year 2012 FTE data to calculate regulatory fees, (2) whether reducing the number of regulatory fee categories would be advisable, and (3) whether the different bases on which regulatory fees are currently calculated should be reduced or made uniform among all services. FCC stated that, consistent with our recommendation to consider the processes of other regulatory fee-funded agencies, it would place a copy of our final report in the record of the rulemaking so that interested parties could comment on our recommendations and analyses. 
Regarding our recommendation that FCC review its division of FTEs at least biennially, FCC stated that its NPRM seeks comment on the frequency with which FCC should revisit its division of FTEs, such as annually. Furthermore, FCC stated that it would implement our recommendation to increase the transparency of its rulemaking process in its next annual regulatory fee proceeding, for fiscal year 2013. Finally, regarding our matter for congressional consideration related to excess fees, FCC stated that should Congress decide to examine these or any other issues regarding regulatory fees, FCC would provide any information Congress may request. We recognize that the proposals contained in FCC’s NPRM are responsive to our recommendations. In light of FCC’s lack of action after its 2008 FNPRM on regulatory fee reform, it remains critical that FCC continue to move forward on analyzing its proposals and determining how best to update its regulatory fee process. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the date of this report. At that time, we will send copies to the Chairman of FCC and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-2834 or goldsteinm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. 
In response to your request to review FCC’s regulatory fee process, we examined (1) FCC’s process for assessing regulatory fees among industry sectors and the results of this process, (2) FCC’s regulatory fee collections over the past 10 years compared to the amount it was directed to collect by Congress, and (3) alternative approaches to assessing and collecting regulatory fees that could be instructive for FCC as it considers reforms to its process. In examining FCC’s regulatory fee process, we reviewed relevant federal statutes, federal appropriations acts, congressional reports and hearing transcripts, FCC documents, and GAO reports. We spoke to stakeholders, including officials at FCC, industry trade associations, and fee-paying companies. Specifically, among others, we reviewed the following documents:

- the statute establishing FCC’s regulatory fee-collecting authority (Section 9 of the Communications Act of 1934);
- FCC’s appropriations acts, fiscal years 1994 to 2011;
- the Conference Report to Accompany the Federal Communications Commission Authorization Act of 1991, Sept. 17, 1991;
- the hearing transcript of the House Energy and Commerce Subcommittee on Communications and Technology Hearing on President Obama’s Fiscal 2013 Budget Proposal for the Federal Communications Commission, February 16, 2012;
- FCC Notices of Proposed Rulemaking, Further Notice of Proposed Rulemaking, and Reports and Orders related to FCC’s collection of regulatory fees, fiscal years 1994 through 2012;
- FCC budget justifications, fiscal years 2005 to 2013;
- FCC internal documentation of its regulatory fee methodology;
- FCC internal documentation related to its core financial system;
- FCC strategic plans, 2009 to 2014 and 2012 to 2016;
- FCC annual financial reports, fiscal years 2010 and 2011;
- prior GAO work on FCC, regulatory agencies, and user fees; and
- federal guidance on user fees and cost accounting, including the Office of Management and Budget’s Circular No. A-25 and the Statement of Federal Financial Accounting Standards 4.

We also spoke with stakeholders from the following entities:

- FCC—Office of the Managing Director, Enforcement Bureau, International Bureau, Media Bureau, Wireless Telecommunications Bureau, and Wireline Competition Bureau;
- two former FCC commissioners;
- industry associations—American Association of Paging Carriers, CTIA-The Wireless Association, Independent Telephone & Telecommunications Alliance, National Association of Broadcasters, National Cable and Telecommunications Association, and US Telecom; and
- fee-paying companies—Commonwealth Broadcasting, Critical Alert Systems, DIRECTV, Gannett Company Inc./Multimedia Holdings Corp., Intelsat, KRIS-TV, Level 3 Communications, Mainline Broadcasting, Midcontinent Media, People’s Telco, Quincy Newspapers (regarding its TV and radio interests), Southern Utah Telephone Company, Windstream Communications, and WUBU-FM.

To select the fee-paying companies (listed above) to interview about their perspectives on FCC’s regulatory fee process, we began with a list of companies provided by FCC. Our criteria for selecting companies from the FCC list were as follows: companies from each industry sector (wireless, wireline, broadcasting, cable, and international); companies from a variety of fee codes within the industry sectors; and an emphasis on small companies, as they may be less well represented in associations, may be less likely to submit public comments to regulatory fee rulemakings, and may be more affected by regulatory fees. Within each industry sector and fee category, we selected companies using these criteria and a few additional constraints. For example, if an FM radio station in a small market appeared to be owned by a company that also owned a station in a large market, then we treated it as large. Also, in most cases, companies were selected based on the fee categories in which they conducted their primary business, not on secondary business they might also have conducted. 
To understand FCC’s regulatory fee collections over the past 10 years compared to the amount it was directed to collect by Congress, we (1) met with officials to discuss FCC’s fee collection process and timeline and (2) analyzed FCC regulatory fee collection data from FCC’s internal financial system, Genesis, by FCC’s “payment type code” from fiscal year 2002 to fiscal year 2011. We assessed the reliability of the data by reviewing documentation on Genesis and through interviews, supplemented with questionnaires, with knowledgeable agency officials on Genesis and related internal controls. We determined that the data were sufficiently reliable for determining FCC’s total regulatory fee collections, including by industry sector, for fiscal years 2002 through 2011, and for determining the amount of late payments in each of those years. We compared these fee collection data with the amount Congress appropriated to FCC for each respective year. FCC’s payment type codes are codes FCC assigns to identify the fee category with which a regulatory fee payment is associated. FCC officials also provided us with a cross-reference that associated payment type codes with the main industry sectors used in our review (i.e., Broadcast, Cable, Wireline, Wireless, and International). Subsequently, we analyzed the fee payment data by industry sector to understand the extent, if any, to which excess fees collected were associated with a particular industry sector and to analyze the influence of late payments on the total amount collected. We also spoke with a budgeting and forecasting expert, who provided background information and context related to FCC’s use of estimates and forecasts in setting regulatory fees. To identify alternative approaches to FCC’s regulatory fee process that could be instructive as FCC considers reforms to its current process, we reviewed the regulatory fee processes of several domestic and foreign regulatory agencies. 
In selecting comparative agencies, we narrowed our scope to those agencies that were similar enough to FCC in mission and fee process such that possibly instructive alternatives could be identified. FCC is an independent agency that regulates interstate and international communications by radio, television, wire, satellite, and cable, and that assesses annual regulatory fees to offset its entire annual appropriation from Congress. We therefore selected independent regulatory commissions that recover the majority or all of their costs through annual fees assessed on regulated entities, including, in the U.S., the (1) Nuclear Regulatory Commission, (2) Federal Energy Regulatory Commission, and (3) Farm Credit Administration. In order to include an agency that regulates industries that are similar to those regulated by FCC, we also included (4) the Canadian Radio-television and Telecommunications Commission (CRTC). Lastly, after receiving a recommendation from an official at CRTC, we included (5) the Canadian Nuclear Safety Commission, the Nuclear Regulatory Commission’s Canadian counterpart. We conducted this performance audit from May 2011 to August 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Tammy Conquest (Assistant Director), Juan P. Avila, Russell Burnett, Patrick Dudley, Fred Evans, Colin Fallon, Bob Homan, Bert Japikse, Jacqueline M. Nowicki, Joshua Ormond, Steve Rabinowitz, and Alwynne Wilbur made key contributions to this report.
FCC must by law assess annual regulatory fees on telecommunications entities to recover its entire appropriation—about $336 million in fiscal year 2011. The entities from which FCC collects fees fall into one of five main industry sectors (broadcast, cable, wireline, wireless, and international) and are assigned to one of 86 fee categories, such as paging services. Recently, FCC stated that it was planning to consider reforms to its regulatory fee process. GAO was asked to examine (1) FCC’s process for assessing regulatory fees among industry sectors, (2) FCC’s regulatory fee collections over the past 10 years, and (3) alternative approaches to assessing regulatory fees. GAO reviewed FCC data and documents, interviewed officials from FCC and the telecommunications industry, and, to identify alternative approaches to assessing regulatory fees, met with five fee-funded U.S. and Canadian regulatory agencies. The Federal Communications Commission (FCC) assesses regulatory fees among industry sectors and fee categories based on obsolete data, with limited transparency. The Communications Act requires FCC to base its regulatory fees on the number of full-time equivalents (FTE) that perform regulatory tasks in certain bureaus, among other things. FCC based its fiscal year 2011 regulatory fee assessments on its fiscal year 1998 division of FTEs among fee categories. It has not updated the FTE analysis on which it bases its regulatory fees, in part to avoid fluctuations in fees from year to year. FCC officials stated that the agency has complied with its statutory authority by dividing fees among fee categories based on FTE data—although the data are from fiscal year 1998—since the statute does not prescribe a specific time for FCC to update its FTE analysis. As a result, after 13 years in a rapidly changing industry, FCC has not validated the extent to which its fees correlate to its workload. 
Major changes in the telecommunications industry include the increasing use of wireless and broadband services and a convergence of telecommunications industries. Moreover, FCC’s practice is inconsistent with federal guidance on user fees. As a result of FCC’s use of obsolete data in assessing regulatory fees, companies in some fee categories may be subsidizing companies in others. FCC officials said it has become more challenging to align current FTEs to the 86 fee categories given the increasingly cross-cutting nature of FCC’s work, raising the potential that FCC’s fee categories may also be out of date. FCC’s regulatory fee process also lacks transparency because of the limited nature of the information FCC has published on it. This has made it difficult for industry and other stakeholders to understand and provide input on fee assessments. On July 17, 2012, FCC released a regulatory fee reform Notice of Proposed Rulemaking (NPRM) proposing changes to FCC’s regulatory fee program related to many issues raised in this report. On average over the past 10 years, FCC collected 2 percent more in regulatory fees than it was required to collect. Prior to fiscal year 2008, FCC’s annual appropriations stated that any excess regulatory fees remained available until expended; since 2008, FCC’s annual appropriations have prohibited the use of any excess fees from the current year or previous years without an appropriation by Congress. As a result, $66 million in excess fees currently resides in an account at the Department of the Treasury that cannot be used without congressional action. The account has increased by an average of $6.7 million per year for fiscal years 2006 through 2011. Congress has not provided for the disposition of these accumulating excess funds. Approaches of other fee-funded regulatory agencies could be instructive as FCC considers reforms. 
For example, the Nuclear Regulatory Commission, Federal Energy Regulatory Commission, and Canadian Radio-television and Telecommunications Commission assess fees based on an annually or biennially updated analysis of costs by industry sector. Regarding excess fees, officials at five other fee-funded regulatory agencies stated that their agencies either apply excess fees as an adjustment to the subsequent year’s fees or refund them. Congress should consider whether FCC’s excess fees should be appropriated for FCC’s use or, if not, what their disposition should be. FCC should perform an updated FTE analysis and require at least biennial updates going forward; determine whether and how to revise the current fee schedule, including the number of fee categories; increase the transparency of its regulatory fee process; and consider the approaches of other fee-funded regulatory agencies. FCC agreed with GAO’s recommendations.
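The FTE-based fee division described above amounts to proportional allocation: each fee category's share of the required collection tracks its share of the regulatory FTEs attributed to it. The sketch below illustrates that arithmetic only; it is not FCC's actual methodology, and the FTE counts are hypothetical (FCC's real analysis spans 86 categories and dates from fiscal year 1998).

```python
# Illustrative sketch only -- not FCC's actual fee methodology.
# Shows the proportional arithmetic behind an FTE-based fee division:
# a category's share of the total required collection equals its share
# of the regulatory FTEs attributed to it.

def allocate_fees(total_required, ftes_by_category):
    """Divide total_required among categories in proportion to FTEs."""
    total_ftes = sum(ftes_by_category.values())
    return {
        category: total_required * ftes / total_ftes
        for category, ftes in ftes_by_category.items()
    }

# Hypothetical FTE counts for four of the five main industry sectors.
ftes = {"wireline": 120, "wireless": 90, "broadcast": 60, "international": 30}

# FCC's fiscal year 2011 fee requirement was about $336 million.
fees = allocate_fees(336_000_000, ftes)
```

Under this scheme, a sector holding 120 of 300 FTEs carries 40 percent of the collection, which is why stale FTE data can shift costs between payers: if workload migrates (say, from wireline to wireless oversight) but the 1998 FTE counts do not change, wireline payers effectively subsidize wireless regulation.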
Hurricanes Katrina and Rita caused extensive human suffering and damage in Louisiana, Mississippi, and Texas. Hurricane Katrina made landfall in Mississippi and Louisiana on August 29, 2005, and alone caused more damage than any other single natural disaster in the history of the United States. Hurricane Katrina destroyed or made uninhabitable an estimated 300,000 homes—more than three times the total number of homes destroyed by the four major hurricanes that hit the continental United States in August and September 2004. Hurricane Rita followed on September 24, 2005, making landfall in Texas and Louisiana and adding to the devastation. Hurricane Katrina alone caused $96 billion in property damage. Voluntary organizations have historically played a large role in the nation’s response to disasters. These organizations raised more than $3.4 billion in cash donations in response to the Gulf Coast hurricanes as of February 2006, according to the Center on Philanthropy at Indiana University. The American Red Cross raised more than $2.1 billion, about two-thirds of all dollars raised. The Salvation Army raised the second-highest amount, $325 million, Catholic Charities raised about $150 million, and the Southern Baptist National Convention raised about $20 million. Voluntary organizations’ roles in responding to disasters can vary. Some, including the American Red Cross and the Salvation Army, are equipped to arrive at a disaster scene and provide immediate mass care, including food, shelter, and clothing, and in some circumstances, emergency financial assistance to affected persons. Other voluntary organizations focus on providing longer-term assistance, such as job training, scholarships, or mental health counseling. In addition, churches and other community organizations that do not traditionally play a role in disaster response may begin providing these services. 
For example, many small churches and other organizations provided sheltering services after the Gulf Coast hurricanes. Since its founding in 1881, the Red Cross has offered humanitarian care to the victims of war and devastating natural disasters. The organization is a private nonprofit entity but, since 1905, has had a congressional charter. Under the congressional charter the purposes of the Red Cross are to provide volunteer humanitarian assistance to the armed forces, serve as a medium of communication between the people of the United States and the armed forces, and provide disaster prevention and relief services. Although it is congressionally chartered, the Red Cross provides these services as a private organization. Following a disaster, the Red Cross serves as a direct service provider to disaster victims. In this capacity, the organization provides services that include feeding, sheltering, financial assistance, and emergency first aid. After Hurricanes Katrina and Rita, the Red Cross estimated that it provided more than 3.7 million hurricane victims with financial assistance, 3.4 million overnight stays in almost 1,100 shelters, and more than 27.4 million hot meals and 25.2 million snacks. According to the Red Cross, its efforts after Hurricanes Katrina and Rita were larger than for any previous disaster relief effort. For example, the Red Cross provided more than six times as many shelter nights after Katrina and Rita as it did in the entire 2004 hurricane season, when four major hurricanes—Charley, Frances, Ivan, and Jeanne—struck the continental United States in August and September. The NRF is a guide to how the nation conducts all-hazards disaster response, including support for voluntary organizations providing shelter, food, and other mass care services. The NRF revises the nation’s prior disaster plan, the NRP, which was originally signed by major federal government agencies, the Red Cross, and NVOAD in 2004. 
Major federal government agencies, the Red Cross, NVOAD, and other voluntary organizations are included in the NRF. The NRF is designed on the premise that disaster response is generally handled by local jurisdictions. In the vast majority of disasters, local emergency personnel, such as police, fire, public health, and emergency management personnel, act as first responders and identify needed resources to aid the community. Local jurisdictions can also call on state resources to provide additional assistance. The federal government responds to state or local requests for assistance when an incident occurs that exceeds state or local response capability or when an incident falls within its own response authorities. In such situations it may use the National Response Framework to involve all appropriate response partners. The primary authority under which the federal government provides assistance to states after a disaster is the Robert T. Stafford Disaster Relief and Emergency Assistance Act (Stafford Act). It authorizes the President to issue a major disaster or emergency declaration when a state’s resources are overwhelmed and the governor makes a request for federal assistance. Under the Stafford Act, the federal government provides assistance for mass care, debris removal, restoration of facilities, and financial aid to families and individuals, among other activities. After disasters that result in extraordinary levels of mass casualties or damage, called catastrophes, the federal government can invoke the Catastrophic Incident Annex of the NRF. The Annex does not assume that local governments—which may no longer be functioning—will ask for assistance, but rather that the federal government will provide resources to the local level before being asked. In addition to outlining the organizational structure used to respond to disasters, the National Response Framework designates 15 emergency support functions. 
ESF-6 creates a working group of key federal agencies and voluntary organizations to coordinate federal assistance in support of state and local efforts to provide (1) mass care, including sheltering, feeding, and emergency first aid; (2) emergency assistance, such as coordination with voluntary organizations, reunification of families, pet evacuation and sheltering, support to specialized shelters, and support to medical shelters; (3) housing, both short- and long-term; and (4) human services, such as counseling and processing of benefits. The NRF assigned FEMA to be the primary agency for a new component of ESF-6, called emergency assistance, to ensure that immediate needs that are beyond the scope of traditional mass care are addressed. Emergency assistance adds new expectations for coordination with voluntary organizations by the ESF-6 working group, stating that the group works with non-governmental and faith-based organizations to facilitate an inclusive, coordinated response effort. In addition, the emergency assistance component includes the expectation that a National Shelter System (NSS) will provide data from shelters. The NSS is a Web-based system that provides information on shelter facilities, capacity, and population counts. In addition to its role as a service provider, the Red Cross has specific responsibilities as a support agency under ESF-6. ESF-6 specifies that these activities are separate from its role as a direct service provider. The Red Cross announced in January 2008 that it planned to make significant layoffs of staff at its national headquarters. These layoffs could potentially have implications for the Red Cross’ capacity to meet its NRF responsibilities. However, the Red Cross had not announced details of these layoffs as of mid-February 2008. Figure 1 describes the Red Cross’ roles as a service provider and in ESF-6. 
Estimates place the share of individuals with disabilities at nearly 20 percent of the U.S. population, and at 72 percent among people over age 80. Although there are few statistics on the impact of Hurricane Katrina on individuals with disabilities, the White House report on the federal response to Katrina estimated that over two-thirds of the 1,300 victims who died were over age 60. Individuals with disabilities are a diverse group, and their disabilities affect functioning in a number of different ways. For example, some disabilities, such as paraplegia, affect mobility, and others, such as deafness, affect communication. Many of these disabilities can be prepared for and accommodated in general population shelters. For example, with modifications to existing facilities, many mobility impairments can be addressed. These modifications can include accessible routes from sleeping quarters to dining and toilet/bathing areas for people with wheelchairs, crutches, or walkers; ramps; and handrails in toilet facilities. Modifications for communication-related disabilities can include Braille signs for the blind. State and local governments operate medical shelters for those individuals with serious medical needs, including some individuals with disabilities. On October 4, 2006, Congress passed the Post-Katrina Emergency Management Reform Act of 2006. That Act elevated FEMA’s status within the Department of Homeland Security, enhanced its organizational autonomy, and redefined its role. It provided that FEMA’s primary mission is to reduce the loss of life and property and protect the United States from all hazards by leading efforts to prepare for, respond to, and recover from natural disasters, acts of terrorism, other man-made disasters, and catastrophic incidents. 
In partnership with state, local, and tribal governments, emergency response providers, the private sector, and nongovernmental organizations as well as other federal agencies, FEMA is tasked with building a national system of emergency management. The Act included a number of provisions that should provide a new focus on assistance to individuals with disabilities in connection with these efforts. It directs the Administrator of FEMA to appoint a Disability Coordinator who is required to report directly to the Administrator to ensure that the needs of individuals with disabilities are being properly addressed in emergency preparedness and disaster relief, and assigns a detailed set of responsibilities to the Coordinator. The Post-Katrina Act provides authority for FEMA to address the needs of individuals with disabilities by adding the Americans with Disabilities Act’s definition of “individual with a disability” to the Stafford Act and requires that the FEMA Administrator develop guidelines concerning the provision of services to individuals with disabilities in connection with emergency facilities and equipment. The Post-Katrina Act adds individuals with disabilities and those with limited English proficiency to the discrimination prohibition provisions of the Stafford Act and directs FEMA to work with state and local governments to identify critical gaps in regional capabilities to respond to populations with special needs. The Public Assistance program provides assistance primarily to state and local governments to repair and rebuild damaged public infrastructure and includes activities such as removing debris, repairing roads, and reconstructing government buildings and utilities. Specifically, applicants submit requests for work that are considered for eligibility and subsequent funding. 
FEMA obligates funds for approved projects, providing specific amounts to complete discrete work segments on projects, while state and local governments pay the remainder based on the state’s cost share agreement with FEMA. As of March 16, 2007, FEMA had obligated about $4.6 billion to Louisiana and about $2 billion to Mississippi through its Public Assistance program. Under the Public Assistance program, state and local governments can reimburse voluntary organizations for several types of expenses. First, they can be reimbursed for facility damage if they meet certain eligibility criteria such as being an educational, medical, or custodial care facility. Second, voluntary organizations can be reimbursed for evacuation and sheltering expenses (such as increased utility expenses, cots, and food). The Post-Katrina Act expanded the universe of voluntary organizations eligible for reimbursement for facilities damage after future disasters. Private non-profit facilities that serve certain specified functions (education, utility, irrigation, emergency, medical, rehabilitation, and temporary custodial care) as defined by the President, no longer need to provide essential services of a governmental nature to the general public in order to be eligible for reimbursement. The Act also added another group of private nonprofit facilities potentially eligible for assistance by defining the term to include any facility providing essential services of a governmental nature to the general public (including museums, zoos, performing arts facilities, and community arts centers), as defined by the President. The facilities in this group are similar to those identified in FEMA regulations. Under the Public Assistance program, the federal government typically pays 75 percent of costs, and state governments pay 25 percent; however, after Katrina the federal government paid 100 percent of the cost-share requirement in 45 states that sheltered evacuees. 
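The cost-share split described above is straightforward percentage arithmetic. A minimal sketch (the function name and example amounts are illustrative, not drawn from FEMA materials):

```python
# Sketch of the Public Assistance cost-share arithmetic described above.
# The 75/25 federal/state split is the typical default; after Katrina the
# federal share was raised to 100 percent for states sheltering evacuees.

def cost_share(eligible_cost, federal_share=0.75):
    """Return (federal, state) portions of an eligible project cost."""
    federal = eligible_cost * federal_share
    return federal, eligible_cost - federal

typical = cost_share(1_000_000)            # default 75/25 split
post_katrina = cost_share(1_000_000, 1.0)  # 100 percent federal share
```

For a $1 million eligible project, the default split leaves the state paying $250,000; under the post-Katrina exception the state portion falls to zero.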
FEMA replaced the American Red Cross as the primary agency for mass care in large part because the two organizations agreed that the primary agency needs to be able to direct federal resources. Although the Red Cross’ specific responsibilities under the NRF have largely remained the same, one change is that the Red Cross will no longer be expected to report data for all shelters, only Red Cross shelters. The changing roles of the Red Cross and FEMA present several implementation issues. With respect to sheltering, the NRF includes the expectation that a national shelter system will be developed to collect and report shelter data. FEMA and the Red Cross have developed an initial system for collecting and reporting data on shelters, but FEMA is still working to develop a federal shelter database. Furthermore, some states have indicated that they are concerned about their ability to collect and report data from non-Red Cross shelters. In addition, the NRF places increased responsibility on FEMA for coordinating with voluntary organizations, but FEMA does not have sufficient staff resources to meet this responsibility. Last, although FEMA has made progress, its efforts to identify and fill gaps in mass care capabilities are not yet complete. The Red Cross and FEMA agreed in February 2007 letters that because the Red Cross cannot legally direct federal resources, FEMA is better positioned to be the primary agency for ESF-6 mass care. The letters indicated that the primary agency for mass care should be able to direct federal resources in response to state requests for assistance, which the Red Cross—as a nongovernmental entity—does not have the legal authority to do. The Red Cross’ inability to direct federal resources after the Gulf Coast hurricanes contributed to problems that we highlighted in our June 2006 report. 
After Katrina, the Red Cross could not go directly to federal agencies for resources to fulfill requests for assistance, but instead had to request these items through FEMA, which then directed the appropriate federal agencies to supply the needed materials or services. This resulted in confusion about roles and led to duplicative requests. In the February 2007 letters, the Red Cross and FEMA also agreed that the expansion of ESF-6 to include a new function—emergency assistance— provided another reason why FEMA should be the primary agency for mass care. The primary agency for mass care will need to coordinate mass care activities with the primary agency for emergency assistance— FEMA—and having different primary agencies could make this more difficult. For example, Red Cross and FEMA officials told us that the Red Cross is not knowledgeable about activities in the emergency assistance function, which would make it difficult for the Red Cross to coordinate these activities with mass care. FEMA and the Red Cross agreed that having FEMA serve as the primary agency for all four functions of ESF-6 would help ensure a unified command structure during operational response. Although the Red Cross role for mass care under the NRF will shift from that of a primary agency to a support agency, its specific responsibilities will largely remain the same as under the NRP. For example, the organization still provides staff to work at DHS offices to support ESF-6 activities and supports DHS in working with state agencies for mass care in planning and preparedness activities. However, the Red Cross will no longer have two key responsibilities that it had under the NRP. First, the Red Cross will no longer be responsible for filling out requests from states and other local organizations for federal assistance after a disaster and sending them to FEMA. This activity will now be performed primarily by states. 
Under the NRF, the Red Cross will provide guidance to states as they determine their needs for federal assistance. FEMA did tell us, however, that in some rare circumstances the Red Cross may fill out requests independently of states. States also filled out these requests under the NRP—along with the Red Cross—and state officials that we interviewed told us that they were familiar with this process. Second, the Red Cross will no longer be responsible for reporting data on the number and characteristics of people in shelters that are operated by organizations other than the Red Cross. After Katrina, the Red Cross was responsible for reporting data on all shelters to FEMA, including those operated by other organizations, but both FEMA and the Red Cross reported problems with this process. Now, states are responsible for reporting data on non-Red Cross shelters to FEMA. The shifting ESF-6 roles of the Red Cross and FEMA present several implementation issues for FEMA, including reporting shelter data, coordinating with voluntary organizations, and identifying and filling gaps in mass care capabilities. In its role as primary agency, FEMA has made progress toward meeting NRF expectations for an NSS, but still faces several challenges. An initial NSS, owned and paid for by the Red Cross with FEMA as a partner agency, is currently operational. However, FEMA is still working to develop a federal NSS that will be owned and housed at FEMA. When the federal NSS is complete, the Red Cross will enter and verify data for Red Cross shelters, and states will enter and verify data for all other shelters. FEMA officials told us that the federal NSS will be finished in spring 2008. Although the current version of the NSS can provide information on shelter location, capacity, population, physical accessibility for people with disabilities, and managing agency, the system cannot track demographic data on the types of populations residing in shelters. 
FEMA officials told us that FEMA is working to address this and other issues that have been identified by states in the federal NSS. For example, states identified the need for integrating Geographic Information Systems (GIS) into the system to provide more accurate data. FEMA told us that it would incorporate these elements into the updated system. In addition, many states still need to enter data into the system in preparation for disasters. FEMA officials said that as of November 2007, no more than four states had entered shelter location data and, as a result, most of the data in the system is on Red Cross shelters. The accuracy of the shelter data is contingent upon states reporting information into the system and updating it frequently, according to FEMA officials. Some state officials told us that they had just recently received training on NSS and were currently in the process of compiling the data needed. FEMA has offered states the opportunity to have FEMA staff help include non-Red Cross shelter data in the NSS after a disaster until NSS implementation is complete. FEMA officials told us that it will take 2 to 3 years to fully implement the federal NSS, because of training and time needed for states to collect, input, and verify data. During the 2007 California wildfires, FEMA deployed staff to help state officials collect and report data from non-Red Cross shelters with the NSS because California officials had not yet entered shelter data into the system. California officials said that the NSS was useful because it gave a single, accurate report on the shelter population. State officials we spoke with told us that they could collect shelter data from pre-planned shelters, but officials in some states were concerned about their capacity to collect and report data from unplanned shelters that are likely to open after a major disaster. 
These shelters are likely to open if designated shelter sites are overcrowded, evacuees are unable to reach designated sites, or the designated sites are affected by the disaster. Officials from some states told us that they do not have a mechanism in place to collect data from the small, independent organizations that typically open these shelters. In contrast, officials from another state told us that they do not anticipate the need for unplanned shelters to open after a major disaster, and, as a result, are not concerned about collecting these data. Collecting data on unplanned shelters was a significant challenge after Hurricane Katrina. There was no centralized system in place for collecting and reporting these data and, as a result, they often went unreported, according to FEMA and Red Cross officials. Because government and voluntary organizations did not know where many of these shelter residents were staying, they had problems planning for and delivering needed resources. Changes in FEMA’s role under ESF-6 also present implementation issues with respect to coordination with voluntary organizations. The NRF includes a new component on voluntary organization coordination requiring that the ESF-6 working group—for which FEMA is the primary agency—coordinate federal response efforts with the efforts of state, local, private, non-governmental, and faith-based organizations. As the primary agency for ESF-6, FEMA will be primarily responsible for addressing these issues. These requirements for coordination with voluntary organizations are more extensive and specific than those in the NRP, and FEMA officials told us that FEMA voluntary agency liaisons (VALs) will fill this role. VALs are FEMA staff members who coordinate the activities of voluntary organizations with FEMA. Most FEMA VALs are based in FEMA regions and work with state and local voluntary organizations and the regional offices of national voluntary organizations (see app. 
III for a job description for VALs). While the NRF calls for an enhanced FEMA role in helping coordinate voluntary agency assistance, FEMA does not have the staff resources necessary to meet this objective. As of July 2007, each FEMA region had one full-time VAL who could work on the entire range of coordination issues with voluntary organizations, as shown in figure 2. FEMA regions can include up to eight states. FEMA VALs are tasked with coordinating FEMA activities and policies with voluntary organizations across their regions and building the capacity of these organizations, according to voluntary organization and FEMA officials. Effective VALs build relationships and network with organizations; however, many officials from voluntary organizations and several senior FEMA VALs told us that there are not enough full-time VALs for them to develop strong relationships in all of the areas covered. For example, one of the primary responsibilities of VALs is to improve coordination with state- and local-level voluntary organizations, but officials from FEMA and voluntary organizations said that in many states coordination between these organizations and government is weak. In addition, officials from some voluntary organizations told us that VALs have so much work that it is difficult to communicate with them. Officials from voluntary organizations also said that there were not enough VALs after disasters. During the response to disasters, VALs can be pulled out of their own regions to assist in disaster-affected areas. For example, after Katrina, VALs from across the country were brought to the Gulf Coast. As a result, during Katrina these VALs were not available to respond to their own smaller-scale regional disasters, even though they had built relationships with voluntary organizations in those states. At the time of Katrina, FEMA was providing states with assistance for 38 other disasters across the nation. 
Disaster research experts told us that there should be additional FEMA VALs in each region. FEMA officials told us that there are no plans to change the current staffing structure for VALs. A review of the response to Katrina by the DHS Office of the Inspector General (OIG) identified broader problems with human capital management at FEMA. For example, the DHS OIG found that FEMA does not have staff or plans adequate to meet its human capital needs during catastrophic disasters. FEMA has two other types of VALs: reserve VALs and Katrina VALs. However, the job responsibilities of these individuals constrain them from performing many VAL job duties. As of December 2007, FEMA had 85 reserve VALs that it could call upon in response to major disasters and 36 Katrina VALs. The reserve VALs are activated only during disasters, however, and are not available to network and build the capacity of voluntary organizations during preparedness efforts. Furthermore, the Katrina VALs are designated specifically to address Katrina-related issues, and FEMA is not planning to retain these individuals after Katrina-related work is finished. In addition, VALs do not receive role-specific training and, as a result, some VALs have not been fully prepared for their duties. This lack of specialized training has meant that VALs are not always prepared to coordinate FEMA activities with the voluntary sector. For example, VALs do not receive any training on how voluntary organizations can receive reimbursement for their mass care activities during disasters. One voluntary organization official we spoke with said that while some VALs were very helpful, with access to information and resources the organization would not otherwise have had and a good understanding of FEMA policies, other VALs were not familiar with key FEMA Public Assistance policies for the reimbursement of voluntary organizations. 
A senior FEMA official told us that FEMA has completed a VAL Handbook and is preparing to develop a pilot training course for VALs. The DHS OIG also found that FEMA does not have an organized system of employee development. FEMA’s broad new responsibilities under the Post-Katrina Act, and its new role as the primary agency for mass care, also present implementation issues for FEMA with regard to identifying and filling gaps in mass care capabilities. Although FEMA has taken several steps to address these issues, its efforts are not yet complete. For example, the Post-Katrina Act specifically requires that FEMA identify gaps in mass care capabilities at the state level. In response, FEMA has undertaken a gap analysis initiative that examines, by state, the gaps in disaster preparedness. This initiative, which began in 2007, first focused on identifying gaps in hurricane-prone states along the Eastern seaboard and Gulf Coast. A FEMA official responsible for these efforts told us that the initial gap analysis had been completed in 18 high-risk states as of December 2007. Eventually, FEMA plans to roll this initiative out to every state and to make it all-hazards rather than hurricane-specific. FEMA officials told us that they are also working to identify resources for situations in which the mass care capabilities of government and voluntary organizations are exceeded, but that FEMA is still working to develop a standardized system for coordinating these resources. FEMA officials told us that FEMA has developed contracts with private companies for mass care and other disaster resources for situations in which federal capabilities are exceeded. After Katrina, FEMA made four noncompetitive awards to companies for housing services. 
These contracts have since been broadened through a competitive process so that, if a disaster struck now, they could also include facility assessment for shelters; facility rehabilitation, including making facilities accessible; feeding; security; and shelter staffing. The FEMA official in charge of these contracts said that contractors had assessed facilities to determine whether they could be used as shelters in the Gulf Coast during the summer of 2007. He said that these contracts give the federal government the option of purchasing whatever resources it needs in response to disasters. FEMA officials told us, however, that they prefer using federal resources when possible because contract services are more expensive. Another round of contracts will be awarded in May 2008 on a competitive basis. However, FEMA is still working to standardize training, resources, and terminology across the many different organizations—including the private sector—involved in disaster response to improve coordination among these organizations. FEMA is working to develop standardized training that could be provided to staff from all of these organizations. FEMA is currently working with the Red Cross to develop a standardized training curriculum based on current Red Cross training, according to a FEMA official responsible for these efforts. Having standardized training could, for example, make it easier for employees of organizations providing services contracted by the federal government to work in shelters operated by other organizations. A key FEMA official said that this standardized training should be complete by summer 2008. FEMA is also working to standardize disaster relief resources and terminology across the providers of mass care services. The FEMA official said that such standardization allows disaster service providers to communicate more readily and to share resources across organizations when necessary. 
NVOAD is assisting FEMA by coordinating efforts among voluntary organizations to standardize the types of resources used in disaster response. FEMA and NVOAD officials told us that having organizations use the same language and resources makes it easier to scale up disaster response operations. NVOAD is in a unique position to coordinate voluntary organizations active in disaster assistance under ESF-6. NVOAD brings together voluntary organizations of diverse objectives and sizes under one organization. Moreover, NVOAD does not compete with its members for funds, since it is not a direct service provider. While NVOAD has facilitated relationship building among its members prior to disasters, its coordination efforts in responding to Hurricanes Katrina and Rita were not effective in providing key information. Due to staff limitations, the organization was unable to fully meet its information-sharing responsibilities under ESF-6 during the Gulf Coast hurricanes. Using lessons learned from Katrina, NVOAD has identified ways to potentially improve information sharing with its members, such as through enhanced use of Web technology. For several reasons, NVOAD is well positioned to coordinate voluntary organizations active in disaster assistance under ESF-6. First, NVOAD is a coordinating agency, not a direct service provider. This means NVOAD does not compete with its members for funds. Instead, the organization is funded primarily by member organizations. Second, NVOAD brings together voluntary organizations with diverse objectives and sizes: organizations that provide various types of disaster response and recovery services, such as sheltering, feeding, home-building, and case management, as well as both secular and faith-based organizations. Officials from member organizations told us that NVOAD helps them prepare for disasters by developing relationships with other individuals active in disaster response and recovery. 
These officials told us that developing these relationships is a critical part of preparing for disasters, and that NVOAD provided an opportunity to get to know officials from other organizations. Although members we spoke with noted that NVOAD’s efforts were useful in providing opportunities for networking and collaboration, some of the larger and older members maintained that the organization does not represent their needs well. For example, officials from one member organization told us that NVOAD is increasingly serving the needs of new, start-up disaster response organizations, rather than focusing on its larger members. NVOAD’s executive director said that one strength of the organization is that it gives smaller members representation in ESF-6. NVOAD has historically helped organizations prepare for disaster response through relationship building, but as shown in table 1, the NRF also includes responsibilities for NVOAD in disaster response, in addition to disaster preparedness. NVOAD’s ESF-6 roles and responsibilities have remained the same as those specified in the NRP, and include information-sharing and convening voluntary organizations, but do not include directing the activities of its members. NVOAD fulfills its ESF-6 information-sharing role in several ways. First, NVOAD provides information about its members’ services to FEMA, such as where its members are operating and what services they are providing. One FEMA official said that having NVOAD report information for all of its members made it easy to get updates from the voluntary sector. Second, the NVOAD organization structure provides a system for coordination after disasters. NVOAD includes a number of committees composed of NVOAD member organizations that address key mass care issues after disasters, such as managing donations and long-term recovery. 
For example, after the 2007 California wildfires, the donations management committee immediately met with state officials to identify warehouse space to store goods donated by the private sector until they were needed. Third, NVOAD shares information with voluntary organizations about the situation on the ground and services being provided by different organizations after disasters. For example, NVOAD hosted daily conference calls for several months after Katrina to coordinate with its members. These conference calls provided situation updates, brought new organizations up to speed on the basics of disaster response, and gave organizations a forum to share information and collaborate with each other. We found that these conference calls were not an effective way of communicating after the hurricanes. The conference calls included NVOAD members, federal agencies, and voluntary organizations that were not NVOAD members, some of which were new to the disaster response field. FEMA officials provided information on the situation on the ground and explained how FEMA was providing assistance. We participated in one conference call and found that it was difficult to follow: it was challenging to identify which region of the disaster zone speakers were discussing, members were discussing issues that were not relevant to everyone on the call, and there were too many people on the call. NVOAD members with whom we spoke identified similar concerns about the effectiveness of the conference calls. NVOAD’s executive director said that there were often 75 to 100 people on a single conference call after Katrina. Some NVOAD members also told us that the conference calls often ran long, which could get in the way of effectively meeting hurricane victims’ needs. Figure 3 shows the flow of information during NVOAD phone calls. 
NVOAD’s executive director at the time of Katrina said that NVOAD was limited by staff resources and, as a result, could not do more than provide conference calls. During Hurricanes Katrina and Rita, NVOAD had one staff person. NVOAD currently has two staff persons: an executive director and an administrative position. NVOAD’s fiscal year 2006 operating budget was about $270,000, and NVOAD relies primarily on funds from its members, according to NVOAD’s current executive director. NVOAD dues currently range from $3,500 per year for its largest members to $750 for its smaller members, according to the executive director. Since the 2005 Gulf Coast hurricanes, NVOAD has increased its membership from 40 to 49 organizations, and the organization is currently considering increasing membership further. NVOAD’s current executive director told us that the organization of the conference calls after Katrina was not an effective way to communicate with its members. NVOAD has identified ways to potentially enhance information sharing with its members. The current executive director told us that better use of Web technology would allow NVOAD to provide members with more timely disaster updates and information about member services on the ground. NVOAD members that we spoke with told us that it would be helpful if NVOAD used Web technology to provide certain information so that they would not need to participate in lengthy conference calls. One voluntary organization official suggested that key information could be provided online, such as updates about the situation on the ground, information about which organizations are operating in the disaster zone, and what services are being provided by those organizations. However, the executive director said that improving the organization’s use of Web technology would require additional resources. FEMA has started addressing the problems with mass care services for the disabled that occurred after Hurricanes Katrina and Rita. 
Various assessments of FEMA’s performance after the hurricanes identified needed improvements by FEMA in two areas: providing guidance to assist states and others in planning to better meet the needs of the disabled, and increasing the participation of people with disabilities and subject-matter experts in the planning process. The Post-Katrina Act included requirements in each area, and FEMA has taken actions in both of these areas. For example, in response to the Act, FEMA hired a Disability Coordinator to integrate disability issues into federal emergency planning and preparedness efforts. However, FEMA has generally not coordinated with NCD as required by the Act, which could result in disability-related concerns not being fully addressed. After the 2005 Gulf Coast hurricanes, reports from the Senate Committee on Homeland Security and Governmental Affairs, DHS, and NCD identified a lack of planning as one of the most significant problems related to the provision of mass care to the disabled. For example, FEMA’s Nationwide Plan Review, released in June 2006, reviewed the planning efforts of states and major urban areas. The report found that “One of the most serious deficiencies uncovered in the Review was inadequate planning for special needs populations,” and that no state or urban area was found to have sufficiently planned for these populations. The Nationwide Plan Review also recommended several specific steps that FEMA should take to help state and local governments with such planning: develop a consistent definition of “special needs” to clarify state planning efforts; help local governments plan by providing guidance on disability-related demographic analysis; and increase the participation of people with disabilities and subject-matter experts in the planning and preparedness process. In addition to recommending actions to be taken by FEMA, the Nationwide Plan Review also found that states need stronger accountability for the provision of mass care to people with disabilities. 
The review concluded that states should develop standards for the care of individuals with disabilities, with an emphasis on ensuring that accessibility for persons with disabilities is a priority factor in selecting emergency shelter sites. FEMA has taken several steps to help improve planning for the disabled population. For example, FEMA developed a consistent definition of the term “special needs” that is used in the NRF. The Nationwide Plan Review said that at the time of Katrina the term lacked the specificity needed for emergency managers to accurately determine the capabilities necessary to respond to community needs. Through a working group of stakeholders, FEMA developed a definition of special needs that refers to those who may have additional needs before, during, or after an incident in one or more of the following functional areas: maintaining independence, communication, transportation, supervision, and medical care. For example, hearing-impaired individuals would be categorized as those needing assistance with communication. FEMA is also developing guidance for states as they plan for serving disabled populations. One such initiative is guidance on collecting data on disabled populations, which was expected to be released in December 2007, according to a FEMA official. This guidance will respond to the Nationwide Plan Review’s recommendation that the federal government help state and local governments incorporate disability-related demographic analysis into emergency planning. In addition, in September 2007, FEMA released target capabilities that define the disaster response capabilities that states should have, including capabilities for the disabled. 
For example, the document includes a capability that states should “Develop plans, policies, and procedures to ensure maximum retention of people with disabilities in general population shelters.” A second phase of the target capabilities project will include capabilities that states should have for populations that require medical care. The Post-Katrina Act required that FEMA take steps to include people with disabilities, as well as subject-matter experts in the field, in planning and preparedness efforts, as recommended by the Review. FEMA appointed a Disability Coordinator, as required by the Act, who began work for FEMA in the summer of 2007. FEMA officials told us that this individual has begun working across FEMA to include disability-related concerns in FEMA initiatives, and with disability organizations to ensure that their concerns are addressed. For example, the Coordinator has been involved in the drafting of the NRF, according to a FEMA official. In addition, the Coordinator was on the ground in California to assist with meeting the needs of individuals with disabilities after the wildfires in the fall of 2007. For example, the Coordinator worked to ensure that information and materials disseminated to the public were in alternative formats. However, FEMA has generally not coordinated with NCD, as required by the Act. The Act requires FEMA to coordinate with NCD in the implementation of several different initiatives, as shown in figure 4. NCD and FEMA officials told us that NCD had not been consulted for many of these initiatives. For example, NCD was not consulted about the Comprehensive Assessment System, which assesses the nation’s prevention capabilities and overall readiness. FEMA officials who work on this initiative said that they had not consulted directly with NCD, but were coordinating with officials within FEMA who are knowledgeable about disability issues. 
Other FEMA officials said that NCD has provided public comment on the NRF and other key FEMA documents. Officials from NCD said that there has been little coordination with FEMA and that they had not been offered the chance to provide input on a number of these initiatives. As a result, disability-related issues may not be fully addressed. In the Nationwide Plan Review, FEMA reported that it is important to include the disabled in planning because it provides responders with hands-on experience about the needs of people with disabilities in disaster situations, and provides planners with the ability to test their plans and modifications. The two organizations have met several times to discuss how coordination would occur, most recently in October 2007. However, as of January 2008, the agencies had not agreed to specific action steps for how they would coordinate. In response to requirements of the Post-Katrina Act, FEMA has also taken steps to address the need for greater state accountability for the mass care needs of individuals with disabilities. The Act requires that, as part of FEMA’s gap analysis initiative, FEMA identify gaps in response capabilities for special needs populations at the state level. The template used by state and federal planners to identify gaps requires a substantial amount of information about special needs sheltering. For example, one of the indicators of readiness is whether states have formulas established for estimating the number of special needs evacuees who will require public shelter. In response to Post-Katrina Act requirements, FEMA also released guidance in August 2007 on accommodating disabled individuals. The guidance identifies laws that apply to nonprofits involved in disaster response and provides short summaries of each law. The guidance does not provide tools that states and nonprofits can use to implement these requirements. 
FEMA is planning to release additional guidance to provide state and local officials with further information to improve sheltering for individuals with disabilities. In July 2007, the Department of Justice, which enforces the Americans with Disabilities Act (ADA), released detailed operational guidance for accommodating disabled populations in emergency shelters. This guidance provides a checklist that can be used to evaluate the accessibility of potential shelter sites. The checklist includes detailed questions that could assist shelter managers in evaluating shelter sites, such as whether there is an accessible route from shelter living space to the shelter’s health and medical facilities. FEMA’s August 2007 guidance includes a Web site link to the Department of Justice guidance. The Red Cross has taken several steps to address problems that occurred after the Gulf Coast hurricanes in meeting the mass care needs of disabled individuals. These problems included a lack of appropriate intake procedures, resulting in some disabled individuals being turned away from Red Cross shelters, and a lack of accessible shelter facilities. For example, in some shelters medical units were located on upper floors or other inaccessible areas, and individuals with mobility impairments were not provided with accessible alternatives. In response to such problems, the Red Cross has developed an intake form intended to assist volunteers in determining whether a particular shelter can meet an individual’s needs and has also developed new training on serving the disabled. However, the Red Cross continues to face challenges in this area: Red Cross officials said that local chapters have considerable autonomy within the organization and that it can be difficult to encourage chapters to implement accessibility policies. 
Other major national voluntary organizations that we examined had increased their attention to services for the disabled, but did not identify a need to improve their services for this population. We did not identify concerns with the services of these organizations. After Hurricane Katrina, officials from the government and disability organizations identified two main concerns with the mass care services provided by the Red Cross to individuals with disabilities. The first was that some Red Cross shelter managers did not use shelter intake procedures that would have enabled them to identify individuals’ specific disabilities and determine whether the shelter could serve those individuals. As a result, many individuals with disabilities were sent to medical shelters, which could split up families or place greater demands on the more resource-intensive services provided in medical shelters. The Red Cross, in partnership with the Department of Health and Human Services, has developed a shelter intake form to address this problem after future disasters. The form provides a series of questions for shelter workers in general shelters to ask incoming evacuees (see app. IV for the shelter intake form). The form will allow shelter managers to identify disabilities and determine whether the shelter can meet the individual’s needs, according to officials from the Red Cross and the Department of Health and Human Services. NCD officials told us that they think the form will help shelter managers make good decisions about whether individuals with disabilities can enter a shelter. The Red Cross distributed the form to its chapters along with guidance, but the form was often not used in Red Cross shelters after the California wildfires. Red Cross officials said that procedural changes like this often take time to be fully implemented in chapters. Officials from California also said that the form was not used in some cases because it took too long to fill out. 
“I have told Cajundome officials, medical staff, and Red Cross personnel about this problem. But I have been unsuccessful in getting it resolved. I have seen many frail people struggle to climb or descend the stairs in order to get medical attention, and I have personally seen two very exhausted men in wheelchairs almost decide to forego triage or other medical attention because of the difficulty of accessing this unit.” Other frequent concerns were that accessible shower and restroom facilities were not provided, and that individuals with training to serve disabled individuals were not permitted in Red Cross shelters. NCD and other disability organizations have reported that these problems and others existed prior to Katrina. Officials from the Red Cross national headquarters told us that the Red Cross is required to comply with the ADA and, therefore, its chapters must make plans and take actions so that individuals with disabilities can stay in Red Cross shelters. Red Cross officials said that the only individuals who are not able to stay at Red Cross shelters are those with serious medical needs, and that the organization does not have the ability to serve these individuals. They said that this policy was in place at the time of Katrina and Rita. Federal officials and disability advocates agreed that there are some individuals who are not able to stay at Red Cross shelters because their needs are too serious. Red Cross officials also said that the Red Cross does not own the facilities that it uses for sheltering in a disaster, and that not every building that is large enough to shelter a community and withstand a disaster was constructed in accordance with current accessibility standards. The Red Cross said that it surveys potential shelter facilities prior to disasters and that accessibility to people with disabilities is one of the factors considered when determining whether to use a facility as a shelter. 
The Red Cross has begun addressing concerns about the accessibility of its shelters by developing training for Red Cross employees and volunteers about meeting the needs of individuals with disabilities. The training presents information about Red Cross policies on accessibility and modification requirements for emergency shelters and provides examples of how Red Cross staff could address specific situations. It does not provide specific operational guidance for chapters about how to implement these requirements. The training, which was developed in collaboration with disability advocates, is required for Red Cross workers who have leadership roles in providing mass care after disasters. The training is not required for Red Cross volunteers, although it is recommended for key Red Cross volunteers who respond to disasters anywhere in the nation. In addition, the Red Cross told us that it has prepositioned items that will improve shelter accessibility for individuals with mobility impairments in key warehouses across the country. These items include 8,000 cots designed for easy transfers from a wheelchair, as well as commode chairs and shower stools. Red Cross headquarters officials told us that some local chapters are still not fully prepared to serve individuals with disabilities after disasters. These officials said that, although the Red Cross has taken steps to educate its employees and volunteers since Katrina, it has been difficult to encourage chapters to prepare for and implement accessibility policies. Red Cross headquarters officials said that Red Cross chapters have considerable autonomy within the organization. Officials from the Salvation Army, Southern Baptists, and Catholic Charities told us that these organizations have not made changes to their disaster services for the disabled, although they said that Katrina made them more aware of disability issues. 
We did not identify significant concerns with their services, however, largely because sheltering—which requires many modifications for individuals with disabilities—is not the focus of these organizations’ services. Instead, these organizations specialize in services such as feeding. One official from a disability organization indicated that meeting specialized dietary needs could sometimes be a disaster-response issue, but that it is a much lower priority than problems with sheltering. Voluntary organizations faced limitations in the scope of program coverage and communication difficulties while trying to obtain reimbursement under the Public Assistance program after Katrina. The Public Assistance reimbursement program was not designed for a disaster of Katrina’s magnitude because it only offered reimbursement to voluntary organizations in the disaster zone, even though evacuees dispersed throughout the country. FEMA has since changed its regulations so that after future disasters voluntary organizations serving evacuees outside of declared disaster zones can be reimbursed. Voluntary organizations also faced significant communication problems as they sought reimbursement, but FEMA has not taken steps to address these communication issues. Some voluntary organizations said that VALs—FEMA’s liaisons to the voluntary sector—could not provide them with information about the Public Assistance program or provided them with the wrong information. FEMA VALs do not receive training on Public Assistance program policies. In addition, we found that some of the information on FEMA’s Web site about the Public Assistance program was not presented in a user-friendly format that would help voluntary organizations successfully navigate reimbursement policies and procedures. As a result of these various communication problems, some organizations said that they never found out about reimbursement opportunities, or got so frustrated with the process that they chose not to apply. 
At the time of Hurricane Katrina, voluntary organizations were potentially eligible to be reimbursed for mass care expenditures only in areas that were within disaster zones, as declared by the President. Because of the scale of the disaster, however, hundreds of thousands of Gulf Coast residents evacuated to areas of the country outside of the declared disaster zone. Many of these evacuees were sheltered by small local voluntary organizations, such as churches, which were not eligible for reimbursement under Public Assistance policies at the time. On September 9, 2005—about 2 weeks after Katrina made landfall—FEMA issued a memorandum stating that the President had declared an emergency in states receiving Katrina victims. This permitted voluntary organizations in states across the nation that were sheltering evacuees from Katrina to receive reimbursement for mass care expenses. FEMA changed its regulations in July 2006 to allow eligible public and private non-profit entities outside of a declared disaster zone to receive reimbursement for mass care expenses, without the requirement for presidential declarations in each area where disaster victims are sheltered. These shifting reimbursement policies contributed to confusion among voluntary organizations about the Public Assistance program after the hurricanes. Many officials from voluntary organizations told us that changing reimbursement policies caused confusion and made it difficult for them to get reimbursed, and that in some cases they gave up on seeking reimbursement. Although FEMA and affected states took steps to publicize the Public Assistance program, many voluntary organizations did not receive key information. Voluntary organizations reported numerous problems, such as not learning about Public Assistance reimbursement opportunities, not being able to obtain information about how to apply, and not being able to obtain assistance with the application process. 
Clear and accurate communication was particularly important because many of the voluntary organizations that were providing services had not sought reimbursement for services before. Because organizations did not always receive needed information, some organizations either never found out about reimbursement opportunities, or got so frustrated with the process that they withdrew their applications. FEMA officials told us that they communicate Public Assistance policies to voluntary organizations after disasters in three ways. First, states and FEMA coordinate in convening meetings to make voluntary organizations aware of Public Assistance program reimbursement opportunities. Second, FEMA officials, including VALs, often respond to questions from applicants. Third, FEMA provides information about the Public Assistance program via its Web site. As described in FEMA’s December 2005 review of the response to Katrina, FEMA’s role in publicizing reimbursement opportunities is particularly important after large-scale disasters in which local governments are severely compromised or no longer functioning. There were several problems, however, with FEMA’s efforts to publicize and communicate about the Public Assistance program with voluntary organizations after the Gulf Coast hurricanes. First, because many of the organizations responding to Katrina were small and had not received Public Assistance funding in the past, they often did not find out about briefings on the program. As a result, they missed an opportunity to receive information about being reimbursed. Second, VALs—a key FEMA link to the voluntary sector—were not provided with information about the program. VALs are often in the field working with voluntary organizations providing disaster response services, and are potentially well-positioned to inform these organizations about Public Assistance opportunities and tell them where they can go for additional information. 
Yet many officials from local voluntary organizations told us that VALs had not informed them about the program, could not tell them where to get the needed forms, or had provided them with incorrect information. For example, one representative of a voluntary organization told us that VALs had not told the organization about reimbursement opportunities, and that when she found out about the program, the VAL could not tell her where to obtain more information. FEMA officials told us that the Public Assistance program has traditionally not worked closely with VALs—who are part of FEMA’s Individual Assistance program, as opposed to the Public Assistance program—to publicize the program. A Public Assistance official said that FEMA has publicized the program through its Web site and state efforts, and that there have been no efforts to work more closely with FEMA VALs since Katrina. FEMA officials told us that there is currently no training for VALs on Public Assistance policies. Several FEMA VALs told us that closer coordination between the program and FEMA VALs would help publicize the program. Finally, our review of FEMA’s Web site, and comments from a number of voluntary organizations, indicate that the Web site was not effective in providing these organizations with the information about Public Assistance opportunities after the Gulf Coast hurricanes. The two Public Assistance reimbursement opportunities that voluntary organizations told us they applied for—reimbursement for mass care and for facilities damage—include different eligibility and procedural requirements for voluntary organizations. Voluntary organization officials told us that they are not accustomed to working with technical policies, and that they needed a clear, step-by-step explanation of the Public Assistance opportunities and requirements. 
FEMA provided an online fact sheet regarding the opportunity for voluntary organizations to apply for Public Assistance reimbursement for mass care costs several weeks after Hurricane Katrina made landfall. However, the Web site does not include user-friendly information for voluntary organizations about opportunities for reimbursement for facilities damage. In addition, FEMA’s Public Assistance Web site does not include contact information for specific offices or officials who can help organizations develop reimbursement applications for either program. Hurricanes Katrina and Rita brought widespread devastation and challenged all levels of government and voluntary organizations. Using lessons learned from Katrina, FEMA and voluntary organizations have begun taking steps to improve mass care services for future disasters, such as replacing the National Response Plan with the National Response Framework. The NRF includes an enhanced role for FEMA in coordinating with voluntary organizations. FEMA VALs—employees who are FEMA’s primary link to the voluntary sector—will have primary responsibility for this role. However, the size of FEMA’s VAL workforce is not sufficient to meet FEMA’s NRF responsibilities for voluntary agency coordination. Having only one full-time VAL in each region who can work on the entire range of coordination issues with voluntary organizations can limit VALs’ ability to build successful relationships in their states, a critical element of fulfilling their responsibilities. In addition, VALs receive no role-specific training, and no training on a key federal program that reimburses voluntary organizations after disasters. If FEMA does not take steps to address these issues, it will encounter difficulties in meeting its NRF role of coordinating with voluntary organizations, and the nation is likely to see some of the same coordination problems that occurred after the Gulf Coast hurricanes. 
Under the NRF, NVOAD plays a critical role in sharing disaster information among national voluntary organizations, and FEMA plays an important role in supporting coordination among these organizations. After Hurricanes Katrina and Rita, timely information was important for organizations’ efforts to provide disaster services, but the daily conference calls hosted by NVOAD were an ineffective communication strategy. NVOAD’s executive director has indicated that improving the organization’s communication systems is a priority, but NVOAD has only two staff members and limited funding. Without FEMA’s assistance, NVOAD may not have the technical capacity to adequately assess and improve its communications systems. Unless NVOAD and FEMA work together to systematically assess and expand NVOAD’s information sharing efforts, NVOAD members are likely to face continued communication problems after disasters. FEMA has begun taking actions to improve the mass care services provided to the disabled after disasters, including actions to implement relevant provisions of the Post-Katrina Act. As FEMA noted in the Nationwide Plan Review, it is critical that federal, state, and local governments increase the participation of people with disabilities and subject-matter experts in the development and execution of plans and training. However, FEMA has generally not coordinated with NCD in its efforts to implement relevant provisions of the Act, as required by the Act. Unless FEMA begins working more closely with NCD, emergency planners may not fully incorporate this population’s needs into planning efforts. Small voluntary organizations played a key role in the mass care response to Katrina, but were often unfamiliar with how to navigate these federal reimbursement procedures. 
Although FEMA has posted the Public Assistance program policies for voluntary organizations on its Web site, the site does not provide key information about opportunities for voluntary organizations to be reimbursed for facilities damage in a user-friendly format. In addition, the Web site does not include contact information voluntary organizations could use to get more information. Unless FEMA provides information in a more user-friendly format, some voluntary organizations may be unable to take advantage of reimbursement opportunities after future disasters, which could discourage them from continuing to provide mass care services. To provide greater assurance that FEMA has adequate staff capabilities to support the agency’s enhanced role under the NRF in helping coordinate with voluntary organizations, we recommend that the Secretary of Homeland Security direct the Administrator of FEMA to take action to enhance the capabilities of its VAL workforce, such as: converting some Katrina VALs into full-time VALs able to work on the entire range of coordination issues with voluntary organizations; increasing the number of full-time VALs; or providing role-specific training to VALs, including providing them with information about Public Assistance opportunities and policies for voluntary organizations. To improve NVOAD’s effectiveness in meeting its NRF information-sharing responsibilities after disasters, we recommend that NVOAD assess members’ information needs, and improve its communication strategies after disasters. As part of this effort, NVOAD should examine how best to fund improved communication strategies, which may include developing a proposal for FEMA funding. To facilitate the implementation of improved communication strategies, NVOAD may want to consider strategies for increasing staff support for NVOAD after disasters, such as having staff from NVOAD member organizations temporarily detailed to NVOAD. 
In addition, in light of FEMA’s enhanced role under the NRF in helping coordinate the activities of voluntary organizations in disasters, we recommend that the Secretary of Homeland Security direct the Administrator of FEMA to provide technical assistance to NVOAD, as needed, as NVOAD works to improve its communication strategies. To ensure that the needs of individuals with disabilities are fully integrated into FEMA’s efforts to implement provisions of the Act that require FEMA to coordinate with NCD, we recommend that the Secretary of Homeland Security direct the Administrator of FEMA to develop a detailed set of measurable action steps, in consultation with NCD, for how FEMA will coordinate with NCD. To help ensure that voluntary organizations can readily obtain clear and accurate information about the reimbursement opportunities offered by the Public Assistance program, we recommend that the Secretary of Homeland Security direct the Administrator of FEMA to take action to make the information on FEMA’s Web site about reimbursement opportunities for voluntary organizations more user-friendly. This could include: developing a user-friendly guide or fact sheet that provides an overview of opportunities for reimbursement for facilities damage; and providing contact information for organizations to get more information about Public Assistance program opportunities. We provided a draft of this report to the Secretary of the Department of Homeland Security. DHS agreed with our recommendations. DHS provided technical comments only, which we incorporated as appropriate. We also provided a draft of relevant sections of this report to the Red Cross. The Red Cross provided several technical comments that we incorporated as appropriate. After reviewing the section of this report pertaining to NVOAD, the NVOAD Board President and Executive Director agreed with our findings and recommendation regarding improving information sharing after disasters. 
NVOAD added that it would be in favor of FEMA providing support to implement this recommendation through its Disaster Assistance Directorate. NVOAD’s comments are reprinted in appendix V. In addition, we provided the Chairman of NCD with a draft copy of the section of this report addressing issues with coordination between FEMA and NCD under the Post-Katrina Act. NCD agreed with the report’s findings and recommendation for this section. NCD’s comments are reprinted in appendix VI. We are sending copies of this report to the Secretary of the Department of Homeland Security, the Red Cross, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. Please contact me at (202) 512-7215 if you or your staff have any questions about this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other major contributors to this report are listed in appendix IV. As part of our body of work examining the response of the federal government and others to Hurricanes Katrina and Rita, we conducted a review of various issues pertaining to the role of voluntary organizations in providing mass care services. To obtain information about the rationale for, and implications of, the shift in the primary mass care role in the National Response Framework (NRF) from the Red Cross to the Federal Emergency Management Agency (FEMA), we reviewed letters between FEMA and the Red Cross documenting reasons for the shift in the primary agency role from the Red Cross to FEMA, the National Response Framework, information about the National Shelter System, the Post Katrina Emergency Management Reform Act, and information about the responsibilities of Voluntary Agency Liaisons. We also observed a demonstration of the National Shelter System. 
We interviewed officials from FEMA with responsibility for ESF-6, including FEMA Voluntary Agency Liaisons (VALs) in headquarters and in the field, and from national offices of voluntary organizations, including the Red Cross, National Voluntary Organizations Active in Disaster, the Salvation Army, the United Way, America’s Second Harvest, Catholic Charities, and the Southern Baptist Convention. We also interviewed emergency management officials from a selection of states that included Louisiana, Mississippi, and nine other randomly selected states throughout the country. To obtain information about NVOAD’s efforts to coordinate with the voluntary sector, we reviewed documents about its member services, internal governance, funding, and plans for the future. We also interviewed NVOAD’s former and current executive directors, chairman of the board, officials from eight of NVOAD’s member organizations, and FEMA officials and disaster response experts who have worked with NVOAD. We also interviewed an official who manages a Web site used to coordinate disaster relief by the United Nations High Commission for Refugees, and reviewed the Web site. To obtain information about the efforts of FEMA and major national voluntary organizations to improve services for the disabled since Katrina, we reviewed the Post-Katrina Emergency Management Reform Act (the Act), the Americans with Disabilities Act (ADA), and guidance released by the Justice Department about the ADA, and conducted document reviews with FEMA, the American Red Cross, and the Southern Baptist Convention. These included documents related to FEMA’s efforts to improve services for the disabled and respond to the Act’s requirements, such as the Target Capabilities and guidelines for accommodating individuals with disabilities. In addition, we reviewed a number of Red Cross documents related to services for individuals with disabilities, including training materials and a shelter intake form. 
We also interviewed officials from DHS, FEMA, the Red Cross, the Southern Baptists, the Salvation Army, the United Way, and Catholic Charities, and state-level emergency managers from Mississippi, Louisiana, and Texas. Our interviews with FEMA included individuals from the various initiatives required by the Act to consult with the National Council on Disability, and FEMA’s Disability Coordinator. In addition, we interviewed officials from the National Council on Disability, a number of disability advocacy organizations, such as the National Spinal Cord Injury Association, and several advocacy groups for the elderly, such as the American Association of Retired Persons. We also reviewed a survey of 95 Red Cross chapters that was conducted by the Disability Relations Group, an organization that conducts survey research on disability issues. Due to several methodological limitations—for example, we could not determine the response rate to the survey—we did not cite the results of this survey in the report. To collect information about how FEMA coordinated with small voluntary organizations through the Public Assistance program, we conducted document reviews of FEMA’s Public Assistance program, including FEMA Public Assistance policies, and documentation of changes to those policies, and reviewed information about the program on FEMA’s Web site. We also interviewed FEMA officials from the Public Assistance office, and several FEMA VALs. We spoke with representatives of approximately 10 local voluntary organizations that provided services in the Gulf Coast after the hurricanes, and the Director of Long-Term Recovery for the Louisiana Association of Nonprofits—a group that works with nonprofits that applied for reimbursement. In addition, we spoke with state government officials from Louisiana, Mississippi, and Texas, officials from Baton Rouge and Houston, and several disaster response experts familiar with Public Assistance. 
We reviewed reports on the response to the Gulf Coast hurricanes issued by the DHS Office of Inspector General, the House of Representatives, the White House, the Senate Committee on Homeland Security and Governmental Affairs, the National Council on Disability, the Appleseed Foundation, the American Association of Retired Persons, the International Association of Assembly Managers, and the Aspen Institute. In addition, this report drew from research conducted for GAO-06-712, which was released in June 2006. For that report, we conducted site visits to Louisiana, Mississippi, and Texas. We toured damage caused by the hurricanes in New Orleans, Louisiana, and Biloxi, Mississippi. Additionally, we toured the FEMA Joint Field Offices that were located in Baton Rouge, Biloxi, and Austin; local emergency operations centers in Baton Rouge and Austin; as well as distribution centers established by the Red Cross and the Salvation Army. On these site visits, we met with local chapters of the Red Cross, the Salvation Army, Catholic Charities, and the United Way. We held two additional discussion groups—one in Jackson, Mississippi, and one in Houston, Texas—to obtain the perspectives of local voluntary organizations that provided disaster relief on their efforts to be reimbursed under the Public Assistance program, and other issues. We spoke with key local emergency managers from East Baton Rouge, New Orleans, Austin, and Houston, as well as the State of Texas. We also spoke with FEMA Voluntary Agency Liaisons in Louisiana, Mississippi, and Texas. In addition, for the June 2006 report we conducted a discussion group at a Board of Directors meeting for the National Voluntary Organizations Active in Disaster that included representatives from the United Methodist Committee on Relief, America’s Second Harvest, and Lutheran Disaster Response. We also observed a National Voluntary Organizations Active in Disaster conference call in November 2005. 
These conference calls took place daily after the Gulf Coast hurricanes and included representatives from local and national voluntary organizations, as well as federal agencies, such as FEMA. We conducted this performance audit between January 2007 and February 2008, and work for the previous report, GAO-06-712, between October 2005 and June 2006, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Assist voluntary agencies in the development and promotion of state and local Voluntary Organizations Active in Disasters (VOAD) and other coalitions such as unmet needs/resource coordination committees for long-term recovery. Initiate and maintain a close working relationship between FEMA and voluntary agencies including soliciting participation of the voluntary agencies in preparedness activities such as training and exercises to improve response and recovery capacity. Provide technical advice to FEMA Regional and Area Offices, other federal agencies, and state emergency management officials regarding the roles and responsibilities of all VOAD members, and other voluntary agencies active in disaster and emergency situations. Assist and collaborate with other FEMA Regional and Area Offices staff, in the development and maintenance of emergency response and recovery plans to ensure that voluntary agencies’ capabilities, specifically as they relate to emergency assistance, mass shelter and feeding, donations management, and other voluntary agency disaster relief activities are recognized in the plans. 
Assist with the collection and dissemination of information concerning emergency incidents, including initial damage assessment, emergency response activities, and continued response and long-term recovery activities/plans of voluntary agencies. Assist and support the FEMA Individual Assistance officer on disaster operations in providing consultative support to voluntary agency leadership and encouraging collaboration among voluntary agencies. Provide or make available to the voluntary agencies information on the status of federal and state response and recovery programs and activities. Andrew Sherrill, Acting Director, and Scott Spicer, Analyst in Charge, managed this assignment and made significant contributions to all aspects of this report. Farahnaaz Khakoo and Danielle Pakdaman also made significant contributions. Additionally, Cindy Bascetta, Mallory Barg Bulman, Karen Doran, Tom James, Bill Jenkins, Gale Harris, Chuck Wilson, and Walter Vance aided in this assignment. In addition, Jessica Botsford assisted in the legal analysis, and Charlie Willson assisted in the message and report development. The American Association of Retired Persons. We Can Do Better: Lessons Learned for Protecting Older Persons in Disasters. Washington, D.C.: 2006. The Appleseed Foundation. A Continuing Storm: The Ongoing Struggles of Hurricane Katrina Evacuees. Minneapolis, Minnesota: August 2006. The Aspen Institute. Weathering the Storm: The Role of Local Nonprofits in the Hurricane Katrina Relief Effort. Washington, D.C.: 2006. Congressional Research Service. Federal Emergency Management Policy Changes after Hurricane Katrina: A Summary of Statutory Provisions. Washington, D.C.: December 2006. Congressional Research Service. Reimbursement of Local Private Nonprofit Organizations under the Stafford Act. Washington, D.C.: January 2006. Department of Homeland Security. Nationwide Plan Review: Phase II Report. Washington, D.C.: June 2006. 
Department of Homeland Security, Office of Inspector General. A Performance Review of FEMA’s Disaster Management Activities in Response to Hurricane Katrina. OIG-06-32. Washington, D.C.: March 2006. Federal Emergency Management Agency. DHS/FEMA Initial Response Hotwash Hurricane Katrina in Louisiana. New Orleans, Louisiana: February 2006. International Association of Assembly Managers. Mega-Shelter: Best Practices for Planning, Activation, Operations. Coppell, Texas: July 2006. National Council on Disability. The Impact of Hurricanes Katrina and Rita on People with Disabilities: A Look Back and Remaining Challenges. Washington, D.C.: Aug. 3, 2006. United States House of Representatives, Select Bipartisan Committee to Investigate the Preparation for and Response to Hurricane Katrina. A Failure of Initiative. Washington, D.C.: Feb. 15, 2006. United States Senate Committee on Homeland Security and Governmental Affairs. Hurricane Katrina: A Nation Still Unprepared. Washington, D.C.: 2006. The White House. The Federal Response to Hurricane Katrina: Lessons Learned. Washington, D.C.: February 2006. Disaster Assistance: Better Planning Needed for Housing Victims of Catastrophic Disasters. GAO-07-88. February 2007. Coast Guard: Observations on the Preparation, Response, and Recovery Missions Related to Hurricane Katrina. GAO-06-903. July 31, 2006. Child Welfare: Federal Action Needed to Ensure States Have Plans to Safeguard Children in the Child Welfare System Displaced by Disasters. GAO-06-944. July 28, 2006. Small Business Administration: Actions Needed to Provide More Timely Disaster Assistance. GAO-06-860. July 28, 2006. Disaster Preparedness: Limitations in Federal Evacuation Assistance for Health Facilities Should Be Addressed. GAO-06-826. July 20, 2006. Purchase Cards: Control Weaknesses Leave DHS Highly Vulnerable to Fraudulent, Improper, and Abusive Activity. GAO-06-957T. July 19, 2006. 
Individual Disaster Assistance Programs: Framework for Fraud Prevention, Detection, and Prosecution. GAO-06-954T. July 12, 2006. Expedited Assistance for Victims of Hurricanes Katrina and Rita: FEMA’s Control Weaknesses Exposed the Government to Significant Fraud and Abuse. GAO-06-655. June 16, 2006. Hurricanes Katrina and Rita: Improper and Potentially Fraudulent Individual Assistance Payments Estimated to Be between $600 Million and $1.4 Billion. GAO-06-844T. June 14, 2006. Hurricanes Katrina and Rita: Coordination between FEMA and the Red Cross Should Be Improved for the 2006 Hurricane Season. GAO-06-712. June 8, 2006. Lessons Learned for Protecting and Educating Children after the Gulf Coast Hurricanes. GAO-06-680R. Washington, D.C.: May 11, 2006. Hurricane Katrina: Planning for and Management of Federal Disaster Recovery Contracts. GAO-06-622T. Washington, D.C.: April 10, 2006. Hurricane Katrina: Comprehensive Policies and Procedures Are Needed to Ensure Appropriate Use of and Accountability for International Assistance. GAO-06-460. Washington, D.C.: April 6, 2006. Hurricane Katrina: Status of the Health Care System in New Orleans and Difficult Decisions Related to Efforts to Rebuild It Approximately 6 Months after Hurricane Katrina. GAO-06-576R. Washington, D.C.: March 28, 2006. Agency Management of Contractors Responding to Hurricanes Katrina and Rita. GAO-06-461R. Washington, D.C.: March 15, 2006. Hurricane Katrina: GAO’s Preliminary Observations Regarding Preparedness, Response, and Recovery. GAO-06-442T. Washington, D.C.: March 8, 2006. Emergency Preparedness and Response: Some Issues and Challenges Associated with Major Emergency Incidents. GAO-06-467T. Washington, D.C.: February 23, 2006. Disaster Preparedness: Preliminary Observations on the Evacuation of Hospitals and Nursing Homes Due to Hurricanes. GAO-06-443R. Washington, D.C.: February 16, 2006. 
Expedited Assistance for Victims of Hurricanes Katrina and Rita: FEMA’s Control Weaknesses Exposed the Government to Significant Fraud and Abuse. GAO-06-403T. Washington, D.C.: February 13, 2006. Investigation: Military Meals, Ready-to-Eat Sold on eBay. GAO-06-410R. Washington, D.C.: February 13, 2006. Statement by Comptroller General David M. Walker on GAO’s Preliminary Observations Regarding Preparedness and Response to Hurricanes Katrina and Rita. GAO-06-365R. Washington, D.C.: February 1, 2006. Federal Emergency Management Agency: Challenges for the National Flood Insurance Program. GAO-06-335T. Washington, D.C.: January 25, 2006. Hurricane Protection: Statutory and Regulatory Framework for Levee Maintenance and Emergency Response for the Lake Pontchartrain Project. GAO-06-322T. Washington, D.C.: December 15, 2005. Hurricanes Katrina and Rita: Provision of Charitable Assistance. GAO-06-297T. Washington, D.C.: December 13, 2005.
Using lessons from the 2005 Gulf Coast hurricanes, the federal government released the National Response Framework (NRF) in January 2008. This report examines (1) why the primary role for mass care in the NRF shifted from the Red Cross to the Federal Emergency Management Agency (FEMA), and potential issues with implementation, (2) whether National Voluntary Organizations Active in Disasters (NVOAD)—an umbrella organization of 49 voluntary agencies—is equipped to fulfill its NRF role, (3) the extent to which FEMA has addressed issues with mass care for the disabled since the hurricanes, (4) the extent to which major voluntary agencies have prepared to better serve the disabled since the hurricanes, and (5) the extent to which FEMA has addressed issues voluntary agencies faced in receiving Public Assistance reimbursement. To analyze these issues, GAO reviewed the NRF and other documents, and interviewed officials from FEMA, voluntary agencies, and state and local governments. FEMA and the Red Cross agreed that FEMA should be the primary agency for mass care in the NRF because the primary agency should be able to direct federal agencies' resources to meet mass care needs, which the Red Cross cannot do. The shifting roles present several implementation issues. For example, while FEMA has enhanced responsibilities for coordinating the activities of voluntary organizations, it does not currently have a sufficient number of specialized staff to meet this responsibility. NVOAD has characteristics that help it carry out its broad role of facilitating voluntary organization and government coordination, but limited staff resources constrain its ability to effectively fulfill its role in disaster response situations. NVOAD held daily conference calls with its members after Hurricane Katrina, but these calls were not an effective means of sharing information, reflecting the fact that NVOAD had only one employee at the time of Katrina. 
FEMA has begun taking steps in several areas to improve mass care for the disabled based on lessons learned from the Gulf Coast hurricanes. For example, FEMA hired a Disability Coordinator to integrate disability issues into federal emergency planning and preparedness efforts. However, FEMA has generally not coordinated with a key federal disability agency, the National Council on Disability, in the implementation of various initiatives, as required by the Post-Katrina Emergency Management Reform Act of 2006. The Red Cross has taken steps to improve mass care services for the disabled, but still faces challenges. For example, the Red Cross developed a shelter intake form to assist staff in determining whether a particular shelter can meet an individual's needs. However, Red Cross officials said that some local chapters are still not fully prepared to serve individuals with disabilities. Other voluntary organizations had not identified a need to improve services for individuals with disabilities, and we did not identify concerns with their services. FEMA has partially addressed the issues faced by local voluntary organizations, such as churches, in seeking Public Assistance reimbursement for mass care-related expenses after the hurricanes. At the time of the hurricanes, a key FEMA reimbursement program was not designed for a disaster of Katrina's magnitude, but FEMA has changed its regulations to address this issue. Local voluntary organizations also had difficulty getting accurate information about reimbursement opportunities. Key FEMA staff had not received training on reimbursement policies and sometimes did not provide accurate information, and some of the information on FEMA's Web site was not presented in a user-friendly format. FEMA has not addressed these communication issues.
DOD draws from a large number of suppliers in a global supply chain—in both the acquisition phase and throughout a system’s operational and sustainment phases—providing multiple opportunities for counterfeit parts to enter these systems. DOD contractors rely on thousands of subcontractors and suppliers, including the original component manufacturers that assemble microcircuits and the mid-level manufacturers subcontracted to develop the individual subsystems that make up a complete system or supply. Once contractors deliver a system to the military services, DLA can play a critical role in its sustainment. For example, DLA is primarily responsible for logistical support for more than 2,400 weapon systems across the military services. As part of its sustainment functions, DLA provides approximately 90 percent of the military’s repair parts. Also, as systems age, products required to support them may no longer be available from original component manufacturers, original equipment manufacturers, or their authorized distributors. These products could be available from independent distributors, brokers, or aftermarket manufacturers, but these suppliers often have less traceability to the original source. DOD has adopted industry standards and continues to participate in government and industry groups that develop international standards for the aerospace and automotive industries, such as SAE International’s G-19 Committee. Specifically, in 2009 DOD adopted SAE International’s Aerospace Standard 5553, Counterfeit Electronic Parts: Avoidance, Detection, Mitigation and Disposition (AS5553), which includes definitions of the sources of supply for parts and the associated risk; the standard was updated in 2013 (as shown in figure 1). According to DLA officials, DLA does not use AS5553 because it is generally applied to system integrators, but uses other aerospace standards to govern its procurement of microelectronic parts from individual suppliers.
These standards emphasize the importance of purchasing parts from original component manufacturers or authorized suppliers—when available—as the most effective method to avoid counterfeit parts. If purchasing a part from an independent distributor is necessary, the buyer should consider applying additional counterfeit mitigation methods, such as testing for product verification, based on the risk of the supplier and the criticality of the part. Over the past 6 years, GAO, Congress, and the Department of Commerce have issued reports on the existence of counterfeit parts in the DOD supply chain. In three reports since 2010, we have identified risks and challenges associated with counterfeit parts and counterfeit prevention at both DOD and NASA, including inconsistent definitions of counterfeit parts and poorly targeted quality control practices, as well as potential barriers to improvements to these practices. In 2012, we created a fictitious company and, through it, were able to report on the availability of suspect counterfeit electronic parts for purchase from companies selling military-grade parts on the Internet. In our prior reports, we made a total of five recommendations for improvements. DOD has taken action to implement three of these recommendations, but neither DOD nor NASA has yet implemented the remaining two: tracking the frequency with which parts with quality issues, including counterfeit parts, make their way into the supply chain, and making that information available to Congress. In 2012, Senate investigators reported that approximately 1,800 instances of suspect counterfeit parts were identified by DLA, defense contractors, and testers in the 2-year period from 2009 to 2010—before reporting suspect counterfeit parts in GIDEP became mandatory—and that the vast majority of those cases appeared to have gone unreported to DOD or criminal authorities.
To enhance DOD’s efforts to detect and avoid counterfeit electronic parts, Section 818 of the 2012 National Defense Authorization Act directed DOD to define suspect and confirmed counterfeit electronic parts, implement a risk-based approach to mitigate the risk of counterfeit electronic parts, and use GIDEP to report counterfeit incidents. It also included specific sections pertaining to DOD’s supply chain—requiring certain DOD contractors to enhance their systems to detect and avoid counterfeit electronic parts, and to report all counterfeit and suspect counterfeit electronic parts in GIDEP within 60 days. Finally, Section 818 required DOD to revise the DFARS so that the costs of rework or corrective action associated with a counterfeit electronic part supplied by certain contractors are not allowable under DOD contracts. Figure 2 shows the timeline of congressional and DOD actions relating to counterfeit parts from 2011 to 2014. DOD issued its Counterfeit Prevention Policy in April 2013. The policy aims to (1) prevent the introduction of counterfeit materiel at any level of the DOD supply chain, including electronic parts; and (2) provide direction for anti-counterfeit measures for DOD weapon and information systems acquisition and sustainment to prevent the introduction of counterfeit materiel. While Section 818’s requirements apply specifically to counterfeit electronic parts, the policy applies to all counterfeit materiel, not just electronic parts. DOD’s Counterfeit Prevention Policy provides the following definitions for counterfeit items: Counterfeit materiel: an item that is an unauthorized copy or substitute that has been identified, marked, or altered by a source other than the item’s legally authorized source and has been misrepresented to be an authorized item of the legally authorized source.
Suspect counterfeit: materiel, items, or products for which there is an indication, by visual inspection, testing, or other information, that they may meet the definition of counterfeit materiel. The Counterfeit Prevention Policy established roles and responsibilities for implementing DOD’s anti-counterfeiting strategy as well as GIDEP reporting for counterfeit parts. Three offices within the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics (USD AT&L) have primary responsibility for counterfeit parts. First, the Assistant Secretary of Defense for Logistics and Materiel Readiness is designated as the primary point of contact office with the primary responsibility to implement, monitor, and continually develop DOD’s anti-counterfeit strategy. Second, the Assistant Secretary of Defense for Research and Engineering, among other responsibilities, acts as the principal point of contact for GIDEP and is to determine and implement enhancements to GIDEP to expand its usefulness and robustness in anti-counterfeiting efforts in the DOD supply chain. Finally, the Director of Defense Procurement, Acquisition Policy, and Strategic Sourcing develops and modifies procurement policies, procedures, regulations, and guidance to support DOD’s Counterfeit Prevention Policy. DOD’s Counterfeit Prevention Policy requires DOD component heads to report all occurrences of suspect and confirmed counterfeit parts in GIDEP, DOD’s central reporting repository for suspect or confirmed counterfeit parts. Managed by DOD’s Defense Standardization Program Office, GIDEP is a web-based program that allows government and industry participants to share information on nonconforming parts, including but not limited to counterfeit parts (confirmed and suspected). Other types of information reported in GIDEP include notices that production of a part is about to be discontinued or that the attributes of parts, components, or materials have been changed by a manufacturer.
A part that is found to be nonconforming is not necessarily counterfeit, as counterfeit parts involve the intent to misrepresent the identity or pedigree of a part. DOD also uses the term “deficient” to mean the same thing as “nonconforming.” The Policy requires the reporting of all occurrences of suspect and confirmed counterfeit materiel to (1) appropriate authorities, nonconformance reporting systems, and GIDEP within 60 calendar days; and (2) DOD criminal investigative organizations and other DOD law enforcement authorities at the earliest opportunity. It further states that when critical materiel is identified as suspect counterfeit, components are to expeditiously disseminate a notification to other DOD components to maintain weapon systems’ operational performance and preserve the life or safety of operating personnel. According to several DOD officials we spoke with, GIDEP is intended to be an early warning system. DOD military services and components also use two other systems to report nonconforming parts—the Product Data Reporting and Evaluation Program (PDREP) and the Joint Deficiency Reporting System. In both systems, users can specifically categorize reported nonconforming parts as suspect counterfeit. Because DOD’s Counterfeit Prevention Policy mandates documenting all occurrences of suspect counterfeit parts in GIDEP, entries in these other systems do not fulfill the DOD reporting requirement. In May 2014, DOD revised the DFARS to require that contractors subject to cost accounting standards, when delivering electronic parts or supplies containing electronic parts, (1) report suspect and confirmed counterfeit electronic parts in GIDEP; and (2) have systems in place to detect and avoid counterfeit electronic parts. Additionally, the DFARS requires that prime contractors subject to the cost accounting standards flow down these requirements to their subcontractors, regardless of whether those subcontractors are subject to the cost accounting standards.
Prime contractors not subject to the cost accounting standards are not required to apply or flow down these requirements. The new counterfeit prevention policies supplement long-standing FAR contract quality requirements. Defense contractors and agencies are submitting counterfeit parts reports, but fewer reports have been submitted to GIDEP since DOD implemented its Counterfeit Prevention Policy and reporting requirements in 2013. For fiscal years 2011 through 2015, we found that 526 reports of suspect counterfeit parts were entered in GIDEP, over 90 percent of which were submitted by contractors. Figure 3 shows the number of reports submitted by contractors and government agencies in each fiscal year. Most of these reports were submitted in 2011 and 2012, when some DOD and contractor officials we spoke with said that congressional attention to counterfeit parts prompted contractors to examine their inventory and identify previously undetected counterfeit parts. In addition, there was an amnesty period in early fiscal year 2011 when suspect counterfeit parts reports could be submitted without naming a supplier, which DOD officials said led to temporarily increased reporting, mostly from distributors who have submitted few reports since. In more recent years, defense agencies and contractors we met with stated that they have encountered counterfeit parts less frequently in the DOD supply chain, in part because they are applying more stringent standards about which independent distributors they rely on for parts that cannot be acquired directly from the original manufacturer. While the names of suppliers can be identified in GIDEP reports, almost half of the 526 GIDEP reports in our analysis did not include the name of the supplier for the parts in question.
Further, the reports do not always indicate the original source from whom the supplier purchased the counterfeit part, which could be further down the supply chain and may or may not be known by the entity submitting the report. At our request, GIDEP staff categorized the suppliers identified in counterfeit parts reports issued in fiscal years 2011 through 2015 by their role in the supply chain, based on their personal knowledge and industry expertise, and we conducted our analysis based on these classifications. In the 296 reports that contained supplier information, 319 unique suppliers were named. Of these, 88 percent were classified by GIDEP staff as independent distributors and 10 percent as mid-level manufacturers. One independent distributor was named in 30 different GIDEP reports, all of which were submitted by one original equipment manufacturer within a 7-month period. GIDEP staff also classified the entities that submitted GIDEP reports by their role in the supply chain, and we based our analysis on those classifications. From fiscal years 2011 through 2015, we found that nearly 40 percent of suspect counterfeit parts reports—207 of 526—were submitted by independent distributors, with three companies submitting 103 reports. In addition, one-third of all suspect counterfeit GIDEP reports—178 of 526—were submitted by original equipment manufacturers, with 122 of these 178 reports submitted by two manufacturers, while government agencies submitted only 43 reports. DOD submitted 40 of the 43 government reports, with the Navy submitting more than half of these. See appendix II for additional details on reports by the role of the reporting entity in the supply chain. The Army, the Air Force, and MDA did not submit any suspect counterfeit GIDEP reports in this period. Air Force officials explained that they have relied on their contractors to submit reports because the contractors have the best knowledge of how and where the counterfeit part was procured.
Similarly, officials from the Army and MDA also said that their contractors have submitted suspect counterfeit GIDEP reports related to parts procured for Army and MDA products. Specifically, MDA officials said that their contractors submitted five of the GIDEP reports we reviewed, some of which involved parts detected due to concerns raised by MDA. DLA officials also noted that they encourage contractors and subcontractors to submit reports when counterfeit parts are encountered. However, according to DOD officials, most of DLA’s contractors are not large enough to be subject to cost accounting standards and therefore are not bound by the GIDEP reporting requirement in the DFARS. To address this, defense officials stated that DLA requires any company participating in one of its qualified supplier programs to report in GIDEP. Several aspects of DOD’s implementation of its mandatory reporting requirement for suspect counterfeit parts have limited GIDEP’s effectiveness as an early warning system to prevent counterfeit parts from entering the defense supply chain. First, DOD has not established an oversight function to ensure that defense agencies are reporting suspect counterfeit parts as required. As a result, for example, reporting practices at DLA do not conform to either DOD- or DLA-level reporting policies, and it is likely that DLA is not reporting in GIDEP all of the suspect counterfeit parts it detects. Second, there is no standardized process for establishing how much evidence is needed before reporting suspect counterfeit parts in GIDEP. We found that defense agencies and contractors have used different practices for determining when to report a part as suspect counterfeit, and DLA applies a significantly more stringent standard than the other defense agencies and contractors we reviewed. As a result, reports may not be submitted in a timely manner.
Third, defense agencies typically limit access to suspect counterfeit GIDEP reports to government agencies, so industry is not aware of the potential counterfeiting issues identified. DOD’s Counterfeit Prevention Policy does not include guidance about when limiting access to suspect counterfeit parts GIDEP reports is appropriate. Standards for Internal Control in the Federal Government call for information to be recorded and communicated to others, such as stakeholders who need it, to help the agency achieve its goals. These standards also state that control activities should be in place to help ensure that management’s directives are carried out, such as ensuring the completeness and accuracy of information processing. DOD has not provided adequate department-level oversight to ensure that all defense agencies are reporting in GIDEP as required and, as a result, it is likely that defense agencies—particularly DLA—are not reporting all of the suspect counterfeit parts they detect as suspect counterfeit in GIDEP. Standards for Internal Control in the Federal Government call for reviews by management at the functional or activity level to compare actual performance to planned or expected performance and analyze significant differences. The completeness and timeliness of GIDEP reporting rely on DOD ensuring that reporting practices align with the established Counterfeit Prevention Policy. According to a senior USD AT&L official, GIDEP staff do not play a role in overseeing and monitoring whether defense agencies and contractors are meeting reporting requirements. DOD policy does not provide for an oversight role to ensure that reporting of counterfeit parts is tracked. The senior USD AT&L official explained that the department has taken a decentralized approach to implementing GIDEP reporting requirements, depending on the components to provide additional guidance and oversight.
While defense agencies generally each have a central point person overseeing use of GIDEP, DOD does not oversee GIDEP reporting at a department-wide level. According to DOD’s Counterfeit Prevention Policy, three entities within USD AT&L share responsibilities for DOD’s anti-counterfeiting efforts. The senior USD AT&L official stated that certain GIDEP oversight functions, such as oversight of reporting by DOD agencies, may fall between the responsibilities of these organizations. Moreover, defense officials have not analyzed or provided oversight of defense agencies’ compliance with GIDEP reporting requirements, monitoring only whether agencies have established their own policies. A senior USD AT&L official responsible for counterfeit prevention policy told us that he was not aware of DLA’s low level of reporting and had not analyzed the reasons for it, despite DLA’s central role in procuring parts for DOD. Specifically, this official said that USD AT&L has not conducted analysis showing that DLA submitted very few reports in recent years. As a result of DOD’s decentralized approach and lack of department-level oversight, the department cannot ensure that GIDEP data accurately reflect the extent to which suspect counterfeit parts have been identified by defense agencies. DLA plays a central role in procuring parts to sustain existing weapon systems. Navy and Air Force officials we spoke with noted that they do not typically purchase parts directly from suppliers, so they would expect counterfeit parts to be reported by their defense contractors or DLA. However, DLA submitted only nine suspect counterfeit GIDEP reports in fiscal years 2011 through 2015, with none submitted in 2014 and just one in 2015. DLA officials described instances where parts were identified as potentially suspect counterfeit, but these were reported in GIDEP as nonconforming parts, not suspect counterfeit.
While this step provides GIDEP users with notice that parts did not meet contract specifications and may present safety problems, it does not inform users about potential counterfeiting concerns. In another example, in 2012, the Air Force did not report a debarred subcontractor in GIDEP for supplying counterfeit electronic components, even after the investigation was made public. Although Air Force officials stated that the Air Force’s prime contractor submitted related suspect counterfeit GIDEP reports about some parts, these reports did not include the name of the debarred subcontractor; rather, they listed only the independent distributor through which the parts were sold. Without a GIDEP report that included critical information about the original source of the suspect counterfeit parts, other defense agencies and contractors may not have the information necessary to raise their awareness of the problem or to check whether other distributors also sold parts from that same source. Further, DOD officials told us that not all suspect counterfeit parts that are reported to other data sources are reported in GIDEP as suspect counterfeit. Specifically, PDREP—the Navy’s system for reporting supplier performance and quality information, used across several defense agencies—allows the entity that submits a report about a nonconforming part to identify the part as suspect counterfeit. According to DOD policy, it is then the responsibility of a specific agency identified in PDREP to determine whether to report in GIDEP, which is possible through an automated function within PDREP. We found 268 PDREP reports labeled as suspect counterfeit parts by various DOD entities between October 2010 and August 2015. However, only 10—or 4 percent—were clearly documented as having been reported in GIDEP.
While defense agency and contractor officials explained that there are instances where an initial suspicion of counterfeiting is quickly proven incorrect, defense officials also stated that at least some parts identified in PDREP as potentially counterfeit should be reported in GIDEP but are not. Navy officials noted that this is particularly common when DLA is responsible for resolving the claims. For example, DLA created a parts quality report in PDREP, coded the report as suspect counterfeit, and tested the parts at its product testing and evaluation program. The parts failed visual and dimensional test requirements but were not reported in GIDEP as suspect counterfeit. DLA was the agency responsible for determining whether to report in GIDEP for 148 of the 268 PDREP reports we reviewed that were labeled as suspect counterfeit. However, DLA submitted only one of the related GIDEP reports we identified. In our review, we found that another source of information about suspect counterfeit parts and their suppliers, ERAI, had significantly more suspect counterfeit reports than GIDEP, further calling into question GIDEP’s completeness. ERAI—a company that monitors, investigates, and reports issues affecting the global electronics supply chain—provides paying members from government and industry with access to a database of reports on nonconforming parts and their suppliers. According to ERAI, most of its members are independent brokers, but its membership also includes original equipment manufacturers and government users. ERAI’s data show that its users report more suspect counterfeit parts than are reported in GIDEP. For example, from 2011 through 2015, over 4,000 reports of suspect counterfeit electrical, electronic, and electromechanical parts were submitted to ERAI, more than seven times the number of suspect counterfeit reports for all types of parts submitted in GIDEP during the same period.
ERAI and agency officials largely attribute this high number to the fact that reports in ERAI are submitted anonymously. While ERAI includes reports about commercial and defense industry suppliers, an ERAI official noted that both sectors often rely on the same pool of suppliers. There is no standardized process for establishing how much evidence is needed before reporting suspect counterfeit parts in GIDEP, and DLA applies a more stringent standard than the other defense agencies and contractors we reviewed. We found that when suspect counterfeit parts are discovered, defense agencies and contractors generally take additional steps to establish reasonable certainty that parts are counterfeit before submitting suspect counterfeit GIDEP reports, although practices for making this determination differ and therefore take varying amounts of time. We found that some of the defense agencies and contractors we reviewed have practices for reporting parts as suspect counterfeit in GIDEP within the 60-day reporting period, but that DLA’s practices can take significantly longer to complete. According to the GIDEP operations manual, reports should be submitted no more than 60 days from the time of discovery to preclude further loss to government and industry users. In addition, the objective of GIDEP reports, including suspect counterfeit parts reports, is to preclude the integration of these items into government and industry systems and inventory. Moreover, DOD’s 2013 Counterfeit Prevention Policy states that it is DOD’s policy to make information about counterfeiting accessible at all levels of the DOD supply chain as a method to prevent further counterfeiting. DOD and industry officials noted that timely reporting of suspect counterfeit parts to GIDEP is critical to using the system as an early warning system.
For example, one USD AT&L official stated that DOD’s goal for GIDEP reporting is to get information about suspect counterfeit parts out as early and as far down the supply chain as possible. However, DOD and industry officials told us they were concerned that GIDEP could not be relied upon to meet this goal if suspect counterfeit parts reports were not made available to industry in a timely and comprehensive manner. Defense agencies and contractors have varying practices for establishing reasonable certainty after a suspect counterfeit part is discovered. Some DOD officials stated that confirming whether a part is indeed counterfeit requires (1) verification by the manufacturer of that part, (2) completion of a criminal investigation, or (3) comprehensive testing that uncovers multiple strong physical counterfeit indicators. Figure 4 illustrates varying practices for determining whether to submit a GIDEP report. Some defense agencies and contractors have established practices that allow them to meet GIDEP’s 60-day reporting requirement. For example, one defense contractor told us it issues a GIDEP report as soon as it has any indication that a part may be counterfeit, and another defense contractor told us it conducts routine laboratory tests on any suspect counterfeit parts, which it said can usually be completed within GIDEP’s 60-day reporting period. The Naval Surface Warfare Center Crane has established a standardized process for evaluating parts suspected of being counterfeit. Specifically, it conducts preliminary engineering investigations to confirm that a part is suspect counterfeit, conducts detailed analysis to calculate scores that measure how certain its analysts are of their suspicions, and then submits GIDEP reports if appropriate. Navy officials explained that they use a scoring system that weights different types of tests and other information differently, depending on their reliability in determining whether a part is counterfeit.
The scoring system totals an overall point value for an assessment, and officials said they report to GIDEP once the assessment reaches a certain threshold. Navy officials stated that, in general, this process can take from a week to a month, so they can generally meet GIDEP’s 60-day reporting requirement. In contrast, DLA officials said that when DLA first identifies a part as suspect counterfeit, it initially submits a GIDEP report identifying the part simply as nonconforming—rather than suspect counterfeit—and with access limited to government use only. It then refers the allegation for a full criminal investigation and, if the investigation confirms that a part is counterfeit, DLA may amend or initiate a new GIDEP report that labels it as counterfeit—however, these investigations can take 5 to 7 years. Some defense agency officials said that early GIDEP reporting could interfere with criminal investigations and that reporting needs to wait until indictments are completed so as not to jeopardize the investigation. Officials from the Defense Criminal Investigative Service described certain instances when law enforcement activities may delay releasing suspect counterfeit GIDEP reports, including cases where a covert investigation is underway or there are activities related to a grand jury. However, they noted that these instances are uncommon and that disseminating information takes priority in the event that a suspect counterfeit part poses a health or safety risk. Defense Criminal Investigative Service officials stated that they follow DOD’s written procedures for coordination with DOD components. DLA’s practice of not reporting parts to GIDEP as suspect counterfeit until a full investigation has been completed does not align with DLA’s policies, which require all instances of suspect and confirmed counterfeit parts to be documented in GIDEP. According to DLA, 19,000 personnel are trained annually on DLA’s counterfeit prevention procedures.
However, one DLA official we spoke with acknowledged that although he was trained on the DLA procedures that require reporting any suspect parts, he disagreed with the policy and believed that GIDEP should contain only confirmed counterfeit parts data. Some defense contractors are reluctant to allege that a supplier has delivered counterfeit parts without establishing certainty, due to concerns about damaging relationships with suppliers, up to and including the possibility of being sued if their claims damage the supplier’s business. While the 2012 National Defense Authorization Act included language protecting from civil liability contractors that made a reasonable effort to determine whether a part was counterfeit or suspect counterfeit, contractors we spoke with differed on the extent to which they believe those protections are adequate to protect their financial interests. Some contractors stated that they believe reporting a suspect counterfeit part in GIDEP may leave the contractor open to legal action if the part is determined to be genuine. To address similar concerns, DOD officials said GIDEP established an amnesty period in late 2010 during which suspect counterfeit parts reports did not need to include the name of the supplier. Although this temporarily increased reporting, some contractor officials told us that reports without supplier information are difficult to act upon because this information is often necessary for identifying parts in their inventories. As an alternative, contractor officials said it would help alleviate these concerns if GIDEP reporting provided anonymity for the entity submitting the report, either by having the government submit the report on its behalf or by masking the name of the submitter in the publicly released report.
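The threshold-based scoring process that Navy officials described can be illustrated with a minimal sketch. The indicator names, weights, and reporting threshold below are hypothetical, since the report does not disclose the actual scoring criteria the Naval Surface Warfare Center Crane uses.

```python
# Hypothetical sketch of a weighted, threshold-based scoring system for
# deciding when to file a suspect counterfeit GIDEP report, loosely
# modeled on the Navy approach described in this section. All indicator
# names, weights, and the threshold are invented for illustration.

# Each counterfeit indicator carries a weight reflecting how reliable it
# is as evidence of counterfeiting (hypothetical values).
INDICATOR_WEIGHTS = {
    "marking_permanency_failure": 10,  # marking rubs off under solvent test
    "resurfacing_evidence": 25,        # signs of blacktopping or remarking
    "die_mismatch_xray": 40,           # internal die differs from a known-good part
    "date_code_inconsistency": 15,     # date code conflicts with production records
    "dimensional_out_of_spec": 10,     # package dimensions out of tolerance
}

REPORT_THRESHOLD = 50  # total score at which a GIDEP report would be filed


def assess_part(observed_indicators):
    """Sum the weights of the observed indicators and decide whether to report."""
    score = sum(INDICATOR_WEIGHTS[name] for name in observed_indicators)
    return score, score >= REPORT_THRESHOLD


score, should_report = assess_part(["resurfacing_evidence", "die_mismatch_xray"])
print(score, should_report)  # prints: 65 True
```

The design point is that weaker indicators (a failed marking test alone) do not trigger a report, while a combination of strong indicators crosses the threshold, which is consistent with the report's description of weighting tests by their reliability.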
Air Force and GIDEP officials told us that contractors involved in developing products that will be launched or deployed into space have worked with GIDEP to establish a separate, private system for early reporting of nonconforming parts based on limited information, due to the greater risk associated with incorporating counterfeit or faulty parts in space systems. Some defense officials we spoke with noted that a tiered reporting system—for instance, indicating that an early report is based on preliminary information while subsequent updates could be based on a more complete investigation—would increase comfort with reporting suspect counterfeit parts based on limited testing information. Standards for Internal Control in the Federal Government state that management should establish procedures that are effective in accomplishing agency objectives. In the absence of such procedures for determining when to submit suspect counterfeit parts reports in GIDEP, DOD is unable to ensure that the information is submitted in a timely manner, undermining GIDEP’s usefulness as an early warning system. Industry was the biggest user of suspect counterfeit part GIDEP reports issued in fiscal years 2011 through 2015, with industry users accounting for 96 percent of all suspect counterfeit GIDEP report downloads. Similarly, as noted previously, 90 percent of the reports were submitted by industry. However, industry officials expressed frustration that access to government-submitted GIDEP reports is often limited to government agencies. As a result, contractors are not able to read those reports and take responsive actions. We found that most of the suspect counterfeit GIDEP reports submitted by government agencies were not available to industry GIDEP participants. Specifically, 29 of 43 suspect counterfeit GIDEP reports submitted by government agencies in fiscal years 2011 through 2015 were issued with limited access—only viewable by government agencies. 
In addition, while DOD has other internal information systems that capture information about suspect counterfeit parts, such as PDREP and a department-wide notification system, none of these is fully available to industry participants in the supply chain. Industry officials told us that, while the quality of GIDEP reports varies, they depend on GIDEP reports because they generally include the most robust information about counterfeit parts among the data sources available to them. For instance, industry officials stated that it is very helpful to know the source that supplied a counterfeit part in order to assess the potential impact of a counterfeit part in the supply chain, but this information is generally not available from other sources. Counterfeit parts GIDEP reports are most useful if they are made available as early as possible, so that contractors can take necessary actions before they purchase the same parts themselves. Standards for Internal Control in the Federal Government call for information and communications to be recorded and communicated to others, such as stakeholders who need it, to help the agency achieve its goals. DOD’s Counterfeit Prevention Policy does not include guidance about when limiting access to suspect counterfeit parts GIDEP reports is appropriate. While industry officials told us that individual suspect counterfeit GIDEP reports are useful, they also said it is difficult to analyze GIDEP’s data due to several limitations. For example, they said that the GIDEP information system is more than 15 years old and relies on antiquated technology. In addition, the system is primarily based on downloads of full documents, which limits users’ ability to search and analyze reports. According to a senior USD AT&L official, GIDEP staff conduct their own analysis but do not disseminate all of this information outside their office. 
GIDEP officials are developing plans to modernize the GIDEP system to accommodate potential access by allies and foreign partners and to address these known limitations. According to the head of GIDEP, several improvements are needed, including updating the website, improving search functions, and improving the capability to extract data for analysis. However, this official stated that no formal decisions have been made as to whether to fund any of these improvements. In addition, a proposed FAR rule, if finalized, would expand the GIDEP reporting requirement to all government agencies’ contractors and would require reporting of all nonconforming parts. However, because GIDEP staff review each submitted report individually, concerns exist about whether GIDEP staff and technology could handle a large surge in reporting. DOD’s Counterfeit Prevention Policy depends on coordinated action by both DOD agencies and prime contractors. The DFARS requires prime contractors subject to the cost accounting standards to have anti-counterfeit systems in place; however, the guidance and criteria for DOD to assess these systems are still under development. Consequently, defense contractors have expressed uncertainty about what steps are required of them and which approaches will be deemed adequate by DOD. DOD is working with industry to develop and clarify these standards to avoid and detect counterfeit electronic parts within the defense supply chain. Until the final guidance on how DOD will assess contractors’ systems for detecting and avoiding counterfeit electronic parts is in place, DOD will be unable to fully ensure that these anti-counterfeit systems address what is required in the DFARS for counterfeit electronic parts. DOD’s Counterfeit Prevention Policy depends on coordinated action by both DOD agencies and prime contractors. 
Consequently, the regulations and policies lay out requirements for both public and private entities involved in defense contracting, as well as DOD’s responsibilities for overseeing these requirements. Section 818 of the National Defense Authorization Act of 2012 required DOD to implement a program to enhance contractor detection and avoidance of counterfeit electronic parts. Section 818 required that the DOD program apply not only to its prime contractors subject to the cost accounting standards, but also to all their subcontractors, regardless of whether the subcontractors were subject to the cost accounting standards. DOD relies heavily on contractors to prevent the introduction of counterfeit materiel into the DOD supply chain, and oversight of these contractor programs to detect and avoid counterfeit electronic parts was delegated to DCMA. Additionally, Section 818 deems the costs of counterfeit electronic parts and suspect counterfeit electronic parts, including any rework or corrective action required to remedy their use, unallowable, providing incentives for contractors to ensure that they detect counterfeit and suspect counterfeit electronic parts. When delegated by the contracting officer, DCMA quality assurance and contracting staff oversee a prime contractor’s purchasing systems, which can include reviews of the contractor’s counterfeit electronic part detection and avoidance system. During these reviews, DCMA staff examine 12 categories of prime contractor compliance—such as reporting and quarantining suspect counterfeit and counterfeit electronic parts—and ensure that contractors have effective counterfeit detection and avoidance systems. DCMA’s initial efforts to assess the status of contractors’ counterfeit detection and avoidance systems have begun to identify areas that might require increased oversight. 
For example, DCMA data as of fall 2015 indicate that approximately 80 percent of suppliers it reviews have processes in place for maintaining part traceability and that approximately 70 percent have processes in place for reporting and quarantining counterfeit or suspect counterfeit electronic parts. DCMA continues to incorporate compliance with counterfeit detection and avoidance in its contractor purchasing system review instruction, but has not yet reviewed any individual contracts for compliance with the counterfeit electronic parts requirement since it was imposed in 2014. Based on our discussions with selected contractors, we found that each of the seven contractors has systems in place to detect and avoid counterfeit parts. These included actions such as screening GIDEP and other data sources to identify potential threats of counterfeit parts, using risk analyses to assess the appropriate level of scrutiny for a part, and narrowing the list of suppliers being treated as authorized sources of parts. For at least three of the selected contractors, these business processes predated the DFARS requirement that they have such processes. However, all seven contractors have provided some degree of input to DOD on changes to the laws or additional clarity in guidance that they would like to see. Collaboration between DOD and industry on proposed rules and policies for the detection and avoidance of counterfeit parts has played an important role in ensuring effective action on both sides. DOD has hosted numerous meetings and interactions between government and industry over the last four years concerning the 2012 National Defense Authorization Act language on counterfeit electronic parts and the development of rules and regulations surrounding it. 
These have included public meetings held by DOD to obtain views on the rulemaking, briefings with DCMA on the adequacy of plans for the detection and avoidance of counterfeit parts, and counterfeit parts enforcement forums with the Departments of Justice and Homeland Security. DOD officials stated that these meetings are valuable for crafting DOD policy and setting industry expectations. The contractors we spoke with had all participated in these interactions in some capacity, either directly or through an industry organization, and often both. Some contractors provided both positive and negative views on DOD’s engagement, but their responses generally suggest that DOD was listening to industry and responding as appropriate. Despite contractors’ efforts to work with DOD in developing and commenting on the rules and regulations, several have expressed concern about the lack of clear criteria on elements such as traceability and testing. They generally indicated that the lack of clear assessment criteria from DCMA on what steps prime contractors should take to meet the requirements in each of the 12 categories complicated their efforts to ensure that their counterfeit detection and avoidance systems meet DFARS requirements. For example, one contractor stated that it would like to use third-party testing of certain electronic parts, but without clear guidance from DCMA on whether this activity would meet certain counterfeit avoidance requirements and on which test facilities may be approved for use, it is harder to invest in appropriate solutions. The DFARS states that DOD is to review the acceptability of contractors’ counterfeit electronic part detection and avoidance systems. However, according to DCMA officials, DCMA’s current guidance is intended to provide flexibility for prime contractors on how they can address each of the 12 categories on which they will be assessed, rather than identify specific procedures. 
During our review, DCMA indicated that it is revising its January 2014 instruction on contractor purchasing system reviews to include criteria for assessing counterfeit detection and avoidance systems. In addition, DCMA is updating its counterfeit mitigation instruction to address counterfeit detection and mitigation for DCMA analysts to use while conducting their reviews. Standards for Internal Control in the Federal Government state that for an entity to run and control its operations, it must have relevant, reliable, and timely communications relating to internal as well as external events. Both the instruction and the guidebook are intended to assist the DCMA workforce in adequately assessing contractor performance against the requirements, but they do not provide clarification for industry. In contrast to DCMA, clarification for industry on how to effectively meet the DFARS criteria has been developed elsewhere in DOD to support counterfeit detection and avoidance in high-risk programs. Specifically, MDA provides a checklist to its contractors that goes into greater detail and provides clarity on what MDA will assess as an adequate counterfeit detection and avoidance system. For example, DCMA’s checklist generally asks about the flow down of counterfeit avoidance and detection requirements to subcontractors, while MDA’s checklist provides the specific steps required to verify flow down. Figure 5 contrasts DCMA’s and MDA’s worksheets for evaluating contractors’ counterfeit avoidance and detection systems. Without more detailed clarification on how to meet DCMA criteria, such as that presented in the MDA checklist, contractors cannot be certain how to implement systems that will pass DCMA review. 
Each of the seven selected contractors we met with told us, and we confirmed through selected contract review, that it was required to flow down—or ensure its subcontractors’ contracts included—the DFARS clause requiring subcontractors to have systems to detect and avoid counterfeit electronic parts. These contractors each explained their policies or processes for flowing down these requirements and told us that they use a risk-based approach to oversee subcontractors, including those at lower tiers. These risk-based approaches varied from one contractor to another, but generally involved a preference for purchasing from original part manufacturers or other reliable suppliers, such as those authorized by the original part manufacturers; applying greater scrutiny to parts purchased from other sources; and expecting or requiring their subcontractors to do the same. However, we found disparities in the interpretation of this DFARS clause, which flows the counterfeit electronic parts requirements down to subcontractors. Specifically, although three of the contractors we spoke with identified no difficulties in effectively passing down these requirements to their subcontractors, four others discussed varying degrees of resistance by their subcontractors, who believed that the DFARS clause did not apply to them. One of these contractors was more specific, noting that many of its suppliers believe that the DFARS clause only flows down to subcontractors covered by the cost accounting standards. In follow-up, the contractor stated that the contract language is generally clear about the requirements for suppliers, but that the focus on prime contractors covered by cost accounting standards can be misleading. 
Another contractor noted that it had experienced few challenges implementing these requirements with its subcontractors, but that it believed other prime contractors and DOD program offices have interpreted the flowdown clause to require the prime contractor to itself review the subcontractor’s plan for the detection and avoidance of counterfeit electronic parts, independent of DCMA review. In addition to confusion associated with flowing down the counterfeit electronic parts requirements to subcontractors, the contractors we spoke with raised some concerns about the coverage of the DFARS counterfeit electronic parts clause requirement. On the one hand, they expressed concern that gaps in the coverage of the counterfeit parts requirements might be increasing the risk of introducing counterfeit electronic parts into the DOD supply chain. Two contractors stated that the risk of counterfeit parts is largely associated with suppliers that are not covered by cost accounting standards, and that although flowing down these requirements from prime contractors addresses some of this risk, many equally risky subcontractors are suppliers to prime contractors that are not covered by cost accounting standards and therefore are not subject to the DFARS clause or its requirement to flow it down to subcontractors. On the other hand, some contractors noted that commercial suppliers, whom the prime contractors consider low-risk, may refrain from working with the government because of these requirements. These contractors told us that the DFARS requirements increase the difficulty of working with commercial suppliers, for whom government contracts represent a small percentage of their overall revenue. They further stated that the costs and burdens of implementing DOD’s Counterfeit Prevention Policy, particularly for commercial-off-the-shelf items, outweigh the potential sales to the government. 
In addition to reporting to GIDEP, DOD and the defense industry have adopted and are developing additional methods to detect counterfeit parts and keep them from entering the DOD supply chain. They are working to improve testing to detect counterfeit parts, implementing tools to improve the traceability of electronic parts, sharing information with other government agencies, and improving purchasing processes. These counterfeit detection efforts are critical when the option to procure parts from an authorized source is not available. DOD policies and regulations, and international standards, document the importance of detection efforts, such as testing and authenticating parts, but emphasize that purchasing parts directly from an original component manufacturer or authorized supplier, whenever possible, is the best strategy to avoid counterfeit parts. According to a few officials from the defense industry and DOD, despite the challenges in adopting effective practices and methods to detect counterfeit parts in the U.S. defense supply chain, the United States is ahead of other countries and international companies in addressing this issue. Industry and government are working collaboratively as part of an international committee to develop uniform standards for testing counterfeit electronic parts. In 2010, SAE International, an organization that develops international standards for the aerospace and automotive industries, established a subcommittee to develop uniform test method standards for detecting counterfeit electrical, electronic, and electromechanical (electronic) parts. This subcommittee is part of the broader SAE International G-19 committee that previously issued standards addressing the risk of counterfeit parts. Representatives of the committee include officials from DOD agencies such as DLA and the Navy, defense contractors, test labs, industry groups, and academia. 
According to SAE International, its testing standard will include guidance for determining a part’s counterfeit risk, as well as separate documents initially addressing a combination of ten specific test methods for various types of electronic parts counterfeiting. The types of tests include external visual inspection, radiological inspection, x-ray fluorescence, and electrical testing. Once the guidance is issued, it is intended to be applied across the supply chain to include independent testing facilities, distributors and original equipment manufacturers with in-house testing capabilities, and other prime contractors or high-level subcontractors that can flow down the test requirements to their subcontractors. The committee plans to finalize the standard in 2016. The defense industry has also led efforts to evaluate and improve the quality of testing of suspect counterfeit parts performed by industry, government, and university labs. To address industry and government concerns about testing quality, one prime contractor developed a series of “round robin” tests for labs to compare and assess the quality of their testing with other labs. For the assessment, the contractor sent samples of defective parts to both the contractor’s internal testing facilities and independent labs where it outsources testing to determine their accuracy in identifying counterfeit parts. After the test results are compiled, participants receive their results along with other participants’ results for comparison, though the names of the other participants are kept confidential. The testing program has expanded to include commercial test labs, contractor in-house labs, distributor in-house labs, government labs, and university labs. The results of these evaluations of testing facilities have been presented to the G-19 committee to inform the development of its test methods standard for counterfeit parts. 
In addition, NASA officials said that their labs have participated in the round robin testing as part of their efforts to maximize their in-house counterfeit testing capabilities, due to a lack of confidence in external test labs. To support DOD’s counterfeit detection efforts, DLA has internal testing capabilities to detect counterfeit parts purchased across DOD. DLA is responsible for purchasing replacement and support parts for the services, including providing over 90 percent of the military’s repair parts, and views its counterfeit prevention efforts as playing a critical role in keeping counterfeit parts from entering DOD systems during the operations and support phase of a system. To test these parts for quality issues and non-conformances, including testing for suspect counterfeit parts, DLA has product test centers at two locations to conduct three types of tests: mechanical; electronic; and analytical and chemical. DLA’s test centers conduct about 13,000 tests a year and completed over 58,000 total tests from fiscal year 2011 through March 2015, of which 8,925 were specifically for electronic parts. DLA test results do not specifically categorize negative test results as suspect counterfeit, but according to DLA officials, test results may be used for further investigation, which could result in a GIDEP report or a legal action against the supplier. DLA parts testing can be initiated for multiple reasons, such as responding to a field complaint or identified discrepancy, random stock sampling, targeted testing of specific vendors with no historical data or past poor performance, or testing of new vendors. DLA officials noted that the test centers have adopted new methods to address evidence of counterfeit parts. For example, a DLA electronic test center created a visual inspection checklist in December 2013 for testing microcircuits to identify defects that could indicate that a part had been previously used or re-marked, indicating tampering. 
The Naval Surface Warfare Center in Crane, Indiana, is another facility leading efforts to mitigate the risk of counterfeit parts. It has been providing testing and other support for preventing counterfeit parts from entering Navy systems since 2009. Naval Surface Warfare Center Crane can perform at least 24 types of electrical and physical tests to authenticate and analyze parts to detect counterfeits and has conducted investigations on over 3,000 parts. Naval Surface Warfare Center Crane works with DOD investigative agencies, the intelligence community, and suppliers to acquire and analyze newly discovered forms of counterfeiting in order to adapt its detection techniques. For example, Naval Surface Warfare Center officials cited an emerging threat whereby clones—exact copies of electronic parts not supplied by the original equipment manufacturer—are being reverse-engineered from stolen intellectual property. In addition to testing parts and working to identify emerging counterfeit threats, Naval Surface Warfare Center Crane, in partnership with MDA, has performed audits and assessments of over 50 independent distributors to evaluate their capabilities to detect counterfeit parts. Figure 6 shows examples of tests to detect suspect counterfeit electronic parts. In response to a provision in the 2016 National Defense Authorization Act, DOD officials noted that Naval Surface Warfare Center Crane is also conducting an assessment of the extent to which counterfeit parts have caused failures in fielded systems. This assessment is expected to be completed in 2017. To minimize the risk of counterfeit parts entering its supply chain, DOD is implementing steps to improve its ability to trace electronic parts back to the original manufacturer and through lower supply chain levels. 
DLA officials told us, for example, that they validate the traceability of 100 percent of their contract awards for microcircuits by applying a botanically derived marking to all electronic microcircuits purchased by the agency that are determined to be at high risk for counterfeiting. The marking contains tracking information about the part, such as the supplier, lot number, and other identification codes, all of which can be retrieved with a hand scanner at any point throughout the part’s serviceable life. DLA places the markings on the surface of the microcircuits at a single facility once each part is inspected and its traceability documentation authenticating its origin with the original component manufacturer is confirmed. According to DLA officials, DLA applies the marking to about 85,000 microcircuits a year and is exploring the possibility of expanding the program to other parts that are at high risk for counterfeiting. The Defense Advanced Research Projects Agency is also developing a system to authenticate and track electronic parts throughout the supply chain. The Supply Chain Hardware Integrity for Electronics Defense program is developing a microscopic computer chip that, unlike DLA’s marking program, will be inserted at the original source of the part and, according to contractor officials, would further strengthen authentication. The microchip will contain a unique identifier for authentication and will record the reliability of the part through the chip’s sensors and communications systems. DOD announced that the Defense Advanced Research Projects Agency awarded a development contract for the program in January 2015 and plans to transition the technology to field trials within 3 years, then to industry partners in 4 years once trials are completed. One industry official noted, however, that the success of this program depends upon the willingness of original component manufacturers to implement it. 
A group of federal agencies, including DOD, is working collaboratively to improve the detection and interception of counterfeit parts in the defense supply chain. Specifically, Immigration and Customs Enforcement’s Homeland Security Investigations within the Department of Homeland Security began an initiative in 2011 called Operation Chain Reaction. This initiative is led by the Department of Homeland Security’s National Intellectual Property Rights Coordination Center with a mission to align federal efforts to combat the proliferation of counterfeit goods into the DOD and federal government supply chains. Sixteen federal agencies, including the Defense Criminal Investigative Service, the military investigative services, and the DLA Office of the Inspector General, as well as the Department of Energy, the NASA Office of the Inspector General, and U.S. Customs and Border Protection, are participating in the initiative. Operation Chain Reaction’s partnership has taken several actions that resulted in detections and seizures of counterfeit parts, including one that resulted in the October 2015 sentencing of a man who imported thousands of counterfeit integrated circuits from China and Hong Kong to resell to U.S. customers, including contractors supplying them to the U.S. Navy for use in nuclear submarines. Moreover, in fiscal year 2015, Operation Chain Reaction initiated a pilot program with DLA to validate its current counterfeit prevention practices. By sharing information about DLA inventory with the original manufacturers, this program helps to identify counterfeits in DLA’s current supply and evaluate newly ordered parts for authenticity. As the largest purchaser of electronic parts in DOD, DLA has developed two supplier lists for circumstances in which a part may not be available from an authorized source. 
In 2009, DLA responded to the risk of counterfeit electronic parts by developing the Qualified Suppliers List of Distributors for companies that sell semiconductors and microcircuits. To be listed, suppliers must meet DLA standards for traceability to the original component manufacturer and for part reliability. For instances in which a DLA buyer cannot source a supplier with appropriate authentication credentials or traceability no longer exists, DLA created the Qualified Testing Suppliers List in 2012, a list of semiconductor and microcircuit suppliers that meet DLA-approved testing and other quality assurance standards for the parts. All listed suppliers must meet criteria established by DLA and be subject to onsite audits. Once approved for either program, participants can be subject to random site audits and are audited on a regular basis. According to DLA officials, these audits can occur every 2 to 5 years, based on the perceived risk of the supplier. These lists have 39 and 20 suppliers, respectively. A senior DLA official noted that the development of these lists has allowed DLA to limit its supplier base to certain suppliers while still providing enough suppliers for sufficient competition. The official added that if DLA cannot procure these types of parts from an original component manufacturer, authorized manufacturer, or listed supplier and has to use another distributor, then the part will be subject to product verification testing. In addition, DLA, the Navy, and the Office of the Secretary of Defense are upgrading the Past Performance Information Retrieval System, which serves as a government-wide repository of contractor past performance data, to include counterfeit parts and supplier data in order to identify procurement risk. 
As part of the system’s planned capabilities, it will serve as a repository for contractor and item risk assessments based on information from multiple sources, including PDREP, GIDEP, product testing, and contractor suspension and debarment history. According to DLA and Navy officials we spoke with, this program, once implemented, will incorporate all of these data for analysis and predict the probability that a supplier will introduce counterfeit materiel into the supply chain. The first phase of the enhancements has already been completed, allowing users to identify suppliers that have been excluded or debarred for reasons such as selling counterfeit parts and allowing agencies to flag certain high-risk parts that have been counterfeited in the past. The program is expected to be completed by early fiscal year 2018, according to a Navy official. Initially, DOD will have sole access to the new system, but according to DOD officials, future planned enhancements may include providing access to other federal agencies. The DOD supply chain is vulnerable to the risk of counterfeit parts—which can have serious consequences. To effectively identify and mitigate this risk, DOD and its defense contractors need data on the existence of counterfeit parts in their supply chain, whether suspected or confirmed. Three years after GIDEP reporting became mandatory, we found evidence that this system may not be effective as an early warning system to prevent counterfeit parts from entering the supply chain. Without proper oversight to ensure the reporting requirement is consistently applied, DOD cannot depend on GIDEP data to ensure it is effectively managing the risks associated with counterfeit parts. DOD’s lack of insight into DLA’s reporting practices is particularly problematic, given DLA’s key role in procuring parts for the department. 
Further, without a standardized process for establishing the level of evidence needed to submit suspect counterfeit GIDEP reports, defense agencies—particularly DLA—and contractors have demonstrated a reluctance to report suspect parts, creating a delay in knowledge-sharing and an opportunity for counterfeit parts to be used in defense products. Also, DOD needs to be sure that information in GIDEP about suspect counterfeit parts is reaching industry participants whenever possible, but currently lacks necessary guidance to ensure this occurs. In addition, DOD relies on its prime contractors and subcontractors to have systems in place to detect and avoid counterfeit parts, but DOD has not yet clarified for industry the criteria by which it will assess and monitor those systems. Without providing further clarification about the criteria against which they will be evaluated, DOD cannot effectively empower its prime contractors and subcontractors to perform their critical role in consistently protecting the supply chain from counterfeit parts. Moreover, recent efforts by DOD and the defense industry to improve part traceability and testing are taking shape, but these efforts cannot be appropriately targeted to the greatest risk vulnerabilities without complete data on the existence of counterfeit parts. To provide greater compliance with the GIDEP reporting requirement among the DOD components and their defense supplier-base, we recommend that the Undersecretary of Defense for Acquisition, Technology and Logistics take the following three steps: Establish mechanisms for department-wide oversight of defense agencies’ compliance with the GIDEP reporting requirement. Develop a standardized process for determining the level of evidence needed to report a part as suspect counterfeit in GIDEP, such as a tiered reporting structure in GIDEP that provides an indication of where the suspect part is in the process of being assessed. 
Develop guidance for when access to GIDEP reports should be limited to only government users or made available to industry. To give DOD and contractors greater certainty and consistency in adhering to the requirements for contractor counterfeit detection and avoidance systems, we recommend that the Under Secretary of Defense for Acquisition, Technology, and Logistics: Clarify for industry the criteria by which DOD will assess contractor counterfeit detection and avoidance systems. We provided a draft copy of this report to the Departments of Defense, Energy, Homeland Security, Justice, and Transportation, as well as to the Administrator of the National Aeronautics and Space Administration, for comment. In written comments, DOD concurred with our three recommendations directed at promoting greater compliance with the GIDEP reporting requirement among the DOD components and their defense supplier base. Specifically, DOD plans to issue a new Instruction on GIDEP in fiscal year 2017, covering the identification of roles and responsibilities for submitting GIDEP reports and oversight; the level of evidence needed to report a part as suspect counterfeit in GIDEP; and the use of GIDEP, including guidance for when access to GIDEP reports should be restricted to government only. DOD partially concurred with our recommendation aimed at helping DOD and its contractors adhere with greater certainty and consistency to the requirements for contractor counterfeit detection and avoidance systems. Specifically, DOD stated that it agrees with informing contractors on how their counterfeit detection and avoidance systems will be assessed; however, it does not agree with prescribing specific counterfeit detection and avoidance system implementation details. 
We continue to believe it is important that DOD strengthen its communication with contractors and, as our recommendation indicated, clarify the criteria by which it will assess contractors’ counterfeit detection and avoidance systems, which is different from providing specific implementation details. Standards for Internal Control in the Federal Government states that for an entity to run and control its operations, it must have relevant, reliable, and timely communication related to internal and external events. This includes providing relevant and reliable criteria to contractors so that they can appropriately develop or improve their systems to detect and avoid counterfeit parts and have those systems determined sufficient by DOD. Providing these criteria gives contractors greater visibility into DOD’s expectations. DOD also provided technical comments, which we incorporated as appropriate. DOD’s written comments are reprinted in appendix III. The Departments of Energy and Homeland Security and the National Aeronautics and Space Administration provided technical comments, which we incorporated as appropriate. The Departments of Justice and Transportation did not provide comments for this review. We are sending copies of this report to the appropriate congressional committees; the Secretaries of Defense, Energy, Homeland Security, and Transportation; the Attorney General of the United States; the Administrator of the National Aeronautics and Space Administration; the Under Secretary of Defense for Acquisition, Technology, and Logistics; and other interested parties. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841 or by e-mail at makm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
Key contributors to this report are listed in appendix IV. The report focuses on reporting of counterfeit parts and the detection and avoidance of counterfeit parts in the Department of Defense (DOD) supply chain. Specifically, our objectives were to determine (1) the use of the Government-Industry Data Exchange Program (GIDEP) to report suspect counterfeit parts from fiscal years 2011 through 2015; (2) the effectiveness of GIDEP reporting as an early warning system for counterfeit parts; (3) the extent to which DOD has assessed defense contractors’ systems for detecting and avoiding counterfeit parts; and (4) key ongoing efforts by selected government and industry organizations to improve the detection and reporting of counterfeit or suspect counterfeit parts. We met with DOD officials and reviewed counterfeit mitigation policies and procedures from the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics (USD AT&L) Logistics and Materiel Readiness, Supply Chain Integration and USD AT&L Defense Procurement and Acquisition Policy, as well as the military services and other DOD components, including the Departments of the Army, Navy, and Air Force, the Missile Defense Agency (MDA), Defense Logistics Agency (DLA), Defense Contract Management Agency (DCMA), and the Defense Criminal Investigative Service. We then assessed DOD’s policies, procedures, and practices against criteria in Standards for Internal Control in the Federal Government. To determine the use of GIDEP to report suspect counterfeit parts over the last 5 fiscal years, we obtained the complete GIDEP database for reports entered between October 1, 2010, and September 30, 2015. We analyzed the data to identify GIDEP reports that were categorized as suspect counterfeit and to determine trends in reporting by fiscal year and across the entities that submitted the reports. 
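The tally just described amounts to filtering one report category and counting by fiscal year and submitting entity. A minimal sketch in Python follows; the field names, category label, and records are invented for illustration and do not reflect the actual GIDEP schema:

```python
from collections import Counter

# Hypothetical GIDEP report records; field names and values are
# illustrative only, not the actual GIDEP data structure.
reports = [
    {"fiscal_year": 2011, "category": "suspect counterfeit", "submitter": "contractor"},
    {"fiscal_year": 2011, "category": "product quality", "submitter": "agency"},
    {"fiscal_year": 2013, "category": "suspect counterfeit", "submitter": "agency"},
    {"fiscal_year": 2013, "category": "suspect counterfeit", "submitter": "contractor"},
]

# Keep only suspect counterfeit reports, then tally by fiscal year
# and by the role of the entity submitting the report.
suspect = [r for r in reports if r["category"] == "suspect counterfeit"]
by_year = Counter(r["fiscal_year"] for r in suspect)
by_submitter = Counter(r["submitter"] for r in suspect)

print(dict(by_year))       # {2011: 1, 2013: 2}
print(dict(by_submitter))  # {'contractor': 2, 'agency': 1}
```

The same two tallies, run over the full fiscal year 2011-2015 extract, would yield the reporting trends by year and by supply chain role discussed in this report.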
We assessed GIDEP by reviewing documentation and meeting with GIDEP officials, and determined that the data were sufficiently reliable for our purposes. To understand the trends in GIDEP reporting, we interviewed Air Force, Army, DCMA, DLA, MDA, and Navy officials as well as representatives from selected defense contractors and industry associations. To assess the effectiveness of GIDEP reporting as an early warning system for counterfeit parts, we interviewed Air Force, Army, USD AT&L, DCMA, DLA, MDA, and Navy officials as well as representatives from selected defense contractors and industry associations. In addition, we analyzed data in DOD’s Product Data Reporting and Evaluation Program (PDREP) submitted between October 2010 and August 2015, the most complete data available when we conducted this analysis, to identify product quality reports coded as suspect counterfeit and assess the extent to which these reports overlapped with GIDEP suspect counterfeit reports. We assessed the PDREP data by reviewing documentation and meeting with PDREP officials, and determined that the data were sufficiently reliable for our purposes. Further, we met with officials from the Defense Criminal Investigative Service and the Department of Justice to discuss how ongoing criminal cases may impact timely GIDEP reporting. To assess the extent to which DOD has assessed defense contractors’ systems for detecting and avoiding counterfeit parts, we reviewed Section 818 of the 2012 National Defense Authorization Act, the Federal Acquisition Regulation (FAR), and the Defense Federal Acquisition Regulation Supplement (DFARS) related to detecting, reporting, and mitigating counterfeit electronic parts in the DOD supply chain by defense contractors. We reviewed documents and spoke with officials at DCMA regarding DCMA’s process and criteria for determining the sufficiency of contractors’ systems to detect and avoid counterfeit electronic parts. 
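The cross-database comparison described above, assessing how far PDREP suspect counterfeit reports overlap with GIDEP reports, can be sketched as a set intersection on a shared identifier. The part numbers below are invented for illustration; real reports are not necessarily keyed this way:

```python
# Invented identifiers for suspect counterfeit reports in each system.
gidep_suspect = {"PN-1001", "PN-1002", "PN-1003"}
pdrep_suspect = {"PN-1002", "PN-1004"}

# Parts flagged as suspect counterfeit in both systems.
overlap = gidep_suspect & pdrep_suspect
# Parts flagged in PDREP but with no corresponding GIDEP report,
# one indicator of possible underreporting to GIDEP.
pdrep_only = pdrep_suspect - gidep_suspect

print(sorted(overlap))     # ['PN-1002']
print(sorted(pdrep_only))  # ['PN-1004']
```

The size of the second set relative to the first is what indicates whether suspect counterfeit findings recorded in PDREP are being escalated to GIDEP as required.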
We interviewed seven major defense contractors with awards containing DFARS counterfeit electronic parts language to discuss and examine their policies to detect and avoid counterfeit parts—BAE Systems, Boeing, General Dynamics, Lockheed Martin, Northrop Grumman, Raytheon, and Sikorsky Aircraft. To select these contractors, we obtained data from Defense Procurement and Acquisition Policy identifying all 2014 DOD awards and contract actions containing the DFARS counterfeit electronic parts language and selected the five contractors with the largest dollar value of such actions, as well as two other contractors with smaller, but still significant, total volume. Additionally, for each of these contractors, we non-judgmentally selected one contract from the 2014 dataset, covering a range of award values and products and services, to examine how DOD counterfeit parts requirements for contractors are applied in a variety of situations. In addition, we met with industry associations representing companies from various levels of the defense industry supply chain, including the Aerospace Industries Association, Semiconductor Industry Association, and the Independent Distributors of Electronics Association, to determine how and to what extent they worked with DOD to implement federal regulations for counterfeit mitigation and the impact of regulations related to the detection and avoidance of counterfeit electronic parts. To identify key ongoing efforts by selected government and industry organizations to improve the detection and reporting of counterfeit or suspect counterfeit parts, we reviewed documents and data and contacted officials from defense agencies, including the Defense Advanced Research Projects Agency, DLA Headquarters, and DLA Land and Maritime, as well as other government agencies, such as the National Aeronautics and Space Administration, the Department of Energy, the Department of Homeland Security, and the Department of Transportation. 
We also obtained documents and met with representatives from SAE International and the G-19 Counterfeit Electronic Parts Committee to gain an understanding of the standards and practices being developed to detect and avoid counterfeit parts. We also met with selected defense contractors to discuss actions taken to improve their practices to detect and avoid counterfeit parts, and reviewed data and interviewed a representative from ERAI related to the reporting of potential counterfeit parts. In addition, we visited the product testing facilities at DLA Land and Maritime in Columbus, Ohio, and the Naval Surface Warfare Center in Crane, Indiana. Further, we met with representatives from the Center for Advanced Life Cycle Engineering and attended a symposium about counterfeit parts and materials organized by the Center for Advanced Life Cycle Engineering and the Surface Mount Technology Association in College Park, MD. We conducted this performance audit from January 2015 to February 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Number of Government-Industry Data Exchange Program (GIDEP) Reports by Role in Supply Chain of Entity Submitting Report (Fiscal Years 2011-2015)

Marie A. Mak, (202) 512-4841, or MakM@gao.gov. In addition to the contact named above, Lisa Gardner (Assistant Director), Virginia Chanley, Alexandra Dew Silva, Cynthia Grant, Kurt Gurka, Stephanie Gustafson, Ashley Orr, Scott Purdy, Matt Shaffer, Roxanna Sun, and Robert Swierczek made key contributions to this report.
The DOD supply chain is vulnerable to the risk of counterfeit parts, which have the potential to delay missions and ultimately endanger service members. To effectively identify and mitigate this risk, DOD began requiring its agencies in 2013, and its contractors in 2014, to report data on suspect counterfeit parts. A Senate report included a provision for GAO to review DOD's efforts to secure its supply chain from counterfeit parts. This report examines, among other things, (1) the use of GIDEP to report counterfeits, (2) GIDEP's effectiveness as an early warning system, and (3) DOD's assessment of defense contractors' systems for detecting and avoiding counterfeits. GAO analyzed data from GIDEP for fiscal years 2011 through 2015; reviewed DOD policies, procedures, and documents; and met with agency officials and seven contractors, selected based on the dollar value of contracts that included a new counterfeit clause. Department of Defense (DOD) agencies and contractors submitted 526 suspect counterfeit parts reports in the Government-Industry Data Exchange Program (GIDEP) from fiscal years 2011 through 2015. These were submitted primarily by contractors. Defense agency and contractor officials explained that congressional attention to counterfeit parts in 2011 and 2012 led to increased reporting, and that the lower number of reports in more recent years is partly the result of better practices to prevent the purchase of counterfeit parts.

[Figure: Number of Suspect Counterfeit Reports for Fiscal Years 2011–2015]

Several aspects of DOD's implementation of its mandatory GIDEP reporting for suspect counterfeit parts have limited GIDEP's effectiveness as an early warning system. First, DOD is not conducting oversight to ensure that defense agencies are reporting as required. As a result, the Defense Logistics Agency (DLA), for example, may be underreporting suspect counterfeit parts in GIDEP. 
Second, there is no standardized process for establishing how much evidence is needed before reporting suspect counterfeit parts in GIDEP, and DLA applies a significantly more stringent standard than, for example, the Navy. Consequently, reports may not be submitted in a timely manner. Third, defense agencies typically limit access to suspect counterfeit GIDEP reports to government agencies, so industry is not aware of the potential counterfeiting issues identified. DOD policy does not include guidance about when access to these reports should be limited. All seven contractors GAO spoke with have established systems to detect and avoid counterfeit electronic parts; however, DOD has not finalized how these systems will be assessed. Contractors are seeking additional clarification on how to meet some of DOD's requirements. Until DOD clarifies criteria for contractors on how their systems will be evaluated, it cannot fully ensure these systems detect and avoid counterfeit electronic parts, as required. GAO recommends that DOD oversee its defense agencies' reporting efforts, develop standard processes for when to report a part as suspect counterfeit, establish guidance for when to limit access to GIDEP reports, and clarify criteria to contractors for their detection systems. DOD agreed with the three recommendations on GIDEP reporting but partially agreed with the recommendation to clarify criteria, stating it did not agree with providing specific implementation details. GAO continues to believe that clarifying criteria is important and is different from providing specific implementation details.
A number of areas on the President’s Management Agenda are consistent with issues highlighted by our work on the High Risk Program, our annual reports on fragmentation, overlap, and duplication, and other work related to long-standing management challenges. Over the years, we have made hundreds of recommendations to address these issues. The current and prior administrations have taken actions to address many of these recommendations, and have made progress in many areas. Much more, however, remains to be done. Lasting solutions to remaining issues offer the potential to save billions of dollars, dramatically improve service to the American public, and strengthen public confidence and trust in the performance and accountability of our national government. Examples of where the President’s Management Agenda and our work are consistent include: Using information technology (IT) to better manage for results. The government invests about $80 billion annually in IT. Improving the transparency of about 700 major IT investments with the IT Dashboard can help focus attention on troubled projects. In addition, holding executive reviews, known as TechStat sessions, of selected investments that are not producing results has resulted in positive outcomes such as accelerated delivery, reduced scope, and termination. We have made recommendations to improve the accuracy and use of the IT Dashboard and for the Office of Management and Budget (OMB) and agencies to hold more TechStat sessions. OMB has generally concurred with our Dashboard recommendations and has taken actions such as improving the accuracy of the reported investment cost and schedule data. OMB also agreed with our recommendation to hold more TechStat sessions and stated that OMB and the agencies were taking appropriate steps to meet that recommendation. 
Other IT initiatives such as PortfolioStat and Data Center Consolidation can eliminate duplicative investments and close hundreds of centers, resulting in billions in savings. For example, we recently reported that the PortfolioStat initiative has the potential to save between $5.8 billion and $7.9 billion. We have made multiple recommendations to OMB and agencies to more fully implement and report on eliminating duplicative and inefficient IT investments. OMB agreed with some of these recommendations and subsequently clarified its guidance on how agencies should identify potentially duplicative investments. Agencies have also generally agreed with our recommendations and taken steps such as conducting portfolio reviews to identify duplicative investments and report those results via the IT Dashboard. Addressing improper payments. The federal government serves as the steward of taxpayer dollars and should safeguard them against improper payments. The President’s Management Agenda is consistent with our prior reporting that predictive analytic technologies can help agencies better identify and prevent improper payments. Further, OMB reported that it plans to develop more detailed categories of improper payments, which can help agencies tailor corrective action plans to better address the root causes of improper payments. In fiscal year 2013, estimated governmentwide improper payments totaled approximately $106 billion; however, this may not cover the full extent of improper payments throughout the federal government. In order to determine the full extent of improper payments governmentwide and to more effectively reduce and recover them, continued attention is needed to (1) adopt sound risk assessment and improper payment estimation methodologies and (2) develop effective corrective action plans and preventive and detective controls to address the root causes of improper payments. Expanding strategic sourcing. 
One area that could yield significant cost savings is the expanded use of strategic sourcing, a process that moves away from numerous individual procurements to a broader aggregate approach. Our work has found that federal agencies could better leverage their buying power and achieve additional savings by directing more procurement spending to existing strategic sourcing contracts and further expanding strategic sourcing practices to their highest spending procurement categories. For example, most agencies’ efforts do not address their highest spending areas such as services. We estimated that savings of one percent from selected large agencies’ procurement spending alone would equate to over $4 billion. In that regard, the President’s Management Agenda calls on federal agencies to expand the use of strategic sourcing to better leverage the government’s buying power and reduce contract duplication. It did not, however, lay out specific governmentwide metrics or savings goals. We had previously recommended that OMB establish additional metrics to measure progress toward goals. OMB has efforts underway to address this recommendation. Strengthening strategic human capital management. Consistent with the President’s Management Agenda goal to attract and retain a talented workforce, foster a culture of excellence, and invest in the Senior Executive Service (SES), we have reported that addressing complex challenges such as homeland security, economic stability, and other national priorities requires a high-quality workforce able to work seamlessly with other agencies, levels of government, and across sectors. Strategic human capital management has been on our High Risk List since 2001. Since then, as a result of actions taken by Congress, the Office of Personnel Management, and individual agencies, important progress has been made. 
Still, additional efforts are needed in such areas as human capital planning, building results-oriented cultures, and talent management, such as (1) addressing government-wide and agency-specific skill gaps and enhancing workforce diversity, (2) strengthening performance management systems to improve the “line of sight” between individual performance and organizational outcomes, and (3) fully assessing the costs and benefits of SES training. Improving the Department of Defense’s (DOD) weapons systems and services acquisition. The President’s Management Agenda is consistent with our findings and recommendations on improving DOD’s acquisition of weapon systems and services, issues that have been on GAO’s High Risk List since the 1990s. DOD has made some progress in this area. Over the past several years it has decreased the size of its major defense acquisition program portfolio as well as its estimated total cost; however, programs continue to experience cost growth over time. DOD has launched its “Better Buying Power” initiatives to achieve more efficiency and reduce cost growth. We have tracked implementation of some of these initiatives and found that DOD has largely been successful in implementing its “should-cost” effort to lower contract prices during negotiations and has reported near-term cost savings as a result. DOD has had less success in implementing affordability constraints—which limit a program’s total cost throughout its lifecycle—an initiative that has the potential for long-term savings if implemented effectively. Similarly, we have found that DOD has made mixed progress in improving its acquisition of services. DOD leadership has demonstrated a commitment to improving service acquisitions and management, but the department’s efforts are hindered, in part, by limited knowledge and baseline data on the current state of service acquisitions and the absence of goals and metrics to assess its progress. 
We have ongoing reviews to help improve the efficiency of DOD’s weapon system acquisition process and the effectiveness of its portfolio management practices that we believe will further the administration’s and Congress’ efforts in this area. Lasting success in addressing the difficult and longstanding issues on the President’s Management Agenda will hinge on effective implementation, including sustained top leadership attention. For example, our work has shown that there are five key factors that are essential to resolving high-risk issues: (1) a demonstrated strong commitment to, and top leadership support for, addressing the problems; (2) the capacity to address problems; (3) a corrective action plan; (4) a program to monitor corrective measures; and (5) demonstrated progress in implementing corrective measures. Top administration officials have continued to show their commitment to ensuring that significant management challenges, including those on the High Risk List, receive attention and oversight. OMB regularly convenes meetings for agencies to provide progress updates on high-risk issues. GAO and OMB have agreed to hold a series of meetings on the issues on GAO’s High Risk List. The purposes of these meetings are to discuss progress achieved and specifically focus on actions that are needed to fully address high-risk issues and ultimately remove them from the list. These meetings typically include OMB’s Deputy Director for Management, agency leaders, and myself, and they have provided a useful forum for constructive and productive dialogues. The President’s Management Agenda also commits to making continued progress in managing for results. In that regard, our work has shown that progress has been made in implementing the GPRA Modernization Act of 2010 (GPRAMA). For example, the executive branch has taken a number of steps to implement key provisions of GPRAMA. 
OMB has developed cross-agency priority goals, and agencies have developed agency priority goals. Agency officials reported that their agencies have assigned performance management leadership roles and responsibilities to officials who generally participate in performance management activities, including quarterly performance reviews for agency priority goals. Further, OMB developed Performance.gov, a government-wide website that provides quarterly updates on cross-agency priority goals and agency priority goals. While the building blocks needed for implementation are being put in place, much more needs to be done before the provisions of the act are fully useful to decision makers, as shown in the following examples. Executive branch efforts to address crosscutting issues are hampered by the lack of a comprehensive list of programs—a key requirement of the act. As we have noted, such a list is critical for aligning federal government efforts and identifying potential fragmentation, overlap, or duplication among federal programs or activities. GPRAMA requires OMB to compile and make publicly available a comprehensive list of all federal programs identified by agencies and to include the purposes of each program, how it contributes to the agency’s mission, and recent funding information. OMB began implementing this provision by directing 24 large federal agencies to develop and publish inventories of their programs in May 2013. Our preliminary review of these initial inventories identified concerns about the usefulness of the information being developed and the extent to which it might be able to assist executive branch and congressional efforts to identify and address fragmentation, overlap, and duplication. OMB’s guidance for developing the inventories provided agencies with flexibility to define their programs, such as by outcomes, customers, products/services, organizational structure, and budget structure. 
As a result, agencies took various approaches to define their programs—with many using their budget structure while others used different approaches such as identifying programs by related outcomes or customer focus. The variation in definitions across agencies will limit comparability among like programs. In addition, as reported in our annual reports on fragmentation, overlap and duplication, we have found that federal budget and cost information is often not available or not sufficiently reliable to identify the level of funding provided to programs or activities. For example, agencies could not isolate budgetary information for some programs because the data were aggregated at higher levels. OMB identified 12 different program types (e.g., block grants, regulatory, credit) for agencies to assign to their programs; however, the list of program types does not include tax expenditures, which represent a substantial federal commitment. OMB does not yet have definitive plans on when this effort will be expanded beyond the current 24 agencies to cover all other agencies and programs. We plan to further explore these issues and report on potential ways that the federal program inventory might be improved going forward later this spring. Collaboration across agencies, levels of government, or sectors is fundamental to addressing many high-risk issues and reducing fragmentation, overlap, and duplication. In one example, we have noted that better coordination among the more than 30 federal agencies that collect, maintain, and use geospatial information could help reduce duplication of investments and provide the opportunity for potential savings of millions of dollars. 
In another example, the Department of Veterans Affairs and DOD operate two of the nation’s largest health care systems, together providing health care to nearly 16 million veterans, service members, military retirees, and other beneficiaries at estimated costs for fiscal year 2013 of about $53 billion and $49 billion, respectively. As part of their health care efforts, the departments have established collaboration sites—locations where the two departments share health care resources through hundreds of agreements and projects—to deliver care jointly with the aim of improving access, quality, and cost-effectiveness of care. However, we found that the departments do not have a fully developed and formalized process for systematically identifying all opportunities for new or enhanced collaboration, potentially missing opportunities to improve health care access and quality, and reduce costs. Many collaborative mechanisms, such as interagency groups and specially created interagency offices, do not operate as effectively as they could. These mechanisms face challenges with issues such as identifying a common outcome and managing resources across agency lines. Our work has found practices and corresponding effective implementation approaches that collaborative mechanisms have used to work effectively across agency lines. For example, we have found that practices such as agreeing on roles and responsibilities, with corresponding accountability for both the agency and the individual participants; creating an inventory of agency resources dedicated to interagency outcomes; developing outcomes that represent the collective interests of participants; and developing performance measures that are tied to shared outcomes can help enhance and sustain collaboration. 
OMB’s 2013 guidance implementing GPRAMA directs agencies, beginning in 2014, to conduct annual reviews of progress towards strategic objectives—the outcomes or impacts the agency is intending to achieve. Agency leaders are responsible for assessing progress on each strategic objective established in the agency’s strategic plan. Effective implementation could help identify and address fragmentation, overlap, and duplication issues because as part of the strategic reviews, agencies are to identify the various organizations, programs, regulations, tax expenditures, policies, and other activities that contribute to each objective both within and outside the agency. Where progress in achieving an objective is lagging, the reviews are intended to identify strategies for improvement, such as strengthening collaboration to better address crosscutting challenges. If successfully implemented in a way that is open, inclusive, and transparent—to Congress, delivery partners, and a full range of stakeholders—this approach could help decision makers assess the relative contributions of various programs that contribute to a given objective. Successful strategic reviews could also help decision makers identify and assess the interplay of public policy tools that are being used, to ensure that those tools are effective and mutually reinforcing, and results are being efficiently achieved. Our annual reports on fragmentation, overlap and duplication have also highlighted several instances in which executive branch agencies do not collect necessary performance data. In an example from our 2011 annual report, we noted that a lack of information on program outcomes for economic development, where four agencies administer 80 programs, was a longstanding problem. We suggested that the four agencies—the Departments of Commerce, Housing and Urban Development, and Agriculture and the Small Business Administration—collect accurate and complete information on program outcomes. 
As of March 2013, the four agencies had taken actions to begin to collect better data on program performance. Moreover, our June 2013 report on GPRAMA implementation found that agencies continue to face long-standing issues with measuring performance, such as obtaining complete, timely, and accurate performance information across various programs and activities. In one example, we reported in June 2013 on two Federal Emergency Management Agency (FEMA) grant programs that collect performance information and feed the resulting data into a higher-level Department of Homeland Security (DHS) goal. We found that data were self-reported by recipients and that FEMA had varied and inconsistent approaches to verifying and validating the data. We recommended that FEMA ensure that there are consistent procedures in place to verify and validate grant performance data. DHS, of which FEMA is a part, concurred with the recommendation. Given the Performance Improvement Council’s responsibilities for addressing crosscutting performance issues and sharing performance improvement practices, our June 2013 report noted that it could do more to examine and address the difficulties agencies face in measuring performance across various program types, such as grants and contracts. We recommended that OMB work with the Performance Improvement Council to develop a detailed approach for addressing these long-standing performance measurement issues. OMB staff agreed with this recommendation. Even in instances where agencies are collecting performance information, our periodic surveys of federal managers between 1997 and 2013 have found little improvement in managers’ reported use of performance information to improve results. However, agencies’ quarterly performance reviews of progress on their priority goals—which began at most agencies in 2011 under GPRAMA—show promise as a leadership strategy for improving the use of performance information in agencies. 
Of the 12 percent of federal managers who both responded to our survey and reported they were very familiar with these reviews, 76 percent agreed that their top leadership demonstrated a strong commitment to using performance information to guide decision making to a great or very great extent. In addition, according to our 2012 survey of performance improvement officers at 24 agencies, the majority (21 out of 24 agencies required to conduct these reviews) reported that actionable opportunities for performance improvement were identified through the reviews at least half the time.

To operate as effectively and efficiently as possible and to make difficult decisions to address the federal government’s fiscal challenges, Congress, the administration, and federal managers must have ready access to reliable and complete financial and performance information—both for individual federal entities and for the federal government as a whole. Overall, significant progress has been made since the enactment of key federal financial management reforms in the 1990s; however, our February 2014 report on the U.S. government’s consolidated financial statements underscores that much work remains to improve federal financial management, and these improvements are urgently needed. In that report, we concluded that certain material weaknesses in internal control over financial reporting and other limitations on the scope of our work resulted in conditions that prevented us from expressing an opinion on the accrual-based consolidated financial statements as of and for the fiscal years ended September 30, 2013, and 2012.
Three major impediments prevented us from rendering an opinion on the federal government’s accrual-based consolidated financial statements: (1) serious financial management problems at DOD that have prevented its financial statements from being auditable—about 33 percent of the federal government’s reported total assets as of September 30, 2013, and approximately 16 percent of the federal government’s reported net cost for fiscal year 2013 relate to DOD, which received a disclaimer of opinion on its consolidated financial statements; (2) the federal government’s inability to adequately account for and reconcile intragovernmental activity and balances between federal entities; and (3) the federal government’s ineffective process for preparing the consolidated financial statements.

In addition to the material weaknesses underlying the three major impediments, we identified other material weaknesses which resulted in ineffective internal control over financial reporting for fiscal year 2013. These weaknesses are the federal government’s inability to (1) determine the full extent to which improper payments occur and reasonably assure that appropriate actions are taken to reduce them, (2) identify and timely resolve information security control deficiencies and manage information security risks on an ongoing basis, and (3) effectively manage its tax collection activities.

There are also risks that certain factors could affect the federal government’s financial condition in the future, including the following: The U.S. Postal Service (USPS) is facing a deteriorating financial situation with a lack of liquidity as it has reached its borrowing limit of $15 billion and finished fiscal year 2013 with a reported net loss of $5 billion. The Federal Housing Administration’s (FHA) mortgage insurance portfolio continues to grow, and its insurance fund has experienced major financial difficulties.
FHA’s capital ratio for its Mutual Mortgage Insurance Fund remained below the required 2 percent level as of the end of fiscal year 2013. The ultimate roles of the Federal National Mortgage Association (Fannie Mae) and the Federal Home Loan Mortgage Corporation (Freddie Mac) in the mortgage market may further affect FHA’s financial condition. The Pension Benefit Guaranty Corporation’s (PBGC) financial future is uncertain because of long-term challenges related to PBGC’s governance and funding structure. PBGC’s liabilities exceeded its assets by about $36 billion as of September 30, 2013. PBGC reported that it is subject to further losses if plan terminations that are reasonably possible occur. GAO’s High Risk List includes several of these issues, such as information security, USPS’s business model, DOD financial management, and the PBGC and FHA insurance programs.

Increased attention to risks that could affect the federal government’s financial condition is made more important because of the nation’s longer-term fiscal challenges. The administration’s long-term fiscal projections—and our own long-term federal fiscal simulations—show that, absent policy changes, the federal government continues to face an unsustainable long-term fiscal path. The oldest members of the baby-boom generation are already eligible for Social Security retirement benefits and for Medicare benefits. Under the administration’s projections—and our simulations—spending for the major health and retirement programs will increase in coming decades as more members of the baby-boom generation become eligible for benefits and the health care cost for each enrollee increases. Over the long term, the imbalance between revenue and spending built into current law and policy will lead to continued growth of debt held by the public as a share of Gross Domestic Product (GDP). This situation—in which debt grows faster than GDP—means the current federal fiscal path is unsustainable.
Reliable financial and performance information is even more critical as (1) federal managers likely face increasingly tight budget constraints and need to operate their respective entities as efficiently and effectively as possible and (2) decision makers carry out the important task of deciding how to use multiple tools (tax provisions, discretionary spending, mandatory spending, and credit programs) to address the federal government’s fiscal challenges. Similarly, ongoing attention is needed to address issues identified in our annual reports on fragmentation, overlap, duplication, and potential cost savings and revenue enhancements. Of the 162 areas that we have identified in our annual reports, 19 (12 percent) have been fully addressed, 111 (69 percent) have been partially addressed, and 31 (19 percent) have not been addressed. More specifically, of the approximately 380 actions identified in our annual reports, 87 (23 percent) have been fully addressed, 187 (49 percent) have been partially addressed, and 104 (28 percent) have not been addressed as of December 2013. Our reports and GAO’s Action Tracker provide details for each of the issues, describing the nature of the problems, what actions have been taken to address them, and what remains to be done to make further progress.

While agencies have continued to make progress, important opportunities have yet to be pursued. The details in our reports, along with successful implementation by agencies and continued oversight by Congress, can form a solid foundation for progress to address risks, improve programs and operations, and achieve greater efficiencies and effectiveness. In 2012, OMB collected information from the responsible agencies on the steps they had taken to address our suggested actions. To ensure sustained leadership attention on these actions, OMB also asked the performance improvement officers from responsible agencies to monitor the progress being made.
GAO and OMB staff meet throughout the year to discuss the issues identified by our work and the extent to which the administration is working to address the issues. These meetings have been helpful in monitoring progress. However, given that issues of fragmentation, overlap, and duplication often involve multiple agencies, the discussions need to be elevated to include more senior officials who have the responsibility and authority for resolving the crosscutting issues identified.

In addition to financial management and widespread fragmentation, overlap, and duplication issues, the federal government must address pressing challenges with its cybersecurity. As computer technology has advanced, federal agencies and our nation’s critical infrastructures such as power distribution, water supply, telecommunications, and emergency services have become increasingly dependent on computerized information systems and electronic data to carry out operations and to process, maintain, and report essential information. The security of these systems and data is essential to protecting national security, economic prosperity, and public health and safety. We have reported that (1) cyber threats to systems supporting government operations and critical infrastructure were evolving and growing, (2) cyber incidents affecting computer systems and networks continue to rise, and (3) the federal government continues to face challenges in a number of key aspects of its approach to cybersecurity, including those related to protecting the nation’s critical infrastructure. For these reasons, federal information security has been on GAO’s list of high-risk areas since 1997; in 2003, we expanded this high-risk area to include cyber critical infrastructure protection. The federal government has taken a variety of actions that are intended to enhance federal and critical infrastructure cybersecurity.
For example, the government issued numerous strategy-related documents over the last decade, many of which addressed aspects of the challenge areas we identified. The administration also took steps to enhance various cybersecurity capabilities, including establishing agency performance goals and a tracking mechanism to monitor performance in three cross-agency priority areas. In February 2013, the President issued Presidential Policy Directive 21 on critical infrastructure security and resilience and Executive Order 13636 on improving critical infrastructure cybersecurity. Improving these capabilities is a step in the right direction, and their effective implementation can enhance federal information security and the cybersecurity and resilience of our nation’s critical infrastructure. However, more needs to be done to accelerate the progress made in bolstering the cybersecurity posture of the nation and federal government. The administration and executive branch agencies need to implement the hundreds of recommendations made by GAO and agency inspectors general to address cyber challenges, resolve known deficiencies, and fully implement effective information security programs. Until then, a broad array of federal assets and operations will remain at risk of fraud, misuse, and disruption, and the nation’s most critical federal and private sector infrastructure systems will remain at increased risk of attack from our adversaries. Congress is considering several bills that are intended, if enacted into law and effectively implemented by the executive branch, to improve cyber information sharing and the cybersecurity posture of the federal government and the nation.

In closing, our nation’s long-term fiscal challenges underscore the need for the federal government to operate in an efficient and effective manner.
To do so, the federal government must address a number of significant management and governance challenges—many highlighted by our High Risk List and our annual reports on fragmentation, overlap, and duplication. Our work has also highlighted a variety of approaches the executive branch and Congress could take to resolve these issues moving forward. In doing so, it is vital that both branches of government demonstrate the sustained leadership commitment needed to address these challenges. Given the crosscutting nature of many of these challenges, it will be particularly important for OMB to play a leadership role in the executive branch.

Chairman Carper, Ranking Member Dr. Coburn, and Members of the Committee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time.

For further information regarding this testimony, please contact J. Christopher Mihm, Managing Director, Strategic Issues, at (202) 512-6806 or mihmj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The federal government is one of the world's largest and most diverse entities, with about $3.5 trillion in outlays in fiscal year 2013, funding an extensive array of programs and operations. Moreover, it faces a number of significant fiscal, management, and governance challenges in responding to the varied and increasingly complex issues it seeks to address. This statement focuses on (1) GAO's work related to the President's Management Agenda, and (2) additional opportunities for decision makers to address major management challenges. This statement is primarily based upon our published and ongoing work covering GAO's High Risk List; fragmentation, overlap, and duplication reports; and managing for results work. The work upon which these published reports and preliminary findings were based was conducted in accordance with generally accepted government auditing standards. GAO has made numerous recommendations to OMB and executive branch agencies in these areas and reports in this statement on the status of selected key recommendations.

A number of areas on the President's Management Agenda are consistent with issues highlighted by GAO's work on the High Risk Program, its annual reports on fragmentation, overlap, and duplication, and other work related to long-standing management challenges. These include, for example: using information technology to better manage for results; addressing improper payments; expanding strategic sourcing; strengthening strategic human capital management; and improving the Department of Defense's weapon systems and services acquisitions. Lasting success in addressing the difficult and long-standing issues on the President's Management Agenda will hinge on effective implementation, including sustained top leadership attention.
GAO and the Office of Management and Budget (OMB) have agreed to hold a series of high-level meetings on the issues on GAO's High Risk List to discuss progress and actions that are needed to fully address high-risk issues. Further, the executive branch has taken a number of steps to implement key provisions of the GPRA Modernization Act by developing cross-agency and agency priority goals; assigning performance management roles and responsibilities to leadership; conducting agency quarterly performance reviews; and developing Performance.gov, a website that provides quarterly updates on the priority goals. However, additional opportunities exist for decision makers to address major performance management challenges, including, for example:

Developing a comprehensive inventory of federal programs. GAO's preliminary review of the program inventories produced by 24 large federal agencies identified concerns about the usefulness of the information provided in these inventories for addressing crosscutting issues.

Enhancing the use of collaborative mechanisms. Addressing many of the challenges government faces requires collaboration across agencies, levels of government, or sectors. Yet the mechanisms the federal government uses to collaborate do not always operate effectively.

Effectively implementing strategic reviews. Starting in 2014, agency leaders are to annually assess how relevant organizations, programs, and activities, both within and outside of their agencies, are contributing to progress on their strategic objectives and identify corrective actions where progress is lagging. Such reviews could help address fragmentation, overlap, and duplication issues.

Improving capacity to gather and use better performance information. GAO's work has found that federal decision makers often lack complete and reliable performance data needed to address the government's management challenges.
Furthermore, the administration needs to accelerate progress in (1) addressing major impediments preventing GAO from rendering an opinion on the U.S. government's consolidated financial statements and risks to the government's future financial condition; (2) elevating top leadership attention to the areas identified in our annual reports on fragmentation, overlap, and duplication; and (3) responding to pressing challenges with its cybersecurity, such as evolving cyber threats to systems supporting government operations and critical infrastructure. Congress also has key roles in addressing each of these issues.